
Understanding the Software Defined Data Center of Today

  Mark Gabryjelski     Mar 07, 2019

The software defined data center (SDDC) has been adopted by many companies since 2012. The concept involves virtualizing the components most critical to data center operations, working to simplify and combine three areas: compute, storage, and networking. SDDC can:

  • Run virtual machines for any workload
  • Take advantage of Fibre Channel and iSCSI-based block storage
  • Quickly and easily communicate between workloads

Since then, new technologies and capabilities have been created. More companies have also been realizing and taking advantage of software defined capabilities.

Where are most organizations in their SDDC journeys?

The big three in software defined are servers (compute), storage, and networking. Virtually every company has server virtualization. Storage virtualization is not as universal, but a substantial population is taking advantage of it, often combining compute and storage virtualization with vSAN and putting two-thirds of the journey behind them. For that group, the networking component is the only missing piece of a fully realized SDDC strategy.

Keep in mind that there is a flip side as well. Many organizations have already implemented the compute and network components but haven't taken the additional step of virtualizing their storage. We have customers that are one-third of the way there, customers that are two-thirds of the way there, and a select group that is fully on board with SDDC.

What holds organizations back?

The main thing holding organizations back from a fully software defined approach is fear. Organizations often have a tough time accepting a big shift in the way something as integral as IT operations is orchestrated. This is nothing new, though. The same hesitations existed years ago when server virtualization was in its infancy. There were admins who worried they would lose their jobs because there wouldn't be any physical servers to manage. Well, that never came true, right? Fully embracing SDDC is now the technology innovation that server virtualization once was.

A change in the way operations have historically been handled raises concerns. If storage has always been treated and managed as a separate entity, with a separate storage team to send requests to, and it works, why should organizations risk breaking it? This thought alone can stall or stop many from moving forward, especially when there is an initial investment and associated training required to make a technology switch feasible.

It also isn’t that the traditional methods don’t work, because they do. But how slow are they? How many processes, and how many steps in those processes, have been created to make those traditional methods feasible? A combined server and storage admin can carve out storage and be ready to go in half an hour to an hour, while siloed organizations might take one to three days to reach the same point. Both ways get the job done, but one has a much more simplified process, and that process is what technology partners can help organizations realize.

What skills should a company invest their time in to manage SDDC?

Storage administrators need to understand, or at least be able to work with, the server virtualization side of SDDC. While anyone can right-click and hit “configure storage,” storage admins already have the background knowledge and skills needed to ensure things run smoothly. For instance, storage administrators are aware of:

  • Performance requirements
  • Replication
  • How many redundant copies the business requires when building out these solutions

In a VMware environment, this is handled through Storage Policy-Based Management. What was previously done at the storage array level can now be done via policy. Administrators can make changes as needed without dealing with configuration on a per-LUN basis, and can then determine which virtual machines need to adhere to which policies. That is a critical component of stepping up to vSAN.
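To make the policy idea concrete, here is a minimal, purely illustrative sketch in Python of how policy attributes such as redundancy and performance limits might be modeled and attached per virtual machine instead of per LUN. The class names, attributes, and values are hypothetical; real policies are created in vCenter or through VMware's SPBM tooling (PowerCLI, pyVmomi), not with this code.

```python
# Illustrative only: a simplified model of the Storage Policy-Based Management idea.
# Names and attributes are hypothetical, not VMware's actual API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StoragePolicy:
    name: str
    failures_to_tolerate: int         # how many redundant copies the business requires
    raid_level: str                   # e.g. "RAID-1" (mirroring) or "RAID-5" (erasure coding)
    iops_limit: Optional[int] = None  # optional performance cap

@dataclass
class VirtualMachine:
    name: str
    policy: StoragePolicy             # the policy travels with the VM, not with a LUN

# Define policies once, then attach them per virtual machine as needed.
gold = StoragePolicy("gold", failures_to_tolerate=2, raid_level="RAID-1")
silver = StoragePolicy("silver", failures_to_tolerate=1, raid_level="RAID-5", iops_limit=5000)

vms = [
    VirtualMachine("sql-prod-01", gold),    # business critical: more redundancy
    VirtualMachine("web-test-03", silver),  # test workload: capacity efficient
]

for vm in vms:
    p = vm.policy
    print(f"{vm.name}: {p.raid_level}, FTT={p.failures_to_tolerate}, IOPS limit={p.iops_limit}")
```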

Network teams are still required because they move packets around the environment, but traditional network operations can be slow to respond. Since SDDC is all about agility, organizations without vSAN or NSX are limiting themselves: they don't have the programmatic access needed to adjust quickly. When you bring those three components together and virtualize compute, storage, and network, you have a Software-Defined Data Center and all the benefits that go along with it.

The fastest way to put yourself clearly ahead with new technologies is to pursue additional training and certifications in the area. For instance, there are VMware certifications focused on data center virtualization, server virtualization, network virtualization, vSAN specialization, and more. Many IT teams likely already have much of the skill set these certifications cover, but they act as a great way to measure yourself.

Hyperconverged infrastructure’s role in SDDC

Hyperconverged infrastructure is a route many organizations take when working to software define their IT strategy. HCI is essentially composed of servers with internal drives, giving organizations server virtualization and the capability of storage virtualization. Organizations running hyperconverged infrastructure with VMware are using vSAN, and VxRail customers get it all packaged up in a physical hardware appliance. Keep in mind, though, that hyperconverged does not cover networking in any way, shape, or form. This means hyperconverged only paints two-thirds of the picture SDDC strives for, leaving a need for software defined networking.

Many organizations taking advantage of hyperconverged infrastructure understand they are still missing that networking component. They are ready for that next step, but adding the network component and the abstraction and security around it creates hesitation for some. Cost, timing, and the classic concern of someone on the IT team losing their job when a new solution is introduced are common themes that arise. Again, these are the same reservations that were held years ago when server virtualization was introduced.

Physical switches, traffic controls, and preferred pathing still exist. All the important parts of a network team, and their decisions on the architecture of how traffic moves throughout the environment, are still required. Hyperconverged infrastructure can simplify agile development workloads and allow DR to span multiple locations.

When to start considering an SDDC solution

Network virtualization is the easiest place to start. It allows teams to use NSX to define policies per virtual machine, or by attributes of a virtual machine, rather than per physical port or per IP address. You can still do it those ways, but if you really want to make security seamless across any cloud, define policies that travel with the virtual machines between data centers or SDDC offerings. The same level of security can be maintained with the same tool sets and skill sets. This distributed firewall is micro-segmentation, and it is the biggest benefit companies can see on day one of their transition.
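As a rough illustration of what "policies by VM attribute" looks like in practice, the sketch below uses Python and REST calls modeled on the NSX-T Policy API to group virtual machines by tag and attach a distributed firewall rule to that group. The manager address, credentials, group name, and tag are made up, and the endpoints and payload shapes are assumptions drawn from NSX-T 3.x documentation, so verify them against your own NSX version before using anything like this.

```python
# A minimal sketch of attribute-based firewalling, modeled on the NSX-T Policy API.
# Endpoints, payloads, and the tag format are assumptions; check your NSX version's docs.
import requests

NSX = "https://nsx-manager.example.com"   # hypothetical manager address
AUTH = ("admin", "example-password")      # use a proper credential store in practice

# 1. Group VMs by a tag (an attribute of the VM), not by IP address or switch port.
group = {
    "display_name": "web-tier",
    "expression": [{
        "resource_type": "Condition",
        "member_type": "VirtualMachine",
        "key": "Tag",
        "operator": "EQUALS",
        "value": "tier|web",              # assumed "scope|tag" format
    }],
}
requests.patch(f"{NSX}/policy/api/v1/infra/domains/default/groups/web-tier",
               json=group, auth=AUTH, verify=False)

# 2. A distributed firewall rule that follows those VMs wherever they run.
policy = {
    "display_name": "web-tier-policy",
    "category": "Application",
    "rules": [{
        "display_name": "allow-https-to-web",
        "source_groups": ["ANY"],
        "destination_groups": ["/infra/domains/default/groups/web-tier"],
        "services": ["/infra/services/HTTPS"],
        "action": "ALLOW",
    }],
}
requests.patch(f"{NSX}/policy/api/v1/infra/domains/default/security-policies/web-tier-policy",
               json=policy, auth=AUTH, verify=False)
```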

Then, when IT teams start looking at NSX for abstracting out DR scenarios or shifting workloads between data centers, that's where the virtualization of networks really starts to be realized. It's the distributed firewall and the security component first, which you can realize almost the same day you install it. All of a sudden everyone's like, "Well, we have this ability to virtualize the networks. Let's go ahead and start looking at how we're going to consume that."

The next piece is storage. If an organization wants to do vSAN, or software-defined storage in that fashion, the servers and storage both need to be purchased at the same time, or you have to retrofit. Retrofitting older servers presents many challenges, which makes a hardware refresh the perfect time to start planning software and hardware upgrades with vSAN ReadyNodes on the path to SDDC.
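For planning purposes, a back-of-the-envelope estimate like the one below can help frame how much raw capacity a set of vSAN ReadyNodes needs in order to deliver a usable target. The overhead factors are simplified assumptions (RAID-1 mirroring at FTT=1 roughly doubles consumed capacity, and keeping roughly 30% slack space is a common recommendation); real designs should be validated with VMware's sizing tools.

```python
# Rough, illustrative capacity math for planning a vSAN ReadyNode purchase.
# Overhead factors are simplified assumptions; use VMware's sizing tools for real designs.

def usable_capacity_tb(nodes: int, raw_tb_per_node: float,
                       replica_factor: float = 2.0, slack: float = 0.30) -> float:
    """Estimate usable capacity after protection overhead and slack space."""
    raw = nodes * raw_tb_per_node
    return raw / replica_factor * (1 - slack)

# Example: four hypothetical ReadyNodes with 20 TB of raw capacity each.
print(f"~{usable_capacity_tb(4, 20):.1f} TB usable")  # ~28.0 TB with FTT=1 mirroring
```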

SDDC is marked by a change in process and ownership that brings teams together, and that can be the hardest part. The technology works; the challenge is the teamwork required across what we used to consider separate silos. Typically, an IT Director, IT Manager, VP of Infrastructure, CTO, or another similar position is given the core responsibility of bringing the IT teams together into one unit.

As companies make the transition to SDDC, all those tools and teams start working together. They see the same capacity alerts and share the same performance planning. These teams can model workloads independently of each other or simply work together in the same tool. SDDC brings the IT department and data center teams together.

Conclusion

SDDC isn’t confined to any one industry; it is growing in popularity universally. Regardless of whether the first step is software-defined networking or software-defined storage, the other is usually adopted in the near future. Talking to a trusted IT solutions provider can help assess and assist with any technology change, whether major or minor.

Next Steps: VMware vSAN is your complete data storage solution that offers you clarity, reliability, and certainty in an unpredictable, disruptive world. Learn more by downloading our white paper, “A 360-Degree View of the Agile VMware vSAN Platform.”

Download our white paper: 360 Degree View of VMware vSAN

Tags  SDDC IT Strategy hyperconvergence data center modernization

Written by Mark Gabryjelski

Mark Gabryjelski, VCDX #23, leads up the Virtualization Practice here at WEI where he identifies, validates, and introduces new technologies that our customers can use to simplify and control their data center operations. Mark works with clients across all industries in the design, implementation and support of solutions that enable our customers’ consumption of virtualization technologies. Mark is an author and also conducts several of the customer training sessions in the WEI Knowledge Transfer Center (KTC).
