Tear up the old sheet music and improvise – the Software Defined Data Center is out on a World Wide Tour
The software-defined data center (SDDC), like the spirit of the beloved psychedelic band the Grateful Dead, is alive and well and headed out on the road, despite what you may have last heard. The SDDC, for those who missed the inaugural tour, was envisioned to meet the requirements of our constantly changing data centers. By using a common, intelligent software layer built for change, it promised to create an efficient, flexible infrastructure for applications new and old on commodity hardware.
One area where the SDDC could not quite get the show on the road was the data and storage capabilities needed to ignite IT crowds. The SDDC we all want must be able to quickly and efficiently provision, move and scale IT services across network segments and data centers, and into the cloud. While reliable software-defined technologies for the compute and networking layers of the IT stack have been available for some time, software-defined storage offerings had not delivered the performance, features or policy-based provisioning required. A fresh perspective was sorely needed.
Dead Houses, Communal Living and Changing the Rules
“Gonna leave this brokedown palace on my hands and my knees,
I will roll, roll, roll.” – Brokedown Palace
If you check the US housing market stats, you’ll find that somebody is buying houses – but it’s not Gen Y, the millennials. For the next generation, the rules have already changed. Shared space in co-ops like the “Dead houses” of Palo Alto (all named after Grateful Dead songs), or the co-living projects springing up in bigger cities like Google’s planned “don’t-call-them-dormitories” in San Jose, has become highly desirable. Sure, affordability has a lot to do with it, but so does a preference for community, diversity and being around folks who are not inclined to just accept the status quo.
Like group houses, data centers are steeped in diversity and the need to change. They are home to a variety of workloads, ranging from traditional database applications run on physical servers to virtual and container-based applications, along with vast amounts of structured and unstructured data. The SDDC needs storage that can support modern cloud-native applications alongside legacy applications and operating environments, all in a communal way.
There are vendors that specialize in each area of storage for different app compositions, like virtual machines, containers and bare metal. And there are specialists in every kind of storage media, like all-flash, NVMe, hybrid and rotating rust. But what do you do if you need all of them at once, now? In the SDDC, these features and functions must be API-driven, and many if not all of the capabilities are needed simultaneously. So far, the only way to get them has been to assemble a band from a handful of vendors, but that just creates in-fighting when the show starts. Without a universal software hub to run the storage hardware and meet the requirements of the entire data center, adding storage becomes an expensive, do-it-yourself, one-off project for each app and performance need. That generates a mountain of disparate gear that will break even the strongest roadie’s back and won’t produce a good gig.
Playing in the Band
“The wheel is turnin’ and you can’t slow down,
you can’t let go and you can’t hold on,
you can’t go back and you can’t stand still” – The Wheel
Why hasn’t storage been able to keep time with the networking and compute layers of the SDDC? Well, storage requirements tend to change as the data center modernizes and becomes more and more software-centric. But traditional storage arrays weren’t built for change; they were built for static requirements. And if you ask array-based storage to change, the reply, as the Dead put it in “Uncle John’s Band,” is “it’s the same story the crow told me, it’s the only one he knows.” In the dynamic world of the SDDC, array-based storage keeps singing the same old tune rather than improvising as the set list changes or one player riffs on lead.
SDDC compute instances live and die with containers and are associated with stateless apps. But storage has to deliver data to the application efficiently, accurately and affordably, often long after the containers have moved on to the next tour stop. It should also have a way to make choices and respond to ever-changing conditions, incorporating new technologies as they become available.
So, not to sound like a disgruntled bandmate searching for the limelight, but compute’s job is to start an app and process some data, and networking is responsible for moving the data where it needs to go; storage has a more complicated progression to deliver behind the scenes to keep the place in rhythm. Besides keeping tabs on each byte, the system has to get data to the application when asked, do it quickly and at a reasonable price, and respond to whatever new conditions appear. While software-defined storage was relegated to lower-class joints for a long time, generation two of software-defined storage is now ready to headline.
Deal
“Since it costs a lot to win, and even more to lose,
You and me bound to spend some time wond’rin’ what to choose.” – Deal
Okay, so the Dead probably weren’t talking about moving to a software-defined data center. Nonetheless, it is indeed a gamble. Some software-defined offerings have improved mightily, but it took a new class of players, free from the “hardware heads” and curators of the storage museum, to play the hand and build a platform worthy of your time. But they aren’t all the same, and to be worth the bet, your choice had better give you:
- Freedom to use heterogeneous servers from multiple vendors and media from multiple generations of innovation, mixing and matching them as you grow and mature.
- Enterprise-class performance to efficiently serve and store the data generated by high-performance apps, running on high-performance media.
- A robust set of data services, since any cover band can strip its software bare in pursuit of speed and hand you a system without the features you need to win.
- Application-driven data orchestration and automation, so that you can easily set requirements like performance, resiliency, security and efficiency for a diverse set of applications, then forget about them, leaving delivery of that service to the software for all forms of compute – especially containers and Kubernetes, since those top the bill these days (see the sketch after this list).
- Applied machine learning to deliver predictive operations, whereby the system continuously analyzes the environment from top to bottom to anticipate and eradicate potential problems, responding with not just analysis but action, like moving data automatically, without requiring any manual tuning.
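To make the orchestration bullet concrete, here is a minimal sketch of what policy-based provisioning can look like on Kubernetes, using the standard Python client to create a StorageClass. The provisioner name and the policy parameter keys below are hypothetical placeholders, not any particular vendor’s API; a real CSI driver defines its own parameters.

```python
# A minimal sketch of policy-based provisioning on Kubernetes, assuming a
# CSI driver that accepts policy hints as StorageClass parameters. The
# provisioner name and parameter keys are hypothetical placeholders.
from kubernetes import client, config


def create_policy_storage_class():
    config.load_kube_config()  # use the local kubeconfig for cluster access

    storage_class = client.V1StorageClass(
        api_version="storage.k8s.io/v1",
        kind="StorageClass",
        metadata=client.V1ObjectMeta(name="gold-tier"),
        # Hypothetical CSI provisioner; substitute your driver's name.
        provisioner="csi.example-sds-vendor.com",
        # Hypothetical policy parameters: the application declares intent
        # (resiliency, performance, security) and the storage software
        # decides placement and media.
        parameters={
            "replicas": "3",
            "iopsPerGb": "10",
            "encryption": "true",
        },
        reclaim_policy="Delete",
        volume_binding_mode="WaitForFirstConsumer",
        allow_volume_expansion=True,
    )
    client.StorageV1Api().create_storage_class(storage_class)


if __name__ == "__main__":
    create_policy_storage_class()
```

Once a class like this exists, any persistent volume claim that references it gets storage matching the declared policy, with no per-application provisioning project.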
Because enterprises are constantly changing, data centers will always have a mix of legacy, current and brand-new apps. To be ready to improvise on tunes yet to be written, you have to be able to operate at scale across a complete data center environment with one systematic, enterprise-wide approach – one built from the ground up to automate and optimize constantly, so your software-defined data center is ready for the new technologies that are bound to come. If you’re going to move to a software-defined data center, be sure to choose software-defined storage that gives you a comprehensive foundation for doing so.
Life after Death – You Know You’re Gonna Shine
“California, I’ll be knocking on the golden door.
Like an angel, standing in a shaft of light,
Rising up to paradise, I know I’m gonna shine.” – Estimated Prophet
Nothing stays the same forever. As the Dead said, “Like the morning sun you come, and like the wind you go… Where does the time go?” Data that was once important enough to be stored close to its application is now banished to some distant glacier. Today’s group-house residents may soon run off to a “tiny home,” mobile home, condo in the city or commune in Marin. Things are going to change, and the way we respond will be different as well. Just don’t count on seeing an uptick in traditional single-family homes and minivans this go-round.
At Datera, we took an approach built for change right from the beginning. Our typical customer runs a multitude of applications simultaneously for a diverse set of user groups, needs to serve them all at their desired pace and expects that those needs will change. And the set list only gets longer; it never scales back.

At its core, Datera uses a storage virtualization approach based on a simple principle: store and protect data in a present context and a future context at the same time. The present context deals with storage needs right now – where to store and protect your data for the variety of applications and deployment styles you have today. The future context is a mechanism to handle future needs and changes of any kind, allowing our sophisticated control path to project a future state of the system. The Datera platform then takes a systematic set of actions, like caching on a node, in a cluster or across geographic regions, automatically working toward that future state. All this happens while keeping the data consistent, correct and continuously available. The platform also uses this data to suggest the appropriate type of server as your environment grows, and upon installation it automatically rebalances the load to use that new horsepower or capacity, aligning the right applications and tenants to it to achieve their desired quality of service.
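As a purely illustrative sketch – not Datera’s actual implementation – you can think of the present/future split as placing a volume for the load it carries today while projecting the load it will carry later, so the system can start moving data before it hits a wall. Every tier name, number and the growth model below is a made-up assumption.

```python
# Illustrative only: place a volume for its present load (present context)
# and project where it will need to live later (future context).
from dataclasses import dataclass


@dataclass
class VolumePolicy:
    name: str
    iops_target: int      # present context: what the app needs now
    growth_rate: float    # observed growth, as a fraction per month


# Hypothetical media tiers and the IOPS each can sustain per volume.
TIERS = {"nvme": 100_000, "flash": 20_000, "disk": 500}


def place(policy: VolumePolicy, months_ahead: int = 6) -> tuple[str, str]:
    """Pick a tier for now, and the tier the projected load will need."""
    projected = int(policy.iops_target * (1 + policy.growth_rate) ** months_ahead)

    def tier_for(iops: int) -> str:
        # Choose the cheapest tier that still meets the IOPS requirement.
        for name, limit in sorted(TIERS.items(), key=lambda kv: kv[1]):
            if iops <= limit:
                return name
        return "nvme"

    return tier_for(policy.iops_target), tier_for(projected)


now_tier, future_tier = place(VolumePolicy("orders-db", 8_000, 0.25))
print(now_tier, future_tier)  # flash now; migrate toward nvme as load grows
```

The point of the design is that the system acts on the projection – migrating data in the background toward the future state – rather than waiting for an administrator to notice the band has outgrown the venue.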
Datera’s programmable approach to software-defined storage removes the obstacles to building an enterprise-class SDDC, running high-performance application environments at global scale with crowd-cheering orchestration and automation.
“The wheel is turnin’ and you can’t slow down.” The requirements of the modern data center just keep rolling along, but with Datera serving as your data hub, your SDDC will shine this time around and will be ready to rock for generations of innovation to come.
Ready to Rock On?
If you are ready to transition to a software-driven data environment with enterprise-class performance and continuous availability, running on standard server-based storage, and to reduce your storage infrastructure’s total cost of ownership by as much as 70%, Contact Us to schedule a free consultation.