Sometimes success comes from simply sticking it out. Take, for example, the famous WD-40®, named for its Water Displacement formula, perfected on the 40th try. Or bubble wrap, a textured wallpaper that morphed into home insulation before finally finding its niche as packing material. In each case, these now-famous products were failing, and the people who made them used what they had learned to change them. Rather than give up on a good idea, they focused on transforming it. The same thing happens with new technologies, especially those that are disruptive and promise to solve big problems.

With data centers handling massive amounts of data, and traditional storage architectures, teams and budgets pushed to their limits, software-defined storage (SDS) emerged as an antidote to data silos that are too complex to scale, maintain and afford. The goal of software-defined storage is to provide the speed and agility needed to quickly provision, move and scale IT services across network segments, data centers and the cloud, all transparently and independent of the physical infrastructure underneath. That’s a tall order, and the approach has suffered more than a few setbacks. The need to modernize the data center for current workloads persists, but until recently SDS companies were unable to offer a viable solution. Let’s take a look at what ailed them.

Lessons from the SDS Graveyard

I have met them all: dozens of software-defined storage companies, in every phase of life, some just starting out, a few finding solace in a true-believer customer, and many looking to sell a sinking ship. Unable to meet user expectations, they became what we in the business call “the walking dead.” The industry is now littered with the headstones of SDS companies that could not get enough traction to survive in the market.

Being close to the corporate development groups during my time at Hewlett Packard and Western Digital afforded me the opportunity to meet nearly all of the storage startups of the last 20+ years. Among the very capable trail-blazers were luminaries like Mark Lewis (former Formation Data, now Violin), whom I credit with much of my career success; I was Mark’s CTO at Compaq when he ran the data storage business. And yet, even companies like these came and went.

Smart people, perhaps too smart, love to build it better. But what does better mean? Too often it means zeroing in on the technology and losing sight of the customer’s needs. The chasm arises because many storage visionaries assume customers think as they do and will measure success the same way. I have personally attended the software-defined storage school of hard knocks: FaStor Technologies, lights on in 1994, lights off in 1998. And I have watched brilliant storage experts graduate from it with honors.

And from all of this experience, I learned a few key lessons I would like to share with those software-defined storage companies trying to break through:

  1. Keep your mind on the customer: Customers have multi-faceted data environments composed of a raft of applications that span compute, networking and storage, and transcend physical locations. Within the storage layer, the capabilities, configurations and media are always changing, and the software-defined storage platform must account for all of it. Fundamentally, your software needs to run on the customer’s “standard” server. Many customers have approved server standards that meet various business and technical criteria, and those standards evolve over time with the customer’s business. The key lesson is that running your SDS stack on an “industry standard” server is necessary but not sufficient: your software must run on the customer’s ever-evolving list of standard servers.
  2. Get it done: Do boring to perfection. It is quite simple in theory but very difficult in practice: data integrity, data availability and performance commensurate with cost. Store some data; get the same data back. Store the data on something expensive; give it back really fast. When fast is not important, it is because cost is. But in all cases, store it, and when asked, give it back, correctly. You would be surprised how many SDS stacks missed on at least one of these points, and missing any of them equals certain death.

    I remember speaking with the team at a large bank in Manhattan about a new approach for an application. The IT leader listened grudgingly and replied, “Okay, I’m interested. This better work so that you don’t make me haul myself back from Jersey over the weekend to recover it.” Failing to do the job not only hurts the business; it can cost real people their time, their reputations and even their jobs.

  3. Simplicity rules: Cool products and features go unused all the time because they are too hard to extract value from, too operationally complex to use, or a poor fit for the customer’s infrastructure or organization. I can’t tell you how many times an SDS startup has told me that if customers would only change their application, they could get some new and valuable capability. My advice has been consistent for more than 20 years: if you force a change to the customer’s application, workflows or organization, you will hit an insurmountable barrier. Storage people like cool storage products. Customers do too, but not when it requires rip, replace and rewrite from the application on down.
  4. It’s time for programmable storage: This is the big one. When a customer says they want software-defined storage, it is typically a cry for help. What the customer is really saying is that they don’t know what kind of storage they need, how fast their data is growing, or how much performance they should have. And even if they knew on day one, it would change on day two. What they need, not just want, is storage that is programmable. I don’t just mean application programming interfaces to help glue things together; it runs much deeper than that. They need a storage system that can be created, grown, managed and modified without human involvement, that is, entirely by software. Even more important, the storage the system manages must be provisioned to an application, protected, recovered and enhanced programmatically, or better yet, automatically and on demand by the application itself (see the sketch after this list). And when the customer gets smarter, which happens every day, they can programmatically and automatically improve the system to meet their business needs.
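To make that concrete, here is a minimal sketch of what storage looks like when the application can drive it directly. Everything in it is hypothetical: the host, endpoints and field names are placeholders for whatever API a real platform exposes. The point is that provisioning, growth and protection become ordinary calls an application makes on its own behalf.

```python
# Hypothetical REST API for programmable storage. The host, paths and
# field names below are illustrative placeholders, not a real vendor API.
import requests

API = "https://sds.example.com/v1"
HEADERS = {"Authorization": "Bearer <token>"}  # auth elided for brevity

def provision_volume(name: str, size_gb: int, iops_target: int) -> str:
    """Create a volume sized and tuned by the application itself."""
    resp = requests.post(f"{API}/volumes", headers=HEADERS, json={
        "name": name,
        "size_gb": size_gb,
        "policy": {"iops": iops_target, "replicas": 3},
    })
    resp.raise_for_status()
    return resp.json()["id"]

def grow_volume(volume_id: str, new_size_gb: int) -> None:
    """Day-two change: expand capacity as data growth demands."""
    requests.patch(f"{API}/volumes/{volume_id}", headers=HEADERS,
                   json={"size_gb": new_size_gb}).raise_for_status()

def snapshot_volume(volume_id: str, label: str) -> None:
    """Protection on demand, triggered by the application, not a ticket."""
    requests.post(f"{API}/volumes/{volume_id}/snapshots", headers=HEADERS,
                  json={"label": label}).raise_for_status()

# The application owns its storage lifecycle end to end:
vol = provision_volume("orders-db", size_gb=500, iops_target=20000)
snapshot_volume(vol, "pre-migration")
grow_volume(vol, new_size_gb=750)
```

Whether a human runs this as a script or the application calls it in reaction to its own load, the interface is the same, and that is the point: no ticket, no console, no weekend drive back from Jersey.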

Moving Forward

In playing hopscotch through the SDS graveyard, I’ve learned why so many software-defined storage companies fail. The objective of SDS is to move the storage system’s intelligence out of the hardware and into a layer of software that sits above it, managing the storage infrastructure as a single entity. To be worthwhile, it has to free IT from the specialized interface protocols and configuration complexities of individual storage units and subsystems. It needs to determine automatically how fast the data is growing, what kind of storage is in use and what level of performance is required, and then make it happen. And finally, the economics have to make sense.
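As a thought experiment, the closed loop that implies might look like the toy sketch below: observe each volume, expand what is filling up, promote what is too slow, with no administrator in the path. The Volume model, thresholds and tier names are illustrative stand-ins, not any real inventory or control API.

```python
# A toy reconciliation loop for "determine what the data needs, then make
# it happen." All names here are hypothetical illustrations of the pattern.
from dataclasses import dataclass

HEADROOM = 0.80      # act before a volume is 80% full
EXPAND_FACTOR = 1.5  # grow in 50% steps, ahead of demand

@dataclass
class Volume:
    name: str
    provisioned_gb: float
    used_gb: float
    latency_ms: float
    latency_target_ms: float
    tier: str = "disk"

def reconcile(volumes: list) -> None:
    """One pass of the loop: expand what is filling, promote what is slow."""
    for v in volumes:
        if v.used_gb / v.provisioned_gb > HEADROOM:
            v.provisioned_gb = round(v.provisioned_gb * EXPAND_FACTOR)
        if v.latency_ms > v.latency_target_ms and v.tier != "flash":
            v.tier = "flash"  # move hot data to faster media

fleet = [Volume("orders-db", provisioned_gb=500, used_gb=450,
                latency_ms=12.0, latency_target_ms=5.0)]
reconcile(fleet)
print(fleet[0])  # expanded to 750 GB and promoted to flash
```

In a real platform the same loop would drive actual placement, protection and media decisions across the fleet, and the economics follow: capacity and performance are bought only when the data demands them.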

Software-defined storage companies die because they either get these basics wrong, don’t deliver the value as advertised or aren’t thinking about tomorrow’s problems while delivering on today’s.

To succeed, SDS must:

  • Fit the customer’s data center needs – starting with running on the customer’s servers: their choice, their servers.
  • Get the basics right – deliver performance and be reliable.
  • Have compelling and accessible value – automation and simplicity drive out unnecessary work, along with the cost and frustration that accompany it.
  • Fit into the customer’s environment with the ability to program the construction of the system; it’s about adapting to change.
  • Offer the ability to program the service of data to a diverse set of applications; it’s still about adapting to change.

Keep in mind that there were 39 failed attempts before the winning formula for WD-40 became a toolbox staple. At Datera, we have been fortunate enough to learn from, and build on, the lessons of the past. In an upcoming blog we will delve into the broader goal our customers have of achieving a software-defined data center, and how Datera’s programmable approach is central to that macro goal.

Want to learn more? Or, if you are ready to achieve high performance with continuous availability on commodity hardware and reduce your storage infrastructure’s total cost of ownership by as much as 70%, contact us to schedule a free consultation.