Happy Holidays, Y’all!

As the old year closes up shop and a new one struts in, many of us are in the habit of making a list of the things we’d like to do differently this time around. New Year’s Resolutions aren’t always realized, but they are based on the intention to make a change for the better, and that alone can make them pretty useful. Why not take a little time to reflect on your data center operations and storage and set the course for how you’d like to improve them in the near future?

After scores and scores of conversations with CIOs in 2018, it became clear to us that IT leadership teams are looking for new approaches to not only make IT better, but make the core business better too. So, our 2019 resolutions for storage are based on looking not only at performance and operational data, but also at the time, dollars, and pace of innovation and execution that can be achieved. In short, we’ll help you “Do the Math” (DTM). Here are a few ideas to get things started:

1. Be 100% clear – It’s time for Scale-Out Storage

Put simply, there is no scale in scale-up storage.

We hear it everywhere we go, from CIO counseling offices to industry events to the media: scale-out storage is coming to a theatre near you. Even steady-Eddie analyst firm Gartner says scale-out will dominate the IT scene, doubling from 40% adoption in 2018 to 80% by 2022. Why? Too much up-front capacity and cost stacked in racks of gear ill-suited to the reality of data today and the applications of the future. The “enterprise data center” is no longer a single place; it is a network of hub-and-spoke data centers that must stay in sync with one another. Yet scale-up storage was built for the old world. In our conversations we hear an overwhelming resolve to get ahead of the curve this year – to scale just-in-time and pay-as-you-grow. But to be ahead of the curve you’ll need to get moving now, before the industry tips past 50% adoption, so it remains high on the list for 2019.

2. From #1 Contender to Inmate #45472 – Get out of Storage-Jail

Take advantage of the mass movement to server-based storage.

We hear time and again that CIOs are sitting in storage jail. They may be held captive today by the typical legacy providers and array-based architectures of the past, but many are making the switch to a better future. As The Register’s Chris Mellor noted from the latest IDC market data, server-based storage continues to pull away from array-based storage even in a raging economy, growing at 5 times the rate of its old-school counterpart. But this break from the past is just the first salvo, so it’s essential to get past the marketing fluff and know what you are getting into. In other words, are you buying freedom from a hardware agenda, or are you just buying another type of proprietary appliance? Fundamentally, true software-defined storage is just that – software – designed to enable vendor, storage media, and node choice, not the sad excuse of a thinly-disguised appliance business. There is no get-out-of-jail-free card, but your peers are resolving to cut loose.

3. It’s all about the Benjamins – Save more!

To achieve real savings, revisit your estimated expenses.

Saving more money is a staple of any New Year’s Resolution list, and so it is for the CIOs we talk to. But for CIOs tasked with maximizing savings, we understand how hard that is: walk a day in their shoes and try to name a vendor that doesn’t say its solution will save money. Some CIOs are now bringing CFOs into their operations to help them parse it all. What many are finding is that the math fundamentally changes when adopting a server-based infrastructure, because expected server lifetimes and flash media lifetimes change. One CIO we’ve worked with revised the common 3-year depreciation cycle used for old-school arrays to the 5 years or more appropriate for server-based storage. Now let’s do the math:

  • Say you spend $10M on array-based storage and depreciate it over 3 years. That represents $3.3M in cost per year.
  • Then, say you spend 70% less on server-based storage, or $3M, which is what our customers tell us they achieve, and depreciate it over 5 years. That would be a cost of $600K per year.
  • When you combine the CAPEX total with the revised 5 yr. depreciation cycle, you end up with an apples-to-apples savings of over 80% per year. How’s that sound?

And, if the well-documented challenges in flash media fabrication get solved in 2019 and manifest in lower prices, your savings will be even greater.
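
Since the theme is “Do the Math,” here is a quick back-of-the-envelope sketch of that calculation in Python. The dollar figures and depreciation periods are the illustrative numbers from the bullets above, not a quote:

```python
# Back-of-the-envelope comparison of annualized storage CAPEX.
# The figures are the illustrative numbers from the bullets above, not a quote.

def annual_cost(capex: float, depreciation_years: int) -> float:
    """Straight-line depreciation: spread the CAPEX evenly over its useful life."""
    return capex / depreciation_years

array_per_year = annual_cost(10_000_000, 3)   # legacy array: $10M over 3 years
server_per_year = annual_cost(3_000_000, 5)   # server-based: 70% less, over 5 years

savings = 1 - server_per_year / array_per_year

print(f"Array-based:  ${array_per_year:,.0f} per year")     # ~$3,333,333
print(f"Server-based: ${server_per_year:,.0f} per year")    # $600,000
print(f"Apples-to-apples savings: {savings:.0%} per year")  # ~82%
```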

4. Run the numbers every month on your Public Cloud bills and measure up

Long live the Public Cloud, but don’t get screwed by it.

About every other week, a CIO slides his or her AWS bill across the table to us and says, “I had no idea, but I do know this is really high” – which usually means the usage of their data was opaque and the egress fees racked up. Most CIOs have already incorporated the public cloud into their go-forward plan, but for 2019 we see more of them calculating which apps and what data should move back on-premises and, for the load staying in the public cloud, which public cloud is going to deliver against cost and performance objectives. Speaking of calculating, Enterprise Strategy Group found that fully 35% of apps and data returning from the cloud land on a software-defined storage infrastructure (see Resolution #2).
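
If you want to start running those numbers monthly, a sketch like the one below is enough to get going. The per-GB rates and volumes are placeholders we made up for illustration, not any provider’s list prices; substitute the line items from your own bill:

```python
# Minimal monthly sanity check on public cloud storage spend.
# The rates and volumes are illustrative placeholders, not list prices.

STORAGE_RATE_PER_GB = 0.023  # assumed $/GB-month for object storage
EGRESS_RATE_PER_GB = 0.09    # assumed $/GB for data transferred out

def monthly_cloud_bill(stored_gb: float, egress_gb: float) -> dict:
    storage = stored_gb * STORAGE_RATE_PER_GB
    egress = egress_gb * EGRESS_RATE_PER_GB
    return {"storage": storage, "egress": egress, "total": storage + egress}

bill = monthly_cloud_bill(stored_gb=500_000, egress_gb=200_000)
print(f"Storage: ${bill['storage']:,.0f}  Egress: ${bill['egress']:,.0f}  "
      f"Total: ${bill['total']:,.0f}")

# If egress is a large share of the total month after month, that is the
# signal to ask which apps and data belong back on-premises.
```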

5. Working 8 days a week? – Automate!

Automate everything with machine learning, or fill in for the machines on Saturday night.

Shouldn’t you be “working for the weekend”, not working on the weekend? Applications and data centers need to work 24×7, not the entire IT team. So to overcome the demands that IT teams are facing, CIOs are committing themselves to automate everything in their environments. Automating storage no longer simply means automating basic tasks and hand-holding application developers and end-users. Instead, it means enabling self-provisioning and administration by the developers and users themselves. It also allows you to move faster because you can change and improve the system as you get smarter. Automation enables you to take back your weekends and spend them as you please (if you are an obsessive-compulsive type, work away, but work on something else).

6. 1-2-3-4, get your data off the floor

Containers plus QoS = next generation SDDC. Get moving toward a Container future.

If we are to believe published reports on the adoption of container technologies, a slim majority of companies will have deployed containers in parts of their environments by the close of 2019. Many of these deployments will come at the expense of Microsoft and VMware, which helps explain VMware’s recent, exorbitant acquisition of Heptio. CIOs not wanting to be left behind in 2019 are moving fast toward container technology leaders like Docker, Kubernetes, and Mesosphere, and toward private infrastructure designed from the ground up to come up fast, persist the data, and serve those new containerized apps. This is why Datera enables multiple storage classes with different qualities of service – for example, K8s Platinum, K8s Gold, and K8s Silver with user-defined performance and resiliency – to make adoption of containers easy, without carving off a new island from the virtual machine and bare metal server mainland.
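
To give a feel for what QoS-differentiated storage classes can look like, here is a minimal sketch. The tier names follow the Platinum/Gold/Silver example above, but the IOPS, latency, and replica figures are hypothetical illustrations, not Datera product specifications, and the picker function is ours, not a product API:

```python
# Illustrative sketch of QoS-differentiated storage classes for containerized apps.
# Tier names mirror the Platinum/Gold/Silver example above; the performance and
# resiliency numbers are hypothetical, not product specifications.

QOS_TIERS = {
    "k8s-platinum": {"max_iops": 100_000, "latency_ms": 1,  "replicas": 3},
    "k8s-gold":     {"max_iops": 50_000,  "latency_ms": 3,  "replicas": 3},
    "k8s-silver":   {"max_iops": 10_000,  "latency_ms": 10, "replicas": 2},
}

def pick_storage_class(required_iops: int, required_latency_ms: float) -> str:
    """Return the lowest tier that still meets the app's requirements."""
    for name in reversed(list(QOS_TIERS)):  # try silver, then gold, then platinum
        tier = QOS_TIERS[name]
        if tier["max_iops"] >= required_iops and tier["latency_ms"] <= required_latency_ms:
            return name
    raise ValueError("No tier satisfies these requirements")

print(pick_storage_class(required_iops=30_000, required_latency_ms=5))  # k8s-gold
```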

7. 5-6-7-8-9 subtract the pain, your size is fine

HCI works for some apps, but apps that change don’t work with HCI.

There is more than one path to the Software Defined Data Center. HCI offers one, and its most attractive quality is its easy, out-of-the-box setup. What it lacks, however, is the ability to become a real SDDC platform and serve a broad set of applications. HCI scale-out bricks are identical and only provide a narrow band of price/performance. Even those that profess the ability to independently scale the compute, network, or storage tiers remain too rigid and compromised. Most Datera use cases require low latency, but with the ability to change dynamically as apps come and go or need to scale and grow. HCI promises a middle-of-the-curve solution, which won’t get it done for the vast majority of applications and simply becomes another legacy, siloed infrastructure that ushers in a new era of hyper vendor lock-in. Most CIOs are trying to avoid vendor lock-in and are looking to generate new options for their data center environments. Heterogeneity is not a problem – it is your advantage. This is why Datera focuses on right-sizing storage for each application within a shared environment – all running on a customer’s choice of hardware.

8. Count to 10 and Cut the Cord

Yeah, we’re finally talking about you, Fibre Channel

iSCSI has proven itself more than capable of carrying all storage traffic, so it no longer makes sense to expand Fibre Channel-powered SANs. Fibre Channel remains enormously expensive from both a CAPEX and an OPEX perspective, but no CIO we talk to wants to take a step backward on availability and fault isolation. That’s why it is essential to focus on how the new dawn of storage delivers continuous availability through planned and unplanned events – including upgrades and expansion – plus fault tolerance with fault zones. If you stop and do the math on availability, you won’t get lost in old-school storage’s habit of saying the number 9 over and over in an attempt to bamboozle you.
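
When someone starts reciting nines at you, one line of arithmetic translates them into downtime you can actually reason about:

```python
# Translate "nines" of availability into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    return (1 - availability) * MINUTES_PER_YEAR

for label, availability in [("three 9s", 0.999), ("four 9s", 0.9999), ("five 9s", 0.99999)]:
    print(f"{label}: {downtime_minutes_per_year(availability):,.1f} minutes of downtime per year")

# three 9s: ~525.6 minutes (~8.8 hours)
# four 9s:  ~52.6 minutes
# five 9s:  ~5.3 minutes
```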

Well, there you have it – our Top 8 list of resolutions for CIOs looking to tune up their software-defined data centers in 2019. Why not set aside some time this week to write down your own data storage resolutions? Better yet, share yours with us – we’d love to hear what you think of ours, too. Best wishes for a wonderful New Year from all of us here at Datera. Do The Math and Geek Out on Data with us in 2019!

Make your New Year’s Resolution

Want to transition to a software-driven data environment with enterprise-class performance and reduce your storage infrastructure’s total cost of ownership by as much as 70%? Contact Us to schedule a free consultation.