Nicky Nichols from Netflix’s Orange is the New Black (OITNB) pretty much summed up what is happening right now in the typical enterprise data center when she said “You know, another layer of icing on a sh!tcake doesn’t make it taste good.” These clean rooms full of aisles, racks and blinking lights hold lots of old stuff (aka digital junk): older-generation solutions past their prime. But slapping technologies on top like monitoring engines and dashboards, swapping in new boards and drives, or deploying a fleet of new admins to write and execute scripts every day to tune it all up is often just another layer of icing…and the cake still doesn’t taste right. Nicky later gave in to the weight of the present, saying “all we can do is make the most of right now,” which seems like a cop-out to me, and a sign that more icing is on the way.

But you don’t just have to settle. You can make things right for now and for the future. New studies indicate that half of Global 2000 enterprises are doing just that in the data center, looking to adopt and harness containers by the end of 2019 and set a course for a containerized future. They identify infrastructure as the #1 challenge to overcome to make that container future a reality. And this is where Datera comes into play, bringing the software-defined revolution to enterprise data the way containers, and specifically Kubernetes, bring it to application environments. Kubernetes is revolutionizing the compute layer. In the world of container orchestration, Kubernetes is the new black.

“The second you’re perceived as weak, you already are.” – Red

So just like we do with applications, let’s first turn our attention to the compute layer and ask the question on everybody’s mind: should we start planning the funeral for VMs? The answer: not so fast. We know VMs are the standard in the enterprise data center and cloud computing environments and they aren’t going away anytime soon. But perception is reality these days, and these same studies show that a majority of enterprises are developing their new, cloud-native applications on containers at the expense of VMware and the other players that round out the virtual machine world.

This transition is being driven by the positive economics and increased velocity of innovation that result when DevSecOps, CI/CD, containers, and Kubernetes are leveraged to ease development, assembly, deployment, and portability of applications from one environment to another. Containers are simply lighter, more mobile, and less taxing on the infrastructure and IT budgets. So, no funeral yet, but it’s time to find an appropriate resting place for when the traveling hands of time close in. At Datera, we’re quite happy to see mixed environments of bare metal, virtual machines and containers because we support them all with equal vigor.

“Look, I still got some time left here, but I’m getting out eventually, and it feels like it’s time to start focusing on that. Make sure I have a plan.” – Pennsatucky

But if containers offer so much benefit now, why aren’t we all already enjoying a container future? Glad you asked. There are two main challenges: first, containers don’t come ready to deploy at scale without help; second, containers heap a new set of requirements on the existing data center infrastructure stack that legacy platforms are often incapable of handling. Taken together, and despite VMware’s claim to the software-defined data center (SDDC) mantle, containers are the ultimate highway to the SDDC of the future, which requires changes in approach, technology choices and skill sets up and down the IT stack.

“You’re one Cheerio in the bulk box of life.” – Nicky

Who knew Nicky would so aptly describe a container without an orchestrator? At the compute layer, the runtime APIs for containers are designed to manage just one container on one host, not multiple containers deployed across multiple hosts and data centers. Getting to a container future means moving to a full container stack, replete with a container orchestration system, which is where Kubernetes comes in. In the world of container orchestration systems, you could say K8s is the ultimate maestro, and the trio of Docker, Kubernetes and Red Hat OpenShift appears poised to become the de facto container stack.

Kubernetes automates the deployment, scaling, maintenance, scheduling and operation of multiple application containers across clusters of nodes. You can design apps with scale in mind without worrying about resources at the infrastructure level. Applications specify their requirements for compute, network and storage resources without over-defining them, and leave it to the infrastructure technology to supply what’s required. Using K8s, advanced users organize resources with labels: any custom information can be attached to a resource as a label, giving management tools a simple way to query application state and route the right resources to each container.
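To make that concrete, here is a minimal sketch of a pod spec that declares its resource requirements and tags itself with labels. The names, image, and values are illustrative placeholders, not from any particular deployment.

```yaml
# A minimal pod spec: the app declares what it needs (requests/limits)
# and tags itself with labels. All names and values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: orders-api                 # hypothetical application name
  labels:
    app: orders                    # labels give tools a simple way to
    tier: backend                  # query and group resources
    environment: production
spec:
  containers:
    - name: orders
      image: example.com/orders:1.4   # placeholder image
      resources:
        requests:                  # what the scheduler must reserve
          cpu: "500m"
          memory: "256Mi"
        limits:                    # hard ceiling for this container
          cpu: "1"
          memory: "512Mi"
```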

So the bottom line is this – without orchestrators, containers by themselves are just another handful of Cheerios: some action, handy as DevOps tools, but not much extra crunch in real deployments.

“You can just keep moving. Keep yourself so busy you don’t have to face who you really are.” – Piper

The second challenge is that containers don’t come with the data readily attached, and they toss that problem over the fence to the inmates, err… administrators, manning the storage cages. Containers were originally designed to develop and deploy microservice apps that would be hosted in a cloud and dynamically assembled into applications on the fly — stateless, transient, and ephemeral. Yet most enterprise applications need to persist their state even after a container is discharged; unless that state lands on persistent storage, it is lost along with the container.

“Yeah, I said stupid twice, only to emphasize how stupid that is.” – Pennsatucky

The legacy storage vendors parading on the market are not 100% stupid, but their systems lose IQ points with every passing year, aging faster thanks to the pace of change in IT land and the innovation opportunities they missed. They see the market trends just like you do, and they have responded by building specific plug-ins that let Kubernetes mount their proprietary external storage arrays, just as they did when VMware went viral on the scene north of a decade ago. This enables communication between the two through a few steps (sketched in the example after this list):

  1. Choose a Storage Volume Driver. Pick your protocol of choice.
  2. Create specific Storage Classes. Many legacy external storage systems support only one type of storage media and are thus limited to offering a single Storage Class to Kubernetes. Ideally, the storage system would provide multiple Storage Classes to satisfy the needs of each container.
  3. Each container presents a Persistent Volume Claim (PVC) to Kubernetes to request a volume from one of the available Storage Classes to get the QoS and data services they need.
  4. Kubernetes deploys the container in a pod and wires it to the PersistentVolume (PV) on the storage array courtesy of the PVC.

With that you have a basic operating instance: the compute cluster, comprising a raft of compute hosts, knows what storage to call. K8s can now bring up a new container on one host, kill it, or relocate it to another host at a different address while ensuring successful data access. The container may run for a second, an hour, a week or more; ultimately it doesn’t matter, because that’s the nature of a dynamic runtime environment. At the storage layer, shared storage is your best choice: it is enterprise class, accessible to all compute servers, and uses iSCSI (internet Small Computer Systems Interface), which has proven to be the most desirable protocol for SDDCs.
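To make the plumbing concrete, here is a generic sketch of steps 2 through 4: a Storage Class, a claim against it, and a pod wired to the claim. The provisioner string, class name, and image are hypothetical placeholders, not any particular vendor’s driver.

```yaml
# Step 2: a Storage Class backed by a (placeholder) vendor provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: legacy-array-gold
provisioner: example.vendor.com/iscsi   # hypothetical plug-in name
---
# Step 3: the container requests a volume from that class via a PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: legacy-array-gold
  resources:
    requests:
      storage: 10Gi
---
# Step 4: Kubernetes deploys the pod and wires it to the PV
# that satisfies the claim.
apiVersion: v1
kind: Pod
metadata:
  name: stateful-app
spec:
  containers:
    - name: app
      image: example.com/app:1.0        # placeholder image
      volumeMounts:
        - mountPath: /var/lib/data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```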

But while not 100% stupid, this kind of storage doesn’t do much to enable a dynamic data infrastructure that supports Kubernetes; it just puts a new layer of icing on top of the existing legacy storage system.

“Everybody has a soulmate. But they’re usually on the other side of the bars, or the wall, or the planet from you. That’s the way the universe works.” – Red

Kubernetes and Datera are just that — soulmates — and on the same page. What Kubernetes does for applications is analogous to what Datera does for data. So you’ve got time to pause, think about the future, and learn a little bit more about Datera and how our 100% software-driven data services platform orchestrates data with the right quality of service to the right containerized, virtual and bare metal applications, locations, and nodes for a smarter, cloud-like future. Datera is here now, running containers now, and ready to deploy at scale.

Our intent-defined data orchestration complements the Kubernetes operational model well. Integrating Datera’s data automation engine with Kubernetes container automation isn’t icing on an old, stale cake, it’s an entirely new approach from the ground up. Our system:

  • Enables automatic provisioning and deployment of stateful applications at scale.
  • Translates application service level objectives across a host of dimensions to drive your data infrastructure autonomously.
  • Allows enterprises and service providers to seamlessly and cost-effectively scale applications of any kind on demand and continuously optimize them, while human admins take a needed rest.
  • Makes good on the promise of the SDDC.

One of the reasons Datera integrates so seamlessly with K8s is that our engineering-led team collaborated with Google on the Container Storage Interface (CSI), which standardizes storage plug-ins for containers, and contributed to Kubernetes itself.

While we are proud of that development and of having an enterprise-ready solution for K8s, plug-ins are merely table stakes. Developing this interface exposed us firsthand to the weaknesses of traditional approaches and helped us better define our own. We know that plug-ins don’t overcome the shortfalls of existing storage systems; they exacerbate them, which is where our storage brain comes into play.

“My brain will always be there for you, thinking things so you don’t have to.” – Crazy Eyes

Like Kubernetes, Datera abstracts the software – the brain of the system – from the physical resources. This allows a higher level of automation, agility and simplicity for data, both for K8s environments and for those yet to modernize. Without going into all the details here, using Datera to underpin a successful K8s-centric future takes a few simple steps:

  1. Install the Datera Plug-in. You can find and download it on GitHub and at www.daterastage.wpengine.com.
  2. Create specific classes of service. You needn’t pick the storage node, media type, location, or other mundane details as you do with other systems. Instead, you simply define these classes at a higher level using a host of parameters – performance, availability, durability, efficiency (deduplication, compression), security (encryption) and beyond.
  3. Label these classes of service. K8s Platinum, K8s Gold, K8s Titanium… Go crazy.
  4. Expose these classes of service to the application developers and let them serve themselves via PVCs. Developers map their K8s labels to our labeled classes of service, and voila, there you have it. The developers call the class of service themselves, essentially programming the Datera environment to give the application exactly what it needs (a sketch follows this list). Done.
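As a sketch of what those steps can look like, here are a labeled class of service and the self-service claim against it. To be clear: the provisioner string and parameter keys below are hypothetical placeholders, not Datera’s actual plug-in syntax.

```yaml
# Steps 2-3: a labeled class of service defined at a higher level.
# The provisioner name and parameter keys are hypothetical placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: k8s-platinum                 # label it however you like
provisioner: csi.datera.example      # hypothetical driver name
parameters:
  replicas: "3"                      # availability / durability
  iops: "10000"                      # performance
  encryption: "true"                 # security
  dedup: "true"                      # efficiency
---
# Step 4: the developer self-serves by naming the class in a PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: k8s-platinum     # maps the app to its class of service
  resources:
    requests:
      storage: 100Gi
```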

The Datera brain deploys the resources to deliver that consistent quality of service using applied machine learning to continuously optimize without the need for human intervention. But change happens for applications and their users, so what happens when things change?

  1. Want a new class of service? Change the label in the application, as sketched below. Done.
  2. There is no 2. See step 1.
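In claim terms, the change really is one line. One hedge: a stock Kubernetes PVC’s storage class is fixed once the claim is bound, so in practice the new class typically arrives on a new claim, as in this sketch (names reused from the hypothetical example above):

```yaml
# The one-line change: point the claim at a different class of service.
# Note: storageClassName is immutable on a bound PVC, so in stock
# Kubernetes this means creating a new claim with the new class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data-v2
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: k8s-titanium     # was: k8s-platinum
  resources:
    requests:
      storage: 100Gi
```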

“If you’re not building a future, it’s because you don’t believe there is a future.” – Vee

We’ll all know it when we get there – the modern, dynamic SDDC of the future, comprising compute instances that come and go, flat networks that move bits back and forth across racks and fault domains, and enterprise-class storage that provides perfect and persistent data at the right level of service, at the right time, automated, and guaranteed. Containers, Kubernetes and Datera are a high-performance engine ready to take you to that future right now. Many of you may not be ready for takeoff today, and we feel your pain. But we know from our conversations with folks in your shoes that half of you reading this are moving there now, and a third of you see storage as a central roadblock to getting there. So don’t just “fake it till you fake it some more,” because while we may be able to fool those around us, we can’t fool ourselves. Datera is ready now. No more sh!tcake. Period.

“God. This [SDDC] is the loneliest place I’ve ever been and I lived in a tree for eight months.” – Soso

Automation is everything!

Ready to achieve high performance with continuous availability on commodity hardware and reduce your storage infrastructure total cost of ownership by as much as 70%? Contact us for a free consultation.