Datera and Fujitsu Partner to Support Acceleration of Enterprise Adoption of Software-Defined Architectures

An exciting day at Datera as we welcome Fujitsu, a well-respected global brand, as a strategic partner to support enterprises that are rapidly adopting software-defined storage in order to achieve radically better business outcomes. As part of the relationship, Fujitsu will distribute Datera software under the Fujitsu storage brand, with immediate availability in Japan and Europe. We are proud of the confidence that Fujitsu has placed in Datera’s technology, and of the technical rigor that Fujitsu applied in evaluating our software platform.

Enterprises operating at scale have challenging decisions to make

We consistently see customers struggling to efficiently manage data at scale as they operate in a very dynamic and unpredictable environment. As data is critical to value creation and competitiveness, managing it at scale often creates complexity. Choosing the wrong technology can hinder business velocity and developer productivity, and cause missed opportunities and significant financial costs.

The world is no longer confined to just a set of well-known applications, where IT professionals could plan and select the right underlying storage technology to meet that specific set of requirements, and customers would be locked into the selected technology for several years.

The field is a lot broader and more dynamic than it was years ago. There are now more applications, more data, more diversity of requirements, and more storage media technologies available — plus IT data needs change faster than ever. And where is the best location for such data? Private, public or hybrid clouds? Yes, it is complicated!

In this paradigm, IT is challenged to make sense of all of the moving parts and serve the business with two basic goals:

  1. Deliver the right data at the right time
  2. Provide that data in the most cost-effective and secure way, all the time

Beyond CAPEX, businesses incur significant operational costs and time to evaluate, select, and manage storage products. Adding to the purchasing complexity, the actual value of a data set changes over time. Delivering data faster today, by using more expensive technology, could mean a competitive lead and/or create new revenue streams, yet that same data may no longer need to sit on an expensive media class six months later.

So how can IT efficiently manage hundreds of terabytes or petabytes, operating at scale while juggling a number of different point products, each of which will lock them in for years, when their applications, requirements and the technology market are constantly evolving? This reality ultimately pushes customers to compromise, which in most cases results in higher costs.

Even though companies may have data with different values and requirements that could benefit from different types of products, the trade-off is always a balancing act between operational effort and cost. In many cases we see customers consolidating their workloads on the more expensive, more performant products capable of accommodating their most demanding workloads. Others choose to optimize around elasticity and flexibility, opting for the public cloud. Both options require compromises which end up being very expensive when operating at scale.

At Datera, our founders determined that there was a need for a completely new architectural paradigm to radically simplify the data infrastructure and ultimately give customers unparalleled business agility, operational efficiency, and radically lower overall costs.

How does Datera do it?

Software-defined storage solutions are dramatically different from one another, in spite of being categorized under the same product class. First-generation software-defined storage aimed to deliver storage-array functionality as software running on servers. The concept provided some flexibility and cost savings. However, performance often fell short of expectations, and in most cases the solutions did not address the management shortcomings and rigidity of traditional storage products. As a result, many first-generation SDS solutions were relegated to Tier 2+ use cases.

Datera took a broader approach to the problem to deliver on customers’ expectations.
In the Datera paradigm, the storage infrastructure — composed of servers from multiple vendors with different classes of media that can all co-exist together — becomes invisible!
Everything is automated and orchestrated via APIs to eliminate the effort associated with planning, selecting, deploying and managing storage. A key value of the Datera management platform is that customers can direct applications, via policies, to selected media classes to meet specific SLAs.

Imagine an application that is running on SATA flash, and the customer determines it would benefit from running on NVMe or even Intel Optane flash technology to get better response times. All the customer needs to do is add a couple of servers with the desired media to a running cluster and change one policy; data starts migrating live to the new nodes and the customer sees immediate performance improvements! If, after a while, the insights show that the application could sit on lower-cost media again, a simple policy change moves it back! Everything just happens live through advanced automation!
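To make the idea concrete, here is a minimal, hypothetical sketch of what such a one-policy change could look like against a REST-style management API. The endpoint path, field names, media-class labels and the datera.example host are illustrative assumptions for this post, not Datera’s documented API.

```python
# Hypothetical sketch: promote an application's volumes to a faster media
# class by changing a single placement policy. Endpoint and field names
# are illustrative assumptions, not a documented Datera API.
import requests

API = "https://datera.example/api/v2"          # assumed management endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # assumed auth scheme

def set_media_policy(app_instance: str, media_class: str) -> None:
    """Point an application's storage policy at a different media class."""
    resp = requests.put(
        f"{API}/app_instances/{app_instance}/policies",
        json={"media_class": media_class},     # e.g. "sata-flash", "nvme", "optane"
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()

# Promote to NVMe after adding NVMe nodes to the cluster; data migrates live.
set_media_policy("orders-db", "nvme")

# Months later, demote back to lower-cost flash with the same one-line change.
set_media_policy("orders-db", "sata-flash")
```

The point is not the specific call, but that the move between media classes is a policy edit, not a forklift migration.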

Effortlessly delivering the right data, at the right time, at the right cost, all the time really is possible!

The end result is unparalleled business agility, operational efficiency, and radically lower overall costs. The data infrastructure can now meaningfully raise the bar in supporting IT as a competitive advantage.

In summary, the Datera software-defined block and object storage platform is:

  • as easy, self service and elastic as the public cloud;
  • as performant and feature-rich as enterprise class arrays;
  • capable of handling multiple classes of servers and media technology in the same architecture, all orchestrated by policies to meet application intent with live migration between tiers;
  • future-proof, ready to adopt any new media technology on the fly;
  • fully AUTOMATED via APIs;
  • and foundational for a robust, hybrid cloud world.

Bottom line

Markets, technologies, and requirements evolve. As software is now capable of delivering very high levels of performance, the battleground shifts to automation.

We are seeing software-defined continue to mature rapidly, and it is poised to become one of those major architectural paradigm shifts that create enormous value for enterprises. Similar transformational examples include the transition from mainframes to distributed servers, from physical to virtualized environments, from on-prem to hybrid clouds, or even from traditional cell phones to smartphones. During those transitions there were detractors and naysayers who believed the status quo would win in the end. In most cases the world moved on, and as long as there was significant new value, customers helped lead the transition because they realized the status quo was not a viable winning strategy. This is one of those times.

At Datera, our focus is on delivering the most performant, reliable and automated software-defined architecture for the hybrid cloud data center, eliminating the complexities of managing storage and giving customers the most agile data infrastructure possible, and on taking our technology to enterprises via some of the most recognized and trusted brands in the industry.


Taking a Pulse on Red Hat: Ceph and OpenShift in 2020

It’s been three years since I worked up a software-defined comparison between Red Hat Ceph and Datera, which you can see here for reference. That’s 30 years in technology time (one year of human life equals 10 in technology evolution), so it’s more than time for an update. And, as you’d expect, a sea change has occurred during that period not only for each storage offering, but also in the preeminence of containers and Kubernetes as a foundation of future applications.

Setting the Scene

The use of software-defined technologies in storage and other layers of the IT stack has gone mainstream, whereas just a short time ago it was still a niche, early-adopter market. Recently, we profiled just how far software-defined technologies in the data plane have come, outgrowing classical hardware-defined arrays by 5X.

Red Hat is also a different entity altogether, now part of IBM with a heightened ability to reach Big Blue customers and beyond. Both Red Hat and Datera continue to see significantly more customer adoption on the back of powerful new feature development powering performance and usability improvements. Datera has experienced increased adoption in the Global 1000 enterprise space, supporting transactional IO use cases like MySQL databases, while Ceph is mainly deployed at service providers and in developer-heavy environments.

In the meantime, containers have emerged as the central technology of the future, with now half of the Fortune 100 reporting they have rolled out Kubernetes in production. Red Hat has made a big investment in Kubernetes with its OpenShift platform, and similarly Datera has cemented its support for a variety of container orchestrators including OpenShift.

Technology Strides

Ceph has made strides and has IBM’s support, which is good news for the software-defined storage market, particularly in the following areas: 

Ease of Rollout, Use and Reporting

Ceph has always been viewed as a powerful engine for the right kind of use cases, but considered to be a bit of a mixed bag on the ease of rollout and ease of use fronts. But our friends at Red Hat have taken many steps to emulate other SDS technologies like VMware vSAN and Datera that excel here. A case in point is Ceph’s Admin dashboard, which provides a graphic view of the overall cluster.

Ceph’s Cluster Dashboard (screenshot)
Datera’s Analytics Cloud Portal (screenshot)

Hyperscale

While making our platform easier to implement and improving our real-time analytics using telemetry data from every node remain fixtures on our engineering agenda, our 2019 work focused squarely on scaling our deployments and adding a slate of new media and server vendor options to reduce latency even further. On the hardware side, we have improved reporting across all our major supported server platforms, including HPE, Fujitsu, Dell, Supermicro, Quanta, and Intel. We also added more predictive features around latency reporting and capacity projections. We are also tracking our customers’ production deployments and are happy to see that 70%+ of all write IOs are serviced in under 131 µs.

Our Fortune 1000 customers have challenged us to scale to entirely new levels, since our customers typically start with a petabyte of capacity and add from there. To this end, we put ease of rollout on display in multiple forms that are best seen rather than described:

Demo: 20 Datera Nodes Up and Initialized in 4 Minutes

But getting nodes up and running is useless if the volumes aren’t set to deliver the right capabilities to individual applications or tenants. To this end, our engineering team has made doing this equally easy, which we refer to as developing a storage class—gold, platinum, silver or whatever precious metal or naming convention you prefer.

Datera Demo: 5 Storage Classes in 5 Minutes
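For teams consuming these classes from Kubernetes, a storage class like “gold” typically surfaces as a Kubernetes StorageClass. The sketch below uses the official Kubernetes Python client; the dsp.csi.daterainc.io provisioner string and the template parameter are assumptions for illustration, so check your CSI driver documentation for the exact values.

```python
# Minimal sketch: registering a "gold" storage class with Kubernetes using
# the official Python client. The provisioner string and parameters are
# illustrative assumptions; consult your CSI driver docs for exact values.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

gold = client.V1StorageClass(
    api_version="storage.k8s.io/v1",
    kind="StorageClass",
    metadata=client.V1ObjectMeta(name="datera-gold"),
    provisioner="dsp.csi.daterainc.io",   # assumed CSI driver name
    parameters={"template": "gold"},      # assumed mapping to a backend template/policy
    reclaim_policy="Delete",
    allow_volume_expansion=True,
)

client.StorageV1Api().create_storage_class(body=gold)
```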

Datera has continued to refine and extend our policy approach to management, adding node labels and allowing users to exercise granular control over volume placement throughout the system:

Datera Enterprise Storage System Policies

Kubernetes, OpenShift and Container Acceleration

Big Blue and Datera also share a commitment to supporting containers and container orchestrators. Red Hat has made a big bet on OpenShift and, similarly, Datera has made optimizing for Kubernetes and OpenShift core to its technology strategy, again better seen than yapped about:

Demo: Persistent Volume Claims and Datera PV Integration
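And a claim against such a class is just as simple. This is a generic Kubernetes sketch using the official Python client; the datera-gold class name carries over from the illustrative StorageClass sketch above.

```python
# Minimal sketch: a stateful workload claims 100 GiB from the "datera-gold"
# class; the CSI driver provisions the backing volume automatically.
from kubernetes import client, config

config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="orders-db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="datera-gold",   # assumed class from the sketch above
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```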

On The Block: Ceph Bluestore and Datera’s Extent Store

Bluestore was released in 2017 as an alternative to using traditional POSIX file systems (Filestore) to manage data on each disk. Using existing file systems provides robust data integrity and block management, but comes at a great cost to performance and latency as a block storage backend. Bluestore adds its own method of managing block metadata on the disk. To improve performance, metadata can be placed on separate media, which is a common technique for traditional file systems as well. In Bluestore’s case, it can be placed on an NVDIMM-type device or Intel’s new Optane DIMM technology.

The diagram below shows how data and metadata flow with the Filestore backend and with the Bluestore backend:

Filestore vs. Bluestore data and metadata flow (diagram)

In our case, our founding architects recognized from day one that to achieve ultra-low latency, they needed to build a system that did not rely on existing POSIX file system technologies. To this end, the Datera Extent Store is built using log-structured techniques to increase performance and reduce wear on flash media. A log-structured store commits large blocks of data to media at a time, which we refer to as buckets; each bucket can house either the data itself or just the metadata. These buckets behave differently based on their type, so the storage can be further optimized.
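To illustrate the general log-structured idea, here is a toy sketch: small writes accumulate in an in-memory bucket and are committed to media as one large sequential write when the bucket fills. This is a teaching example only, not the internals of Datera’s Extent Store.

```python
# Toy illustration of log-structured writes: batch small writes into a
# fixed-size bucket and flush it as a single large, sequential write.
# A teaching sketch, not Datera's Extent Store implementation.
import os

BUCKET_SIZE = 4 * 1024 * 1024  # 4 MiB buckets (illustrative)

class LogStructuredStore:
    def __init__(self, path: str):
        self.log = open(path, "wb")
        self.bucket = bytearray()   # in-memory bucket being filled
        self.flushed = 0            # bytes already committed to media
        self.index = {}             # key -> (offset, length): the "metadata"

    def put(self, key: str, value: bytes) -> None:
        self.index[key] = (self.flushed + len(self.bucket), len(value))
        self.bucket += value
        if len(self.bucket) >= BUCKET_SIZE:
            self.flush()

    def flush(self) -> None:
        # One large sequential write instead of many small random ones.
        self.log.write(self.bucket)
        self.log.flush()
        os.fsync(self.log.fileno())
        self.flushed += len(self.bucket)
        self.bucket = bytearray()

store = LogStructuredStore("/tmp/extents.log")
store.put("block-0001", b"\x00" * 4096)
store.flush()
```

Because each flush is a large sequential write, the pattern both improves latency and reduces write amplification (and therefore wear) on flash media.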


Final Thoughts

Ceph has come quite some way in the last 30 technology years, offering a massive number of features and capabilities, which regrettably come at a cost exacted in system complexity. Administrators need to thoroughly understand the deployment and operation of these features and their impact on the rest of the system, as well as keep watch for those that may not yet be production-ready.

As for Datera, we remain focused first and foremost on enterprise-class, block software-defined storage deployments, built and fully tested with industry software and hardware partners. Our goal is to help organizations make the inevitable move to a software-centric approach and remove reliance on aging legacy SAN and FC environments where they see fit.


Red Hat and Datera share a commitment to a software-centric vision for the enterprise data center built on containers. While we offer two different paths, the destination is the same. I like to think of Ceph as an Orange and Datera as an Apple: if you are famished, you can bite into an unpeeled Orange and get nourishment, but you will not enjoy the taste unless you take the time to carefully peel and prep it; an Apple is ready to go as soon as you pick it up.


Geekonomics: How The Enterprise Storage World Turned Upside Down in 2019

Who, When, How, Why and What’s Coming in 2020

Geekonomics? Yes, we pay homage again to Freakonomics, the popular podcast series that reveals the hidden side of everything, applying economic theory and exposing often-hidden data to reveal the underlying truth. In Geekonomics, we take the same approach to all things IT to see what is really happening. Let’s do the math.

Our focus in this installment is: “What in the ‘Sam Hill’ happened to storage in 2019?” We piece together a number of the year’s headlines and key developments, show how they fit together into an interesting puzzle, and unveil the missing pieces.

January: Datera & Hewlett Packard Enterprise Accelerate Enterprise Transition to Software-Defined Storage

HPE, the #2 player in classical storage arrays, tapped Datera in early 2019 to round out its tier 1 block storage portfolio with the Datera enterprise software-defined storage platform running on HPE ProLiant servers, complementing its line of Mellanox 100 GbE switches. A year later, in early 2020, eWeek credited Datera for filling a critical hole for HPE and credited HPE for having some foresight. Something must be afoot…

February: IDC Signaled Increased Services, Pullback From Public Clouds

IDC announced that the massive rush to the public cloud had reversed into a massive repatriation, stating that 80% of enterprises have initiated efforts to repatriate 50% of their applications and workloads due to lack of performance, security concerns, and ultimately the high price of public cloud services from leaders like AWS and Azure.

IDC further clarified that a majority, or 56%, of data would live over the long haul in on-premises data centers, with the remainder tucked away in public clouds, giving new life to the corporate data center, which had been left for dead by most industry commentators.

April: Google Cloud Introduced the Anthos Platform for Multicloud Apps.

 


Alphabet, the parent company of Google, launched Anthos, its new platform for “write once, run anywhere” applications on-premises and in multiple public clouds, brought on Thomas Kurian from Oracle to get GCP ready for battle in the enterprise, and bought a handful of companies to expand services and capabilities. My colleagues at Google Cloud tell me every employee there knows the cloud game is on, that Google is behind, and that it is sprinting to catch up.


June & November: Datera Assembles Software-Defined Data Center Leadership Forum and Virtual Symposium

Lead, Follow or Get Out Of The Way. Datera assembled a coalition of leaders in data center infrastructure to drive the industry to the next major wave of adoption, bringing together Scality, WEKA.io, Intel, Hewlett Packard Enterprise, Mellanox and Cumulus to talk requirements, lessons learned, and the criticality of partnership amongst vendors to power the future software-defined enterprise clouds with Mark Peters, longtime industry analyst from Enterprise Strategy Group. Datera livestreamed two virtual tradeshows in a pre-COVID-19 world — the first from HPE Discover, its worldwide customer event, from Las Vegas in June and the second from the Silicon Valley Convention Center in November, to worldwide audiences. (Pardon the interruption, but let me quickly tip my hat to the Datera team that made this happen: well done Eric, Tom, Brett, Dominika, Laura and crew.)

December: Amazon Web Services Announced The Availability of AWS Outposts for On-Premises Hardware Systems.

Amazon made its long-awaited AWS Outposts, an on-premises proprietary compute and storage hardware platform compatible only with AWS cloud services, generally available, ready to address, as a last resort, the 56% of data not moving to the public cloud. Long live the enterprise data center.

Exiting 2019: Fastest Growing Storage Companies in 2019…Were Software Companies


17 of the 20 fastest growing storage concerns across the landscape in 2019 were software companies. That’s right, software companies. SDS platforms like WekaIO for HPC file use cases and Datera for Tier 1 block led the way, while others took on other important challenges like data protection, file virtualization and multi-cloud access. Marc Andreessen, early Internet innovator and the VC behind some of the largest and most successful technology startups of the last two decades, said in 2011 that “software is eating the world,” and that statement has now clearly touched down in the enterprise storage business. Of the major array players, only Pure Storage, with its lead in all-flash arrays, was able to claim a spot on the list. It makes sense given that powerful flash and NVMe drives have hit a production volume that has reduced unit costs to a virtual stalemate with spinning disk drives, making better performance and reliability available in a server package priced at a fraction of the total cost and flash markup charged for classical arrays.

Entering 2020: Software-Defined Storage Market Will Become 3X the Array Market

Industry analyst Dennis Hahn of IHS Markit, a longtime industry player, launched his inaugural market projection for software-defined storage, projecting an $86B market by 2023 on the back of 28% annual growth. His effort wisely combined the move to hyperconverged infrastructure (HCI), which folds a storage layer into the server stack, and standalone software-defined storage systems, like Datera, into a single projection, since both are key alternatives to classical storage arrays and deliver a public cloud-like experience on-premises.

Entering 2020: Worldwide Enterprise Storage Systems Market Revenue Down 5.5% to $28.6B

Within a week of that projection of solid growth for software-defined, Eric Burgener and his team over at IDC lowered their forecast for classical hardware storage systems from a 1.3% uptick to a 5.5% decline for 2020, with that trend continuing for the foreseeable future, all against a backdrop of 50%+ growth in data. While many key hardware players struggled with their revenue numbers at the end of 2019, Pure Storage and Huawei stood out for their positive momentum, yet their results couldn’t and won’t reverse the slide at the category level.

So Where Does That Leave Us?

2019 exposed every key macro trend in enterprise storage: software is growing, hardware is slowing, all-flash environments are showing and cloud services are on-going. On the surface, this doesn’t look that complicated, but not many predicted it and stood by it. Geekonomics gives a hearty salute to David Floyer of Wikibon, who five years ago predicted the bifurcation between software- and hardware-based approaches. He termed the software approach “Server SAN,” an apt term for using software to turn servers into storage area networks (now standalone software-defined storage as well as HCI), and forecast the tapering of hardware-centric storage.

No one is predicting the imminent expiry of any storage category, since my experience over two decades and in particular the last several years in the industry tells me that large organizations from the Fortune 1000 to public and educational institutions will put all of them to use. And let’s not forget that data is growing 61% per year and it’s going to need to go somewhere. But the growth numbers clearly tell us who has the best outlook as we move forward.


So What’s Next?

Andreessen’s well-argued thesis is that digital disruption has come, or is coming, to every industry, from mainstream industries like retail, entertainment and transportation that began offline to now, ironically, even digital storage, an industry that has been online from the onset. Software is simply eating this world too, with the only questions being how fast and by whom. 2020 will be THE seminal year in answering them.



10 Principles for Evaluating Enterprise Storage for Kubernetes & Cloud Native Applications

Hint: Container Storage Interface (CSI) Plug-In doesn’t mean Container Storage-Optimized.

Straight out of DevOps land comes this missive: 90% of the Fortune 1000 are using containers in production, restructuring their architectures to meet the challenges of the next decade, and migrating their applications from virtual and traditional infrastructure to container solutions and Kubernetes (known industry-wide as K8s). It’s going to be an interesting ride, fraught with a huge amount of misinformation as systems vendors slap a “K8s Ready” label on top of their preexisting products up and down the stack. While the introduction of K8s may not be too challenging at the compute layer, it adds new complexity at the networking and storage layers, which requires a new level of scrutiny on how containers are supported.

To help separate the signal from the noise, we’ve compiled ten key principles for evaluating on-premises, persistent storage platforms to support cloud native applications as you and your organization head down the inevitable path toward a container-centric future.

CSI Plug-In: Check | Now Let’s Go Deeper

The Benefits and Challenges of Containers and Kubernetes

We hear a lot about containers and K8s today in conversations with our customers and partners, and about their desire to achieve the automation, composability, velocity, scalability and efficiency benefits they’ve seen in initial deployments.

Given these potential benefits, it’s obvious why large enterprises, laden with hundreds of applications, are moving aggressively to containers. But selecting the right storage, often the last layer of the stack to move, is essential to realizing those benefits, because hyperscale cloud native applications require persistent storage platforms with unique characteristics.

As you embark on your journey, you will find systems providers touting their Container Storage Interface (CSI) plug-in, which marks the most basic form of interoperability. But the storage layer needs more than just interoperability; it should match the dynamism of new applications based on containers and Kubernetes. Here we offer a framework for evaluating storage for cloud native applications that goes beyond buzzwords and is designed to help you get the right storage capabilities to achieve container and cloud native success.

10 Principles for Evaluating Enterprise Cloud Native Storage

1. Application Centric
The storage should be presented to and consumed by the application, not by hypervisors, operating systems or other proxies that complicate and compromise system operation. Your storage system is likely based upon older abstractions of storage such as LUNs, volumes or pools. These were once workable for monolithic applications such as Oracle and SQL databases running on physical servers, but modern storage systems now use an “appinstance” or “storage instance” construct that lets you apply templates programmatically, enabling your container workloads to rapidly consume and release resources as needs ebb and flow. For example, your DevOps team may spin up 100 Kubernetes or Docker containers a day, each requiring a stateful, persistent appinstance for just that day, releasing it after an hour or two. This ensures your application gets what it needs, only when it needs it.

2. Platform Agnostic
The storage should be able to run anywhere, in any location, with non-disruptive updates and expansions. While legacy arrays have proved reliable for blinding speed on monolithic applications, tuning them introduces many restrictions, compromises your usage models and often requires a fleet of highly specialized administrators. Modern, cloud native workloads require a composable platform that can run in racks, aisles and multiple datacenters as YOUR needs grow, without requiring rebuilds and migrations that slow you and your users down. More importantly, in multi-node, scale-out systems, all upgrades MUST be rolling, non-disruptive and have minimal impact on performance. Look for systems that use standard iSCSI and Ethernet for maximum flexibility as your needs grow to include multiple datacenters, stretch clusters, and other disaster recovery implementations.

3. Declarative and Composable
Storage resources should be declared and composed as required by the applications and services themselves, matching the compute and network layers. Policy-driven systems allow you to make changes to the underlying resources seamlessly from the container perspective. For example, set a policy that includes dedupe, performance and encryption, and as you add or remove nodes the system should autonomously move and re-balance workloads across the heterogeneous nodes that comprise the cluster. The system should automatically inventory resources and make split-second decisions about the most efficient way to run your containers.

Enterprise Cloud Native Storage

One additional tip is to ensure that these policies are changeable over time. As basic as it may sound, many systems based on policies give the illusion that they are dynamic, but in practice are static. Test the ability to change policies and have those changes ripple through the data so that your storage is as dynamic as possible.
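As a hypothetical illustration of what “declared as data” means, here is one way such a policy could be expressed. The field names are assumptions for this sketch, not a specific product schema; the point is that the application states intent and the platform composes and re-balances resources to match.

```python
# Hypothetical, illustrative policy declaration: the application states its
# intent and the storage platform composes/re-balances resources to match.
# Field names are assumptions, not a specific product schema.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    name: str
    replicas: int            # availability via dispersed replicas
    media_class: str         # e.g. "sata-flash", "nvme", "optane"
    dedupe: bool
    compression: bool
    encryption: bool
    iops_target: int         # deterministic performance target

gold = StoragePolicy(
    name="gold",
    replicas=3,
    media_class="nvme",
    dedupe=True,
    compression=True,
    encryption=True,
    iops_target=50_000,
)

# Changing the policy later (say, replicas=5 or media_class="optane") should
# ripple through data already written, not just new writes.
```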

4. Programmable & API Driven
Storage resources must be able to be provisioned, consumed, moved, and managed by API. Even better, these actions should be done autonomously by the system in response to application instantiation, which is at the heart of an infrastructure designed for self-service. Without this capability developers will not be able to generate their own storage when they want it, which becomes a bottleneck in the development process and requires the very thing that containers are designed to eliminate: manual intervention. In practice, programmability allows you to query the storage system to assign and reclaim persistent volumes on an as needed basis.
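A hedged sketch of that self-service loop, with hypothetical endpoints rather than any documented API: a CI job provisions a scratch volume on demand, uses it, and reclaims it when finished, with no ticket and no manual intervention.

```python
# Hypothetical self-service sketch: provision a volume on demand and reclaim
# it when the job finishes. Endpoint paths and field names are assumptions.
import requests

API = "https://storage.example/api/v2"
HEADERS = {"Authorization": "Bearer <token>"}

def provision(name: str, size_gib: int, storage_class: str) -> str:
    resp = requests.post(
        f"{API}/volumes",
        json={"name": name, "size_gib": size_gib, "class": storage_class},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]

def reclaim(volume_id: str) -> None:
    resp = requests.delete(f"{API}/volumes/{volume_id}", headers=HEADERS, timeout=30)
    resp.raise_for_status()

vol = provision("ci-scratch-42", size_gib=50, storage_class="silver")
# ... run the job against the volume ...
reclaim(vol)
```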


5. Natively Secure
The security of the storage should be endemic to the system. Storage must fit into the overarching security framework and provide inline and post-process encryption as well as Role Based Access Control. Bolted-on security capabilities should be avoided, since they often add overhead and impact storage performance. Your system should be able to encrypt at the container level programmatically, as well as utilize data-at-rest encryption capabilities, to minimize any performance impact.

6. Designed for Agility
The fundamental goal of a Kubernetes implementation is agility for DevOps and for the application overall. The storage platform should be dynamic in terms of capacity, location, and all other key storage parameters including system performance, availability (controlled via the number of replicas desired), and durability. The platform itself should be able to move the location of the data, dynamically resize, and take snapshots of volumes. Further, it should be easily tunable and customizable for each application or tenant via policy, using policies to create storage classes. The most powerful systems react dynamically when system resources are increased and during datacenter outages, when workloads may need to be shifted to other failure domains.

7. Designed for Performance
The storage platform should offer deterministic performance by application and by tenant to support a range of requirements across a complex topology of distributed environments. Performance comprises IOPS thresholds, media type (flash, NVMe, Optane), data efficiency choices (compression, dedupe) and the dynamic reaction to changes in workload demand or cluster resources. In less dynamic applications, administrators could often set a single service level objective (SLO) and check in on the achievement of those targets intermittently. But in today’s environment, the system itself must “check in” on the achievement of SLOs constantly and react in real time to orchestrate the changes needed to achieve them.
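A toy sketch of that “system checks in on itself” idea: continuously compare observed latency against the SLO and trigger a rebalance when it drifts. The telemetry and rebalance hooks here are placeholders; a real platform would carry out both internally.

```python
# Toy control loop: watch an SLO (p99 latency) and react when it drifts.
# observe_p99_latency_us() and rebalance() are placeholders for whatever
# telemetry and orchestration hooks a real platform exposes internally.
import random
import time

SLO_P99_LATENCY_US = 200            # illustrative target: 200 microseconds

def observe_p99_latency_us() -> float:
    return random.uniform(100, 300)  # stand-in for real telemetry

def rebalance() -> None:
    print("SLO at risk: shifting hot volumes to faster media / more nodes")

def slo_watchdog(poll_seconds: float = 1.0, cycles: int = 10) -> None:
    for _ in range(cycles):
        if observe_p99_latency_us() > SLO_P99_LATENCY_US:
            rebalance()
        time.sleep(poll_seconds)

slo_watchdog()
```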

8. Continuous Availability
The storage platform should ensure and provide high availability, durability and consistency even as application needs change and the environment scales. For example, modern storage systems are leaving RAID behind and moving to shared-nothing designs where data replicas are dispersed to different storage nodes situated across failure domains or metro-geographies, all to maximize availability. This is the new way to drive availability levels at a lower cost.
Having fine-grained, programmatic control over availability levels by workload is essential, since some of your applications and data will inevitably be more important than others. Most enterprises will see wide variances in data availability requirements: some applications may need just three replicas for apps and data of lesser import, housing some fraction on the lowest-cost media, while others use five replicas for the most important instances, stretched across three data centers with an aggressive snapshot schedule. Having options beyond the standard, fixed RAID schemes is often deemed essential for cloud native environments, which is consistent with the architecture of most cloud service providers.


9. Support More than Cloud Native Applications
This is not a typo. The move to containers and cloud native application design is a generational shift, and as such will take many large organizations a generation to complete. Embracing a storage system that supports more than containers is critical to avoiding the creation of yet another data silo. As you evaluate a new storage platform for cloud native workloads, you should similarly ensure that it serves virtualized and bare metal applications, to preserve freedom of data usage for applications as they transition. In other words, while your organization may be racing toward the future, your system also needs to enable the past without losing a step. The storage platform should serve Kubernetes, Docker, VMware and bare metal workloads.

10. Operate Like Kubernetes Itself — the Modern Way
Carving out traditional LUNs is a tried and true method for providing storage resources to traditional applications. But by embracing storage built on policies that are declarative and built with composability in mind, enterprises can mirror the dynamism of a cloud native compute environment. Storage needn’t be static — DevOps teams should be able to spin up new instances and make new replicas to serve the application as traffic grows.


Conclusion

These 10 principles will help ensure that you make the right choice for modernizing your on-premises storage infrastructure to accommodate containers.

At Datera, we designed our cloud data management platform to utilize commodity servers, allowing you to build an enterprise-grade storage environment with these principles in mind, yielding a system that is:

  • Built for Hyperscale: Scale up, scale out, and scale across the data center on commodity servers. Legacy systems often force organizations to be predictive in storage planning, which often restricts growth. With our approach, organizations start with a set of nodes and, as new nodes are added, performance, capacity and availability grow. At these moments, the system re-inventories itself and autonomously applies the extra performance against workload policies already in place. This yields a horizontally scaling, built-for-hyperscale environment.
  • Built for Autonomous Operations: Using policies to drive the storage requirements minimizes operator overhead and ensures that those requirements can systematically and programmatically adapt as application or tenant needs change and environments scale. These policies are built for change, so that all changes apply retroactively to the data written earlier to the system.
  • Built for Constant Change: Datera serves bare metal, hypervisors, and container deployments on a single platform, helping enterprises avoid another island of data in their data operations as they migrate workloads over time.
  • Built for Continuous Availability: Datera’s shared nothing architecture eliminates single point of failure challenges, and overcomes multiple component failures and multiple node failures without loss of data or performance. The platform uses telemetry from all nodes to constantly monitor the system, ensuring availability and other parameters in the SLO manifest.
  • Built for Tier 1 Performance With Services: Datera delivers sub-200-microsecond latency and scales to millions of IOPS, which grow as nodes are added to the cluster, and it provides this class of performance with essential data services like deduplication, compression and encryption enabled, with minimal performance impact.

Lastly, Datera operates like Kubernetes itself, letting you rapidly deploy persistent volume claims to support an application’s distinct requirements and take the appropriate action (e.g., add capacity, add high performance media, make a replica) to meet them. Datera autonomously supports container changes over time, spinning up new services, new tenants, or even recapturing resources automatically as you spin down Kubernetes instances.

Modern workloads may not strictly require a modern approach, but enterprises stand to gain from the new thinking and new capabilities that Datera’s software-driven approach delivers.

 


 

About the Author

Brett Schechter is the Sr. Director of Technical Marketing at Datera. He has spent time with many of the most innovative storage vendors of the past decade, as well as running product management for Rackspace storage, where he set the requirements for the first managed public/hybrid cloud storage products on the market. In his current role, he is responsible for collaborating with enterprises on their system requirements and test plans.