I spent many years buying and building high-performance, highly available data storage environments using all-flash and hybrid arrays from three- and four-letter storage vendors. From this year’s newest flash to old-school disk array systems, these array vendors are always trying to convince us of two things: first, that with enough money and the new widget, we can accelerate to the promised land of performance, with the occasional write landing in under 100 µs; second, that we can use their storage virtualization schemes to generate five or six “9s” of availability across three or four systems (and even geographies) without suffering downtime. And let’s not forget to factor in those “evergreen” costs for this year’s model with the newest, coolest media, and the challenge and expense of staffing teams capable of handling those systems in each geography.

To this day, it still makes my head hurt… which is why I left that past behind and found the antidote in software-defined storage.

What sets software-defined storage – and Datera in particular – apart is that we let companies do the same thing using commodity servers and switches, and with so much flexibility you may never migrate again. It’s all about the architecture, designed for no-compromise data availability and performance while letting teams mix and match heterogeneous media and servers. You love NVMe, Intel Optane NVDIMMs, or plain old disk? Add a few Optane nodes, some NVMe nodes, or hybrid nodes to a cluster, and let the Datera platform move workloads across it autonomously, based on your policies and templates. Performance increases, and so does resilience. Distributed systems are designed so that component failures have no effect on availability and little effect on performance. With Datera, 100% data availability with Tier 1 performance is no longer hard, or expensive. That is what Continuous Availability Lifecycle Management, or CALM, is all about.
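
To make the policy-and-template idea concrete, here is a minimal Python sketch of what a placement policy could look like. It is illustrative only: the names (`PlacementPolicy`, `media_tiers`, `eligible_nodes`) and the tier labels are assumptions, not Datera’s actual API.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical policy object -- illustrative, not Datera's actual API.
@dataclass
class PlacementPolicy:
    name: str
    replicas: int           # copies kept on distinct nodes
    media_tiers: List[str]  # acceptable media, fastest first

# A "gold" template might pin hot volumes to Optane/NVMe nodes,
# while a "capacity" template also tolerates hybrid and disk nodes.
gold = PlacementPolicy("gold", replicas=3, media_tiers=["optane", "nvme"])
capacity = PlacementPolicy("capacity", replicas=3,
                           media_tiers=["nvme", "hybrid", "disk"])

def eligible_nodes(policy: PlacementPolicy, cluster: dict) -> list:
    """Nodes whose media satisfies the policy; the platform would place
    (and later re-place) replicas among these as the cluster changes."""
    return [node for node, media in cluster.items()
            if media in policy.media_tiers]

cluster = {"node1": "optane", "node2": "nvme", "node3": "hybrid",
           "node4": "nvme", "node5": "disk", "node6": "optane"}

print(eligible_nodes(gold, cluster))      # ['node1', 'node2', 'node4', 'node6']
print(eligible_nodes(capacity, cluster))  # ['node2', 'node3', 'node4', 'node5']
```

The property worth noticing is that adding a new node of any media type simply grows the eligible set for matching policies, which is what makes expansion non-disruptive.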

While Continuous Availability is a must-have for modern enterprises managing mission-critical applications in today’s “always-on” world, architects are still relying on outdated design methodologies for disaster recovery, playing “games with 9s” to meet SLAs. The result is more complex systems, high ongoing expenses, and even higher costs when disaster inevitably strikes. Even “High Availability” at 99.999% uptime means more than five minutes of downtime per year! Can you afford that?
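
The arithmetic behind those “9s” is easy to check:

```python
# Downtime implied by each "9s" SLA over one year (365.25 days).
YEAR_MINUTES = 365.25 * 24 * 60

for nines in range(2, 7):
    uptime_pct = 100 * (1 - 10 ** -nines)
    downtime_min = YEAR_MINUTES * 10 ** -nines
    print(f"{nines} nines ({uptime_pct:.4f}% uptime) -> "
          f"{downtime_min:8.2f} minutes of downtime per year")
```

Five nines still concedes about 5.26 minutes a year, which is where the “more than five minutes” above comes from.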

If you’re tired of playing games with 9s, we’re here to help you modernize your approach to IT resilience with CALM. CALM supports continuous data availability while simultaneously enabling two critical business capabilities. First, it moves data through a business-value lifecycle: at any point in time, the data can be non-disruptively optimized to match the value it brings to the business. Second, it evolves the storage infrastructure through a business-value lifecycle: at any point in time, the infrastructure can be non-disruptively optimized to match the value it brings to the business.

Here’s a snapshot of how Datera’s CALM works:

CALM embraces velocity and agility without compromising data availability. CALM is achieved by design, not just by observation. If you’re running a private cloud or enterprise data center on Datera with more than six nodes (a wide variety of commodity servers will do), you can easily construct a Datera system for continuous availability.

These are the six key steps Datera took to help our users achieve CALM:

  1. Architecture: The control plane is AI-optimized for replication, distribution, and management of data. The data plane is built for speed and adapts to changes in the overall system capabilities via a unique time-based coherency protocol.
  2. Design: A shared-nothing architecture eliminates the need for inefficient, poorly performing RAID. The system maintains both current and future placement maps, providing a forward look at where data may go when the system experiences any change to resources or workloads.
  3. Implementation: Application and policy templates allow for very high data velocity, and support bare metal, VMs, and container ecosystems concurrently.
  4. Test: Datera is self-optimizing and autonomous. Any failed storage device or system is immediately remediated by shifting workloads to other nodes or volumes. New placement maps are generated immediately, so cluster resources stay optimized at all times (see the placement sketch after this list).
  5. Observation: Datera constantly gathers telemetry points and compares them in real time to the aggregate telemetry of all users who share theirs generically (anonymized).
  6. Analytics: Datera provides a rich, real-time analytics engine in which cloud-based and local telemetry are monitored and assessed at all levels of the storage and network infrastructure. The telemetry communicated back to the control plane closes the loop, forming the self-optimizing, intelligent, and automated Datera system (see the telemetry sketch after this list).
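
To illustrate steps 2 and 4, here is a conceptual sketch of current vs. future placement maps using rendezvous (highest-random-weight) hashing. The three-way replication and the map format are assumptions for illustration; Datera’s real placement logic is richer and policy-aware.

```python
import hashlib

def placement_map(volumes, nodes, replicas=3):
    """Assign each volume's replicas to distinct nodes via a stable
    hash ranking, so the same inputs always yield the same map."""
    def ranked(volume):
        return sorted(nodes, key=lambda n:
                      hashlib.md5(f"{volume}:{n}".encode()).hexdigest())
    return {v: ranked(v)[:replicas] for v in volumes}

nodes = [f"node{i}" for i in range(1, 7)]
volumes = ["vol-a", "vol-b", "vol-c"]

current = placement_map(volumes, nodes)
# A node fails: the "future" map is the same computation over the
# survivors. With rendezvous hashing, only replicas that lived on
# the failed node move -- remediation stays minimal.
future = placement_map(volumes, [n for n in nodes if n != "node3"])

for v in volumes:
    moved = set(future[v]) - set(current[v])
    print(v, current[v], "->", future[v], "| moved:", moved or "none")
```

The point of keeping both maps around is that the system knows, before anything fails, exactly where each replica would land next.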
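
And for steps 5 and 6, a sketch of the closed loop between telemetry and the control plane. The metric names, the baseline format, and the 25% drift threshold are illustrative assumptions, not Datera’s actual thresholds.

```python
from statistics import median

def assess(local, fleet, tolerance=0.25):
    """Flag any local metric drifting more than `tolerance` from the
    anonymized fleet-wide median, so the control plane can react."""
    actions = []
    for metric, value in local.items():
        baseline = median(fleet[metric])
        if baseline and abs(value - baseline) / baseline > tolerance:
            actions.append(f"rebalance hint: {metric}={value} "
                           f"vs fleet median {baseline}")
    return actions

local = {"write_latency_us": 180, "node_util_pct": 62}
fleet = {"write_latency_us": [95, 110, 120], "node_util_pct": [55, 60, 70]}

print(assess(local, fleet))
# -> ['rebalance hint: write_latency_us=180 vs fleet median 110']
```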

Unlike legacy dual-controller scale-up arrays, Datera’s scale-out, policy- and application-driven architecture delivers significant innovation to enterprises, enabling a high-performing and continuously available data infrastructure.

Datera limits the impact and risk to performance during hardware failures, and lets you avoid costly downtime for upgrades, maintenance, and the most troublesome task for legacy arrays: migration. Simply continue to add heterogeneous media and server nodes to your Datera cluster to increase performance, capacity, and availability, with no additional software or appliances to purchase.

Read the full whitepaper: Datera – Built for Continuous Data Availability.