Beyond Storage – Automating Data

Dynamic and autonomous data infrastructure is now a reality

It’s not breaking news that the demand for data continues to grow exponentially. Businesses need ever more data to create value and compete; data velocity, availability, security, and cost all play a crucial role in their success.

Translating these requirements into the world of IT, especially for companies that operate at scale, becomes a massive challenge steeped in complexity!

We have the privilege to serve the needs of some of the largest and most demanding enterprises in the world and have a direct lens into the struggle that IT professionals go through to operate and manage data at vast scale.

First, IT professionals face the enormous challenge of architecting agile data infrastructures that can handle the requirements of the world they know today. Next, they have to be ready for unknown future requirements and rapidly adopt new storage technologies, which, if adopted in a timely and efficient manner, can deliver a positive impact on business outcomes.  And last but not least, they have to efficiently manage the life-cycle of data, from acquisition through hardware obsolescence.

Whether customers use traditional enterprise storage arrays or more flexible software-defined solutions, they still require a lot of planning and management to make them work cohesively and deliver on expectations. Mistakes are nearly unavoidable and can be incredibly costly to the business.


So the question is: is it possible to relieve IT from the burden of many of these planning and management efforts?

The outcome would be that IT could become more “data” agile and efficient, save significant costs, and be able to unlock critical IT skills that are currently trapped in managing the infrastructure so they can help create business value instead.

When considering the traditional approach of managing storage, the answer is NOT REALLY…

Legacy storage vendors continue to make storage products simpler and more scalable. Of course, these are very important efforts, but they are no longer enough.  Incrementalism has run its course, at least for larger enterprises!

It takes re-imagining how data could be managed to eliminate all of these complexities and achieve optimum data velocity, availability, and cost.  The public cloud has addressed many of the challenges just articulated: it is simple, scalable, and elastic.  However, the public cloud is not for everyone, nor for everything.  Cost, latency, security, and data governance are significant factors that make the cloud prohibitive for companies that operate at scale in many business sectors.

Creating an architecture to solve these issues is what Datera’s innovators set out to do, which required looking at the problem from a broader context.

I sometimes compare the thinking behind what Datera is delivering to the type of transformation we have all experienced moving from traditional cell phones to smartphones.

When the innovators behind the smartphone developed its operating system, they did not start with the idea of building a better cell phone. They imagined a new paradigm.  The phone suddenly became a platform that could integrate and radically simplify multiple functions, beyond simply making and receiving calls.

The Datera team followed similar thinking. They looked at the problem from a higher vantage point: how to help customers manage their data more efficiently, not just how to build a better storage product.  They tore down storage barriers through a radically simple idea: define storage by its data, driven automatically by applications.

The end result is a hybrid-cloud, software-defined platform for block and object data that enables a data infrastructure that is heterogeneous, dynamic, and autonomous.

This is achieved through three distinct layers:

  • A heterogeneous software defined storage layer for block and object data
    • This software-defined layer provides traditional storage functionality: high performance with low latency, high availability, scalability, security, and enterprise data services such as data efficiency and protection.
    • A unique capability in this layer is the ability to concurrently and transparently manage any validated class of media or server. This gives customers the flexibility to adopt new technologies on the fly and implement a heterogeneous hardware infrastructure that simultaneously optimizes both performance and cost, all automatically via policies.  Datera’s deep-insights platform provides additional visibility to aid that decision-making process.
    • Why it matters: Customers no longer need to compromise or deploy multiple products to optimize for performance or cost. Different classes of storage media, present and future, can be added and coexist in the same infrastructure seamlessly and transparently. Grow, mix, and match using any brand and server generation approved in Datera’s Hardware Compatibility List (HCL). Everything is easily and abstractly managed.
  • Data Management Layer – Making the infrastructure Dynamic

    • Data is placed on the appropriate storage media based upon policies. Should your requirements change, or new technologies become available that could benefit the business, a change in policy can be executed in minutes and the data will begin moving “live” to the new destination.
    • Why it matters: The most immediate and valuable benefit is that these capabilities eliminate all of the planning and effort associated with data migrations and hardware obsolescence. When servers reach end of life, they are simply decommissioned from the cluster and new servers are joined. Everything is executed simply and live. But it goes beyond that: companies can now cut the MONTHS spent preparing to add new workloads or change existing ones.  They can eliminate the waste of over-provisioning, take advantage of new media technologies rapidly, or, as the value of data changes, move data to more cost-effective destinations. All of this happens live and through simple policy changes, so the infrastructure can be used in the most efficient, optimized way.
  • Programmable Layer – Making the infrastructure Autonomous and easy to program

    • Every functionality of the solution is exposed through an API first model.
    • Customers define the needs of their data through simple policies, and the system automatically implements them on the heterogeneous infrastructure.
    • Automated provisioning against pre-defined policies enables self-service.
    • Why it matters: Intelligent algorithms work 24/7, maximizing performance, data security, and overall efficiency across hundreds of workloads. Critical IT skills are now able to focus on more value-creating efforts rather than constantly watching and managing the infrastructure.
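To make the API-first, policy-driven model concrete, here is a minimal sketch of what provisioning through policies could look like. The endpoint path, field names, and helper function are illustrative assumptions, not Datera's actual API schema:

```python
import json

# Hypothetical sketch of an API-first model: every capability is exposed
# over REST, so provisioning reduces to POSTing a policy document that
# states intent (capacity, resiliency, media class) rather than mechanics.
# All field names below are illustrative, not the real Datera schema.

def build_app_instance(name, size_gb, replicas, media_policy):
    """Assemble an application-instance request whose storage needs are
    expressed purely as policy; the platform decides placement."""
    return {
        "name": name,
        "storage_instances": [{
            "volumes": [{
                "name": f"{name}-vol-0",
                "size": size_gb,                   # capacity in GB
                "replica_count": replicas,         # resiliency policy
                "placement_policy": media_policy,  # e.g. "all-flash", "hybrid"
            }]
        }],
    }

payload = build_app_instance("oracle-prod", 500, 3, "all-flash")
print(json.dumps(payload, indent=2))
# A real deployment would POST this to the cluster's REST endpoint,
# e.g. requests.post("https://<cluster>/app_instances", json=payload, ...)
```

Because every function is reachable this way, the same payload can come from a self-service portal, an orchestration tool, or a CI pipeline.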

Bottom line:

–  Datera has gone well beyond just delivering a better storage platform. 

–  Datera’s passionate engineers have re-imagined the way data can be managed, eliminating most of the complexities that IT professionals have to deal with as they manage data at scale.

–  For the first time, IT professionals can architect a software-defined data infrastructure that is heterogeneous, dynamic, and autonomous, one that can run the most demanding applications and enables the business to be more agile, efficient, and cost effective.

Please listen to our accompanying podcast on this blog’s theme, as Datera leaders discuss the real-world advances from our data platform.

For more information, we recommend reading our white papers:

Built for Performance

Built for Constant Change

Built for Autonomous Operations

Built for Continuous Availability

We can schedule a demo at any time. Please reach us at sales@datera.io and share any specific capability you would like to learn more about. We look forward to the opportunity!

Where vSAN ends, Datera begins… Part II (and now we make it even easier!)

In our last blog we talked about how Datera helps customers that have outgrown the capabilities of vSAN and want to continue their software defined journey.

As environments grow larger, IT professionals are challenged with not only managing data at scale but being in position to rapidly respond to new requirements and adopt new technological advancements that can benefit the business.

It is in this paradigm that customers turn to the Datera heterogeneous software-defined platform to deploy a high-performance, dynamic, autonomous, and scalable data infrastructure.

As most Datera customers have deployed vSphere within their organizations, Datera has had tight integration with vSphere for many years. Recently we further enhanced our vCenter Plug-In (VCP) to include health checks, and additional functionality.

The goal is always simplicity, so that vCenter administrators can remain within familiar consoles, and provision powerful, programmatic and autonomous storage without needing to know anything about the underlying storage.

Datera is deployed in a scale-out configuration of industry standard x86 servers containing a heterogeneous mix of standard servers utilizing a variety of storage media types including persistent memory, NVMe flash, SATA flash, 3D XPoint memory and/or conventional HDDs.

The Datera VCP plugin is a user-friendly, browser-based tool that integrates with the VMware™ vSphere™ Web Client, providing an alternative interface for monitoring and managing the Datera software-defined platform.

The Datera VCP plugin is used to discover the Datera Data Services Platform, manage volumes, view status, and monitor the storage system environment. Users can monitor user activity, system activity, and alert messages, and ensure the system is configured according to Datera’s best practices for a VMware™ environment, all without leaving the familiar vCenter user interface.

In this video (please click the image below), Kurt Hutchinson, one of our Sr. Solutions Engineers, walks you through how easy it is to provision and deploy Datera directly via the vSphere Client, without ever needing to enter the Datera GUI. We encourage you to watch, as he clearly shows how simple it is for those with little storage knowledge to provision intelligent block storage at massive scale!

For more information, we recommend reading our white papers:

Built for Performance

Built for Constant Change

Built for Autonomous Operations

Built for Continuous Availability

We can schedule a demo at any time. Please reach us at sales@datera.io and share any specific capability you would like to learn more about. We look forward to the opportunity!

Where vSAN Ends, Datera Begins… Part I

We recently got together to record a podcast with Bill Borsari, who heads up our Systems Engineering team, and he gave us the straight talk, directly from our customers, on where vSAN works perfectly for them, and where the greater performance, scale, flexibility, and consolidation capabilities of Datera and enterprise software-defined storage (SDS) take over. Bill has vast industry experience and, more importantly, is deeply involved in architecting every Datera customer solution. I encourage you to take a listen; please click the image.

We “Co-Exist” alongside some of the most iconic names in the data storage kingdom, quite often in data centers built by the largest and most innovative companies in the world. This provides a very unique venue and perspective for Datera.

vSAN was originally designed by VMware to allow smaller companies to quickly deploy a virtual “SAN” without the need to buy proprietary gear from the likes of HDS, EMC, NetApp, or IBM. From these humble origins, vSphere customers large and small now flock to the technology, driven by the siren songs of reduced complexity/cost and more flexibility, while expecting all the performance, availability, and storage services delivered by the traditional, controller-based storage arrays that have ruled the back end of the datacenter for decades. For smaller clusters and highly predictable workloads, it meets their needs.

What we hear from vSAN customers is that they have built a wide assortment of specifically crafted vSAN clusters through the years. One day they realize they are managing 20-40 loosely federated clusters and are still stuck with 3-5 year migration cycles. The word we hear most often is “outgrown.”

vSAN customers move to Datera to regain the performance, flexibility, consolidation, and disaggregation of storage that only a massively scalable, software-defined data platform like Datera offers. We were architected from the start to deliver on the most demanding use cases and performant workloads.

The Takeaway

vSAN mates well with VMware customers, but only to a point. As workloads become less predictable and more demanding, the architectural limits of vSAN can induce a managerial mess of multiple clusters. And the simplicity that lets you rapidly deploy a restricted set of low-cost servers can lock you into the same 3-5 year migration cycles that plague traditional storage arrays.

With SDS, and specifically with the Datera Data Services Platform, you may never need to migrate again. Add new generations of compute and storage media, even mix and match different vendors within the same cluster, all to continuously adapt to the rapidly changing business and application needs of your internal and external customers.

For more information, we recommend reading our white papers:

Built for Performance

Built for Constant Change

Built for Autonomous Operations

Built for Continuous Availability

We can schedule a demo at any time. Please reach us at sales@datera.io and share any specific capability you would like to learn more about. We look forward to the opportunity!

 

Demonstrating a Dynamic Software Defined Data Infrastructure

Software-defined is the new horizon for everything in the data center, enabling IT professionals to implement architectures that deliver a more agile infrastructure. These new infrastructures can rapidly respond to unpredictable requirements, as well as create a more operationally efficient data center where standard hardware components and automation become fundamental to achieving the goals.

Yes, this sounds like a perfect argument for the public cloud, but as we have learned from supporting the largest enterprises in the world, the public cloud is not for everyone, or at least not for everything.  Hence the need for a hybrid-cloud architectural model, which gives customers the flexibility to decide when and what is most suited to each location.

When it comes to data, we are undergoing a major architectural transition. Scale-out software-defined storage (SDS), in this specific case, block storage, has continued to mature and can demonstrate the ability to deliver the performance and reliability of traditional enterprise class arrays.  Enterprises have taken notice, resulting in a significant shift away from such traditional products in favor of this more modern alternative.

Datera’s customer requirements are fairly consistent.  At the highest level, customers want the simplicity and elasticity of the public cloud on premises!  And they want that experience at a lower cost, with higher performance, better security, more buying power, and so on.

Today’s business environment is extremely dynamic and new applications are constantly emerging.  Velocity has become paramount to enable better competitiveness and economic advantage.

In this new paradigm, especially for customers that operate at scale, planning scenarios using traditional enterprise products are no longer feasible. Customers need the elasticity to easily scale both capacity and performance over time to react to any new situation.  And of course, businesses want to be able to automate their data infrastructure while eliminating burdensome data migration efforts due to proprietary hardware obsolescence.  SDS removes hardware vendor ‘lock-ins’ by accommodating a wide range of standard server products.

The value proposition is pretty straightforward, and we will see more vendors shifting their offerings to provide these capabilities.

Datera’s unique “dynamic architecture” is a critical advantage over existing SDS offerings.  Datera enables customers to create an infrastructure that is inherently heterogeneous by design.  In this model, customers can architect a data infrastructure that becomes timeless: capable of seamlessly and concurrently managing heterogeneity across server manufacturers, generations, and media types, sizes, and classes.

The value of a dynamic data infrastructure that is autonomously managed is reflected in two scenarios:

  1. Principle one: Not all data has the same value, and the value of that data changes over time.  With this in mind, to maximize economics, it becomes valuable for customers to be able to architect the infrastructure with different classes of media as defined through simple policy settings. When a situation changes and data would be better suited for a different class of media, a simple policy change will move data “live” to a new destination.  In essence, the applications are allowed to ‘float above the hardware’ and are not tied nor constrained by it. Outcome: Customers can broadly consolidate their data infrastructure, ideally optimizing hardware investments across their most valuable data sets at any point in time.
  2. Principle two: The only constant is change.  Requirements and technologies ALWAYS change. New requirements may require new innovations in media technology.  In the Datera paradigm, if a customer begins with a cluster of servers using SATA technology but then determines new requirements would benefit from using either NVMe or Intel Optane™ storage classes, they can easily adjust on the fly. The customer can order a couple of servers with new media, and Datera will literally take only minutes to adopt the new technology.  With just a few clicks, the new servers can be joined to the existing cluster.  Then, with a simple policy change, the system will automatically move the workload “live” to the new server!  Yes, magic! Outcome: Customers have the broadest flexibility to adopt any new server or media technology within a running cluster to deliver best in class business agility and economics.
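The two principles above can be sketched in a few lines. This is an illustrative model of the behavior described (a policy edit triggering a live move), not Datera's actual interface; the class and method names are invented for the example:

```python
# Illustrative sketch (not the real Datera API) of principle two: adopting
# a new media class is just a policy edit, and the platform then relocates
# the data live, with no application downtime.

class Volume:
    def __init__(self, name, placement):
        self.name = name
        self.placement = placement  # current media-class policy

    def set_placement(self, new_placement):
        """Changing the policy is the entire user action; a background
        mover would then migrate the data to matching nodes live."""
        old, self.placement = self.placement, new_placement
        return f"{self.name}: migrating live from {old} to {new_placement}"

vol = Volume("analytics-db", "hybrid")   # started life on hybrid/SATA nodes
# New NVMe servers joined the cluster; one policy change triggers the move.
print(vol.set_placement("all-nvme"))
```

The point of the sketch is the shape of the operation: the application keeps floating above the hardware, and only the policy, not the workload, is touched.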

Heiko Wuest, our Sr. Systems Engineer in Germany, will show a small cluster, deployed in the HPE lab in Geneva, architected with three classes of media technologies, consisting of hybrid/SATA and NVMe nodes.

Now to demonstrate all of this, please click the image:

In the video, you can see how easy Datera makes it for customers to allocate their workloads to the media technology that makes the most sense for them via policies, and how easily they can change the allocation of data “live” with policy changes.

Bottom line: 

SDS architectures will continue to gain momentum because they enable customers to become more agile and efficient while operating at scale. At Datera, we believe that customers need more than just the ability to scale out on commodity hardware.  Customers need to be able to move data simply, dynamically, and non-disruptively in order to compete in an ever-changing, ever cost-conscious business environment.  Datera’s autonomous platform lets businesses drive on-demand decisions based on their data’s needs rather than on rigid infrastructure constraints.  You can achieve business and operational agility without sacrificing economics.

For more information, we recommend reading our white papers:

Built for Performance

Built for Constant Change

Built for Autonomous Operations

Built for Continuous Availability

We can schedule a demo at any time. Please reach us at sales@datera.io and share any specific capability you would like to learn more about. We look forward to the opportunity!

Does a Future Proof Storage Platform Exist?

They say that “nothing is more certain than death and taxes.” And if you are in enterprise IT you know that nothing is more certain than technology obsolescence.  Storage media has rapidly evolved from HDD, to Hybrid Flash, to All Flash, to NVMe, and to Optane. Who knows what will be next. When Datera was founded, the primary goal was to design an architecture that is “built for change,” that eliminates painful forklift upgrades and data migrations, that is elastic, and that organically grows to accommodate customer needs and technology trends.

Datera is a software-defined storage platform built with a unique scale-out architecture that enables rapid and flexible adoption of the latest industry-standard server and storage media technologies. Datera autonomously delivers a cloud-like user experience, business model, and agility at enterprise scale, with linear performance and continuous availability. Datera has been a game-changing technology for some of the largest enterprises in the world. When traditional “scale-out storage” was not enough to meet enterprises’ evolving requirements and their need to rapidly adopt new technologies, we innovated to make the architecture scalable, autonomous, dynamic, and future proof. Read on to learn how.

1 | True Software Defined for Ultimate Flexibility

Storage solutions are generally “software-based” and run on some kind of hardware platform. What differentiates Datera is that the software runs on a broad range of industry standard servers accommodating different storage media types and network ports. The software currently supports HPE, Fujitsu, Dell, Intel, Supermicro, and Quanta server platforms.

Moreover, the software is optimized for the latest storage media technologies including SAS/SATA SSD, NVMe/Optane and Hybrid Flash as well as high performance NICs. Datera will even allow servers with different media types from multiple vendors to be composed into a single cluster to deliver a broad range of storage services for a variety of application price/performance needs. Want to mix and match Gen 9 and Gen 10 from the same vendor? That works as well. This ultimate flexibility allows your storage infrastructure to organically evolve to meet future needs while avoiding vendor lock-in.

2 | Dynamic Data Fabric for Continuous Availability

The foundation of any scale-out storage system is a network of nodes that distributes data for scale and resiliency. What differentiates Datera is that the data fabric dynamically re-distributes data without requiring downtime or service disruption. A full mesh rebuild process enables fast and transparent data recovery, server maintenance, rolling OS upgrades, and workload re-balancing. The more nodes in a cluster, the larger the swarm, the greater the durability, the faster the process, and the lower the system impact. Combined with native, advanced data services like snapshots, failure domains, stretched clusters, and cloud backup, the system provides continuous availability with protection from operator errors or even site failures.
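The "bigger swarm, faster rebuild" claim follows from simple arithmetic: if every surviving node contributes a bounded slice of rebuild bandwidth, aggregate throughput grows with cluster size. The numbers below are assumed for illustration, not measured Datera figures:

```python
# Back-of-the-envelope illustration of why a full mesh rebuild speeds up
# as the cluster grows: a failed node's data is re-replicated by all
# survivors in parallel. Capacity and bandwidth figures are assumptions.

def rebuild_hours(node_capacity_tb, nodes, per_node_rebuild_mbps=200):
    """Estimate rebuild time when (nodes - 1) survivors each contribute
    a bounded rebuild bandwidth, so aggregate throughput scales out."""
    aggregate_mbps = per_node_rebuild_mbps * (nodes - 1)
    total_mb = node_capacity_tb * 1024 * 1024
    return total_mb / aggregate_mbps / 3600

for n in (4, 8, 16, 32):
    print(f"{n:2d} nodes: ~{rebuild_hours(50, n):.1f} h to rebuild a 50 TB node")
```

With these assumptions, going from a 4-node to a 32-node cluster cuts the estimated rebuild window by roughly an order of magnitude, which is the durability argument for larger swarms.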

3 | Optimized IO Stack for Linear Performance at Scale

Every solution will claim that it “scales.” But “scale out” is no longer enough in your rapidly evolving world of changing requirements and new technologies. Can you scale rapidly, and will the system autonomously move workloads based upon new resources?

The Achilles heel of most distributed storage architectures is their use of lock mechanisms to manage coherent access to data, which becomes a significant performance bottleneck as the system scales out. Datera avoids this problem entirely with a patented Lockless Coherency Protocol. Combined with a bare-metal form factor, sophisticated multi-tiered caching, write coalescing, QoS, and other optimizations, the system delivers linear performance (sub-200µs latencies) at scale, along with other benefits like extended flash media endurance.
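Datera's actual Lockless Coherency Protocol is patented and not described here, but the general idea behind lock-free coherency can be illustrated with optimistic, versioned writes: writers carry the version they read and retry on conflict, instead of serializing every access behind a lock. This is a generic sketch of that technique, not Datera's implementation:

```python
# Generic illustration of lock-free coherency via optimistic concurrency:
# no writer ever holds a lock, so readers and non-conflicting writers
# never stall; a conflicting writer simply re-reads and retries.

class VersionedBlock:
    def __init__(self, data=b""):
        self.version = 0
        self.data = data

    def read(self):
        return self.version, self.data

    def compare_and_swap(self, expected_version, new_data):
        """Commit only if no other writer won the race since our read."""
        if self.version != expected_version:
            return False  # lost the race: caller re-reads and retries
        self.version += 1
        self.data = new_data
        return True

blk = VersionedBlock(b"old")
v, _ = blk.read()
print(blk.compare_and_swap(v, b"new"))    # first writer wins
print(blk.compare_and_swap(v, b"stale"))  # stale writer must retry
```

Under low contention the retry path is rarely taken, which is why optimistic schemes avoid the scale-out bottleneck that pessimistic locking creates.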

Our early architectural design goal was to deliver a dynamic and autonomous scale out software defined solution. Anyone can deliver a scale out solution, but without the right architecture it will be rigid, and inefficient.

4 | Intelligent Management at Cloud Scale

Storage management solutions have recently begun emulating familiar public cloud solutions with web-based REST APIs and graphical user interfaces. What differentiates Datera is the use of sophisticated application templates to precisely define and control storage provisioning policies such as media type, data resiliency, and snapshot schedules. These templates and policies fully expose the flexibility of the Datera platform. For example, a reusable template can be created for tenants that specifies different copies combining All Flash and Hybrid Flash media types to provide both performance and resiliency. Combined with cloud orchestration plug-ins and data analytics, Datera delivers a cloud-like user experience for provisioning storage with velocity and performance at scale.
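As a sketch of the reusable-template idea described above, here is an illustrative template combining an All Flash copy and a Hybrid Flash copy plus a snapshot schedule. The field names and helper are hypothetical, not Datera's actual template schema:

```python
# Hypothetical application template (illustrative field names, not
# Datera's real schema): one template captures media type, resiliency,
# and snapshot policy, and every volume stamped from it inherits them.

ORACLE_TEMPLATE = {
    "name": "oracle-tier1",
    "copies": [
        {"media": "all-flash", "replica_count": 2},  # performance copy
        {"media": "hybrid",    "replica_count": 1},  # resilient, lower-cost copy
    ],
    "snapshot_policy": {"interval": "1h", "retain": 24},
}

def provision_from_template(template, volume_name, size_gb):
    """Stamp out a volume that inherits every policy in the template."""
    inherited = {k: v for k, v in template.items() if k != "name"}
    return {"name": volume_name, "size": size_gb, **inherited}

vol = provision_from_template(ORACLE_TEMPLATE, "tenant-a-db01", 250)
print(vol["snapshot_policy"])  # inherited from the template
```

A tenant self-servicing hundreds of such volumes never restates the policies; changing the template changes the intent in one place.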

5 | Continuous Self-optimization for Simple Operations

Managing a storage service can be challenging: there are competing goals of improving both resource utilization and performance, and of meeting application service levels while containing operational complexity and cost. Datera is once again unique in that application behavior and storage resources are continuously monitored and intelligently optimized without user intervention. When storage nodes or applications are added to or removed from a cluster, resources are auto-discovered and the system is load balanced to optimize resource utilization and meet the most demanding service levels, making forklift upgrades a thing of the past. Combined with the efficiency of Datera’s full mesh rebuild process, the system is continuously optimized without service disruption or user intervention, enabling simple operations.

So, does a future proof storage platform exist?

Of course, the answer is “yes.” For example, you could begin with a simple 4-6 node Datera cluster today and continuously update and extend it with new server/CPU and media technologies for 10-25 years, always enhancing resilience and performance while maintaining the same UUID! If you have struggled with forklift migrations and upgrades every 3-5 years, you no longer need to!