Software-Defined Storage Does Not Pass Go
Without Advanced Automation

The world will be creating 163 zettabytes of data a year by 2025, according to IDC. Switching from blocks to kilometers for a minute, one measurement shows that “if each Terabyte in a Zettabyte were a kilometer, it would be equivalent to 1,300 round trips to the moon and back (768,800 kilometers).” Yes, the deluge of data traffic is here to stay as the variety, velocity and value of company data continue unabated. And we need our data to be available anywhere, anytime. The cloud has been proliferating through the digital enterprise, changing the way it works and keeping our data on hand. But now, for the first time, enterprise Software-Defined Storage (SDS) is here to automate data operations and plumb the Software-Defined Data Center (SDDC).

When it comes to the massive data growth they are seeing, IDC concluded in a recent survey of IT shops that artificial intelligence (AI) for IT is “not optional” and is essential to meeting key business goals. Nearly 79% of the companies they spoke with are actively evaluating AI or in the process of rolling it out in some form. Many IT organizations are not keeping up with automation, and shouldering the burden of developing new apps and handling the data deluge without a plan to move to enterprise SDS with AI and orchestration capabilities will lead to a hard… BOOM! …ouch, that smarts… fall.

The Software Defined Revolution Comes to Enterprise Storage

When conditions and concepts undergo a significant change, you sometimes have to come up with an extraordinary deed or transformation to move forward. Amazon changed how we shop. Tesla is changing how we do or don’t drive cars. And in the enterprise, the way we structure our data centers is being re-imagined for the cloud. The software-defined data center (SDDC) extends virtualization concepts such as abstraction, pooling, and automation to all data center resources. Seeing the need to orchestrate data everywhere, across private and public clouds, inspired Datera to rethink how to do data management and storage at scale.

The SDDC moves data management from a “storage-centric” to a more “service-centric” operational model, dynamically adapting resources to each individual application. Datera’s software-defined storage platform runs on industry-standard servers and is designed for scalable on-premises and hybrid cloud deployments. It can be delivered on your choice of leading servers, pre-configured, loaded and ready to rack for rapid deployment. Running high-performance workloads in the enterprise is a must: Datera’s scale-out block and object storage delivers sub-200µs latencies alongside cost-sensitive archive storage, using different storage device types simultaneously. Driven by data orchestration policies and machine learning that continuously optimizes the data environment, Datera’s enterprise SDS platform for Tier 1 workloads offers a strong foundation for the AI-defined datacenter. SHAZAM!

Advanced Automation:
Removing Repetitive Tasks and Adding Automatic Learning

Of course, just because IDC tells us it’s essential doesn’t automatically make it so. But IDC’s surveys of Global 2000 IT shops detail why automation is essential: the modern data center is tasked with managing a greater mixture of workloads, compute frameworks, I/O demands, and tenants across a broader range of business types than ever, and it is increasingly the core of the enterprise itself.

The SDDC must keep the apps and data from going “POOF” while executing the litany of mundane data tasks, like manual migrations, upgrades and expansions. Doing so enables stretched IT staffs to focus on higher-value initiatives that grow the business and build for new levels of scale. Automation helps address these issues and brings efficiency, simplicity and sanity to an environment of constant change.

Here’s how Datera’s scale-out architecture, use of storage classes, data orchestration, policies and machine learning provide IT with the advanced automation needed to meet the challenges of the modern data center:

  • Scale-out not scale-up

    Datera’s automated SDS scales to meet the requirements of any capacity demands while delivering enterprise performance. By using a scale-out architecture where storage nodes can be added and removed on-the-fly, with no downtime or manual efforts, Datera avoids the scalability issues and forklift upgrades of legacy disk arrays.

    Datera automatically discovers any new storage nodes added to the storage network, along with the storage devices within them, and knows how to best take advantage of each without manual effort. You can introduce new hardware quickly, with no downtime or interruptions to ongoing operations, and still have your weekend. Here’s a video on what it looks like to Deploy 9 Nodes in 9 Minutes.

  • Delivering Classes of Storage with QoS

    Datera’s SDS utilizes a wide variety of storage device types simultaneously, be they HDDs, SATA flash, NVMe flash, persistent memory such as Intel Optane or Samsung Z-SSD, or new devices. It does not require separate pools of similar storage devices, which create storage silos. Eliminating storage silos optimizes costs, as workloads and datasets can be automatically placed onto the best tiers of price/performance from a single storage system, each with true QoS. Providing multiple tiers of price/performance from a single storage system requires advanced automation, including data placement, relocation, and self-balancing within the storage system to hide complexities. Here’s a quick video demo on Managing Volume Performance and Quality of Service with Datera.
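    To make the idea concrete, here is a minimal Python sketch of how tiered storage classes with QoS targets might be modeled and matched to a workload. The class names, fields, and numbers are illustrative assumptions, not Datera’s actual schema:

```python
# Hypothetical storage-class catalog, ordered from most to least expensive.
# Media types, latency targets, and IOPS limits are made-up examples.
STORAGE_CLASSES = {
    "platinum": {"media": "nvme_flash", "max_latency_us": 200,    "iops_limit": 100_000},
    "gold":     {"media": "sata_flash", "max_latency_us": 500,    "iops_limit": 50_000},
    "silver":   {"media": "hybrid",     "max_latency_us": 2_000,  "iops_limit": 20_000},
    "bronze":   {"media": "hdd",        "max_latency_us": 10_000, "iops_limit": 5_000},
}

def cheapest_class(required_iops: int, max_latency_us: int) -> str:
    """Pick the lowest (cheapest) tier that still meets the QoS targets.

    Scans from the bottom of the catalog upward so the first match is
    the most cost-effective placement."""
    for name in reversed(list(STORAGE_CLASSES)):
        spec = STORAGE_CLASSES[name]
        if spec["iops_limit"] >= required_iops and spec["max_latency_us"] <= max_latency_us:
            return name
    raise ValueError("no storage class satisfies the requested QoS")
```

    For example, a workload needing 30,000 IOPS at under 1 ms latency would land on gold rather than platinum, honoring QoS while keeping costs down.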

  • Automatically Balance Performance & Capacities

    Storage devices are spread across storage nodes that can be different models or generations and come from various vendors. Storage nodes can be dynamically added and removed. Hardware can fail and workloads can demand dynamic changes. Datera’s advanced data orchestration runs automatically within the storage system to handle all these variables. If you or the application want to change performance or cost, you can ask for the volume to be placed onto higher-performance or lower-cost media simply by changing the policy. The system will move the volume with no downtime required, providing automatic balance for performance and capacity.

    Datera SDS includes a unique policy for “keep one copy on flash” to ensure that at least one of the three copies of the volume is on a flash storage device. Reads are directed there to deliver fast performance, while other replicas of the volume can live on cost-effective HDDs for resiliency. Performance is very predictable with this approach, versus trying to cache the most frequently used blocks in flash, only to have other workloads eject cached data. When hardware fails, Datera uses all storage nodes working in parallel to perform necessary repairs quickly, while meeting QoS goals for all volumes.
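    The read-routing side of this policy can be expressed in a few lines of Python. This is an illustrative sketch of the assumed behavior, not Datera’s actual code: placement keeps one replica on flash, and reads prefer it.

```python
from dataclasses import dataclass

@dataclass
class Replica:
    node: str
    media: str  # "flash" or "hdd"

def pick_read_replica(replicas):
    """Direct reads to a flash replica when one exists; otherwise
    fall back to any surviving replica (e.g. after a flash-node failure)."""
    for r in replicas:
        if r.media == "flash":
            return r
    return replicas[0]

# One copy on flash for fast reads, two on HDDs for cost-effective resiliency.
volume = [Replica("node-a", "flash"), Replica("node-b", "hdd"), Replica("node-c", "hdd")]
```

    Because reads always land on the flash copy rather than a cache that other workloads can evict, latency stays predictable.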

  • Machine Learning Predicts and Averts Problems

    There is just too much going on in a scale-out storage system for it to be manually tuned, particularly at the scale of a large enterprise. Datera uses applied Machine Learning (ML), an application of AI, to weigh hundreds of continually collected metrics in order to understand the implications of a change. ML is also used to optimize the cost, performance, and availability of the storage system amid constantly changing conditions, so all volumes deliver on their contracted policy with QoS. Eliminating manual tuning saves considerable time and money: OpEx is commonly reduced by 70%, freeing IT staff to work on strategic initiatives.

    As demands on the storage system creep up each day, Datera ML will rebalance the storage system, and it can anticipate that in 30 days rebalancing will no longer be enough. The administrator will receive an email from the system telling them that additional storage nodes are required to avert the future bottleneck and its impact on performance and availability.
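    The kind of forecasting described above can be illustrated with a toy Python example. A simple least-squares trend stands in for the production ML here, purely as an assumption for illustration: fit a line to daily capacity samples and flag the day projected usage would exceed capacity.

```python
def days_until_full(daily_used_tb, capacity_tb, horizon_days=30):
    """Fit a least-squares line to daily capacity-used samples and return
    the day offset (from the last sample) when projected usage exceeds
    capacity, or None if it stays under capacity for the whole horizon."""
    n = len(daily_used_tb)
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_y = sum(daily_used_tb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, daily_used_tb)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    for day in range(1, horizon_days + 1):
        # Project the trend 'day' days past the last sample.
        if intercept + slope * (n - 1 + day) > capacity_tb:
            return day
    return None
```

    A cluster growing 1 TB a day with 6 TB of headroom would be flagged a week out, with time to order and rack new nodes before the bottleneck hits.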

  • Eliminate Manual Data Migrations

    Data migrations are an unpleasant reality in the world of the storage administrator. They happen all the time between different storage devices, storage nodes and sites. Taking the data offline while it is being manually migrated from one place to another could take hours or days, and is no longer acceptable in our 24×7, always-on world.

    When you ask for a volume from Datera, you request a level of performance, be it platinum, gold, silver, or bronze. Datera delivers the volume and when writes are performed, the data is placed on the appropriate type of storage device, be it persistent memory, NVMe flash, SATA flash or good old HDDs. Each delivers a different level of performance at a corresponding price point. Rather than manually moving the data on a volume to a different class of storage device, or a different disk array, Datera allows you to change the policy for the volume and trigger it to be migrated while it is being used.

    As an example, perhaps for database acceleration, a platinum volume of 3TB is requested. That’s going to be big enough to hold the database, and fast enough to ensure the end-of-month reconciliation and close can happen in 6 hours. Once the end-of-month event has passed, the user can request that the volume be moved from platinum to silver, simply by changing the policy via an API call. Datera will move the volume automatically and in an orderly fashion while it is being used. This eliminates downtime, maximizes agility, and satisfies workload requirements over time. “POOF” goes the pain of manually dealing with your data environment.
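    A sketch of what such a policy-change request might look like is below. The endpoint path and field names are hypothetical stand-ins, not the actual API, so consult the real API reference before use:

```python
import json

def build_policy_change(volume_id: str, new_class: str) -> tuple[str, str]:
    """Return the (URL, JSON body) for a request that retiers a volume
    in place; the storage system migrates the data live, no downtime.

    The "/v2/app_instances/..." path and "storage_class" field are
    illustrative assumptions, not a documented endpoint."""
    allowed = {"platinum", "gold", "silver", "bronze"}
    if new_class not in allowed:
        raise ValueError(f"unknown storage class: {new_class}")
    url = f"/v2/app_instances/{volume_id}"
    body = json.dumps({"storage_class": new_class})
    return url, body
```

    One PUT with a one-field body is the whole operation; everything else, data movement, rebalancing, QoS enforcement, happens automatically behind the policy.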

The SDS + AI Payoff

The traditional data center with stand-alone storage arrays is struggling under the weight of the wide variety of workloads we have today. That is reason enough to move to software-defined storage that can manage these workloads with their individual performance and capacity demands and provide automated delivery of storage services.

But consider the potential impact on your daily life in the data center. Fifty-eight percent of IDC’s survey participants said that IT was the area where AI would be most beneficial, and that the biggest benefit would be getting IT staffs out of low-level tasks so they can be redeployed to new apps.

Datera’s Enterprise SDS platform automates manual, error-prone storage administrative tasks, while automatically moving data between systems using different storage technologies to meet predefined performance and reliability policies. The policy-based automation and machine-learning capabilities of Datera execute reads and writes of data to the best locations, nodes and media while continuously optimizing the system.

However, the real payoff isn’t how cool Datera software is, or how much automation and AI it employs. The real payoff is improving workplace productivity and helping people get their jobs done efficiently. POW!!!

Get a Consultation

Discover how you can take advantage of Datera enterprise software-defined storage with advanced automation: Contact us to schedule a free consultation.