The future of storage is a Software Defined Model

By Evan Powell, CTO, Nexenta.


Imagine a data centre that dynamically responds to the needs of your enterprise; a data centre that “sees” that, for example, you need more storage and compute for your desktops and, on demand, adjusts to meet this and other demanding computing loads without a lot of wizardry and bespoke engineering.


What you’re imagining is what I call a software-defined data centre. It promises flexibility, end-to-end flow-through provisioning and management, application awareness to deploy applications on demand, and the ability to access, also on demand, appropriate resources from the data centre.


The goal is to fix IT once and for all by tying it directly to the productivity delivered by new applications. This promises a new age in IT, one in which innovative solutions are adopted faster, finally allowing the benefits of Moore’s law to flow through to enterprises. Software-defined data centres will fix one of the great quandaries in IT, and in our economies more broadly: while Intel doubles the productivity, or price performance, of its chips every 18 months, the average enterprise sees its productivity increase by no more than 10-15% per year, and the productivity of developed economies grows at perhaps 2-3%. If we can get the benefits of Moore’s law to flow through, we will have not just improved IT; we will have accelerated the growth in wealth and standards of living for us all.


The part of the software-defined data centre in which I can claim real expertise is software-defined storage (SDS), because I am CEO of a company whose whole mission is to make software-defined storage a reality. But before I start talking about what SDS is, I’m going to tell you what it isn’t, because we are already seeing the enormous marketing budgets of legacy storage providers adopt and exploit the notion of SDS, putting at risk any clarity about a fundamentally transformative way of delivering information technology. So here are a few observations:


SDS is not storage hardware driven by APIs. Nor is it storage software that only works with one vendor’s hardware. Just because a vendor can drive storage hardware via software doesn’t mean its management user interface or multi-system management capabilities qualify as SDS.
SDS is not something sold by a legacy storage hardware vendor. A hardware company might start calling itself a software company because it has hired some software developers, but that doesn’t make it one.
SDS needs to provide a consistent storage solution across virtual and physical infrastructures alike, so it is not something that only works as a virtual storage appliance. Further, SDS cannot sacrifice fundamental data storage capabilities because it has to be enterprise-class.


So what is SDS? There are two main requirements: 1) it is an approach to data storage and management that abstracts the underlying hardware, enabling the flexibility and dynamism promised by virtualisation to be fully achieved and extended into the management of storage; and 2) the storage must be defined by software, meaning it can respond on demand to the requirements of the data centre.


SDS has to be an open system that extends from on-disk formats through APIs and business models. It has to be widely available, so it probably has to be open source or freemium to achieve widespread distribution, and it has to work with all the major protocols, serving block, file and object alike.
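
To make the block/file/object point concrete, here is a minimal sketch of one software-managed pool serving all three access methods. The pool abstraction and every class and method name below are hypothetical illustrations, not Nexenta’s or any vendor’s actual API:

```python
# Hypothetical sketch: one software-defined pool exposed as block, file and object.

class StoragePool:
    """A pool of commodity disks managed entirely in software."""

    def __init__(self, name, devices):
        self.name = name
        self.devices = devices          # e.g. ["/dev/sdb", "/dev/sdc"]
        self.exports = []

    def export_block(self, size_gb, protocol="iSCSI"):
        """Carve out a block volume (e.g. an iSCSI target)."""
        export = {"type": "block", "protocol": protocol, "size_gb": size_gb}
        self.exports.append(export)
        return export

    def export_file(self, share_name, protocol="NFS"):
        """Expose a file share over NFS or CIFS/SMB."""
        export = {"type": "file", "protocol": protocol, "share": share_name}
        self.exports.append(export)
        return export

    def export_object(self, bucket_name):
        """Expose an object namespace (S3/Swift-style)."""
        export = {"type": "object", "bucket": bucket_name}
        self.exports.append(export)
        return export


# One pool, three access methods, no protocol-specific hardware:
pool = StoragePool("tenant-a", ["/dev/sdb", "/dev/sdc", "/dev/sdd"])
pool.export_block(size_gb=500)                   # block for the hypervisor
pool.export_file("home-dirs", protocol="CIFS")   # file for the desktops
pool.export_object("backup-archive")             # object for backups
```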


Abstraction, the separation of the data from the control layers, is important. Everything should be delivered as software that can, as needed, define the attributes of the storage in the server or on the JBOD on the fly. And when I say everything, I mean RAID, HA, replication, NFS, CIFS and other protocols.
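
As a rough illustration of what “defining the storage in software” can look like, here is a hedged sketch in which the protection and access attributes live in a declarative policy that a hypothetical controller applies to whatever commodity server or JBOD holds the disks; all of the names below are invented for illustration:

```python
# Hypothetical sketch: RAID, HA, replication and protocol exports defined as a
# software policy and applied to a generic box, rather than baked into hardware.

desired_state = {
    "pool": "vdi-pool",
    "raid": "mirror",                 # redundancy chosen in software
    "high_availability": True,        # failover pairing, not proprietary hardware
    "replication": {
        "target": "dr-site-pool",
        "schedule": "every 15 minutes",
    },
    "exports": [
        {"protocol": "NFS",  "share": "vdi-datastore"},
        {"protocol": "CIFS", "share": "user-profiles"},
    ],
}


def apply(state, hardware):
    """Reconcile a commodity server or JBOD toward the desired state.

    'hardware' is simply the list of disks the controller found; every
    attribute below is set in software, so the same policy can be re-applied
    to a different box as demand changes.
    """
    print(f"Building {state['raid']} pool '{state['pool']}' on {hardware}")
    if state["high_availability"]:
        print("Pairing with a standby head for transparent failover")
    repl = state.get("replication")
    if repl:
        print(f"Replicating to {repl['target']} {repl['schedule']}")
    for export in state["exports"]:
        print(f"Exporting {export['share']} over {export['protocol']}")


apply(desired_state, hardware=["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"])
```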


For a product to qualify as SDS, it must be possible to inherit service-level agreement (SLA) requirements from the compute level or from the overall data centre business logic. VMware, CloudStack and OpenStack are all heading in the same direction in terms of being able to pass requirements from the application provisioning and management layer through the entire stack, including storage.
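
A simplified sketch of that flow-through follows. The structures are hypothetical and are not the actual VMware, CloudStack or OpenStack interfaces, but they show the idea: the application layer states its SLA, and the storage layer derives its own configuration from that statement rather than from a manually chosen array model:

```python
# Hypothetical sketch: deriving a storage policy from an application-level SLA.

application_sla = {
    "app": "virtual-desktops",
    "min_iops": 20000,
    "max_latency_ms": 5,
    "availability": "99.99%",
    "recovery_point_minutes": 15,
}


def storage_policy_from_sla(sla):
    """Translate the SLA inherited from the compute layer into storage attributes."""
    return {
        "media": "ssd" if sla["max_latency_ms"] <= 10 else "hdd",
        "raid": "mirror" if sla["availability"].startswith("99.99") else "raidz",
        "replication_interval_min": sla["recovery_point_minutes"],
        "qos_min_iops": sla["min_iops"],
    }


print(storage_policy_from_sla(application_sla))
# {'media': 'ssd', 'raid': 'mirror', 'replication_interval_min': 15,
#  'qos_min_iops': 20000}
```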


It must also be possible to run storage protection, replication, cloning and other capabilities co-resident on compute boxes for certain use cases. Ironically, this may mean decoupling aspects of that logic from the storage box itself and letting the storage deliver those services wherever they are needed, instantiated on the physical device or rack as demand dictates.
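
One way to picture this is a placement decision made in software. The scheduler below is a hypothetical sketch, not a shipping product, but it shows a storage service landing co-resident with compute when the use case calls for it, and falling back to a dedicated storage node otherwise:

```python
# Hypothetical sketch: choosing where a storage service (replication, cloning,
# protection) should run, based on the use case and the available headroom.

def place_service(service, compute_nodes, storage_nodes):
    """Pick a host for a storage service."""
    if service.get("latency_sensitive"):
        # Prefer running next to the workload if a compute node has headroom.
        for node in compute_nodes:
            if node["cpu_free"] > 0.25 and node["ram_free_gb"] > 8:
                return node["name"]
    # Otherwise fall back to a dedicated storage node.
    return storage_nodes[0]["name"]


compute = [{"name": "compute-01", "cpu_free": 0.40, "ram_free_gb": 32},
           {"name": "compute-02", "cpu_free": 0.10, "ram_free_gb": 4}]
storage = [{"name": "storage-01", "cpu_free": 0.80, "ram_free_gb": 64}]

print(place_service({"kind": "replication", "latency_sensitive": True},
                    compute, storage))   # -> compute-01
```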


If the storage industry does this right, we can bring openness and a fundamentally more flexible storage approach to market in the months and years to come. This is about more than Nexenta or storage—this is about fixing IT once and for all. Let’s achieve the promise of IT and pass along the benefits of Moore’s law to customers. By doing so we’ll make the world smarter, wealthier and much more responsive to the real needs of enterprises and their customers.
 
