Software Defined Storage: Future Undefined

By Alex McMullan, Field CTO, Pure Storage.

The increasing complexity and scale of current disk storage have driven a new innovation curve for managing the heterogeneous pools of storage in customer datacenters. The software-defined paradigm seeks to present a simplified, virtualized layer over all available storage devices. The success of this approach will be decided by the extent to which vendors cooperate in producing interoperable products and unified standards, a key area where the storage industry has historically not excelled.

Defining Software Defined Storage
The concept of software-defined storage (SDS) is considerably easier to define than the implementation, given the varying interpretations across software and hardware vendors. The proliferation of 2U server hardware with 20+ drive slots and the rise of software-based storage (SBS) systems add further complexity to an already crowded marketplace. For the purposes of this article we will define SDS as a software layer that interrogates the capabilities of all connected storage devices, applies an abstracted control layer and data plane, and then exposes a simplified, aggregated set of those capabilities to consumers through one unified interface.

The control layer uses the natively exposed capabilities of each storage type (generally through a custom per-device module) to deliver a simplified portfolio of services in a catalog. That service catalog can then be used by both users and application servers to consume the underlying storage capacity based on a requested policy, price point or feature such as performance or availability.
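
To make that adapter-and-catalog idea concrete, here is a minimal sketch in Python. Everything in it (the Capability model, the adapter classes, the tier names) is invented for illustration and does not correspond to any vendor's actual API.

    from dataclasses import dataclass

    @dataclass
    class Capability:
        name: str          # e.g. "block", "file", "object"
        tier: str          # service tier exposed in the catalog
        max_iops: int
        replicas: int

    class DeviceAdapter:
        """Per-device module translating native calls into a common model."""
        def capabilities(self) -> list[Capability]:
            raise NotImplementedError

    class AllFlashAdapter(DeviceAdapter):
        def capabilities(self) -> list[Capability]:
            # A real adapter would query the array's own management interface.
            return [Capability("block", "gold", max_iops=200_000, replicas=2)]

    class NearlineAdapter(DeviceAdapter):
        def capabilities(self) -> list[Capability]:
            return [Capability("block", "bronze", max_iops=5_000, replicas=1)]

    def build_catalog(adapters: list[DeviceAdapter]) -> list[Capability]:
        """Aggregate every adapter's capabilities into one unified catalog."""
        catalog: list[Capability] = []
        for adapter in adapters:
            catalog.extend(adapter.capabilities())
        return catalog

    catalog = build_catalog([AllFlashAdapter(), NearlineAdapter()])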

Benefits
The biggest consumer benefits of SDS come from the simplification, abstraction and automation of a range of storage devices to deliver one unified model for managing and consuming that portfolio of storage. The service catalog approach makes it easier for consumers to select and use storage based on application requirements and capability definitions (uptime, latency, replicas) rather than storage-centric themes (RAID level, network speed, IOPS). This allows users to manage their storage costs more effectively by selecting the most appropriate service from the catalog based on the functional and non-functional requirements of each use case.
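
Building on the hypothetical catalog sketch above, the fragment below shows how a request framed in application terms (IOPS, replicas) might be matched to the cheapest suitable catalog entry. The tier pricing order is an assumption made purely for this example.

    from dataclasses import dataclass

    @dataclass
    class Requirements:
        min_iops: int        # expressed in application terms, not array jargon
        min_replicas: int

    # Assume tiers are priced from cheapest (bronze) to most expensive (gold).
    TIER_COST = {"bronze": 0, "silver": 1, "gold": 2}

    def select_service(catalog: list[Capability], req: Requirements) -> Capability:
        """Return the cheapest catalog entry that meets the application's needs."""
        matches = [c for c in catalog
                   if c.max_iops >= req.min_iops and c.replicas >= req.min_replicas]
        if not matches:
            raise LookupError("no catalog entry satisfies the request")
        return min(matches, key=lambda c: TIER_COST[c.tier])

    service = select_service(catalog, Requirements(min_iops=10_000, min_replicas=1))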

Abstracting a service by decoupling it from the physical device(s) that provide it allows the provider to replace, expand or move those devices without the user necessarily being aware of the change, as long as the level of service continues to be met. This is becoming increasingly important with the rise of cloud-based storage, since it allows external service providers to compete directly in a free market with internal IT services. The benefits are twofold: competition generally increases service quality whilst reducing overall cost. That competitive pressure also means an abstracted service consumed internally can expand (or burst) to an external provider as and when the cost profile makes it attractive.

One of the largest causes of outages and downtime in any complex infrastructure is human error, and storage is among the most complex parts of that infrastructure. Automating these often long and intricate command sequences through a well-tested, reliable SDS interface means change times can be reduced and unplanned downtime kept to a minimum. The complexity aspect has improved more recently with all-flash arrays, which tend to present more streamlined interfaces since they generally carry fewer heritage concepts; the automation benefit still applies, however, since it enables the same seamless and simple SDS interface for the end user.
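
As a small illustration of that automation benefit, the sketch below collapses a multi-step provisioning sequence into one idempotent routine. The adapter methods (create_volume, apply_policy and so on) are placeholders invented for this example, not any real array's API.

    def provision_volume(array, name: str, size_gb: int, policy: str) -> None:
        """Run the full provisioning sequence as a single, repeatable operation."""
        if array.volume_exists(name):      # safe to re-run: existing work is skipped
            return
        vol = array.create_volume(name, size_gb)
        array.apply_policy(vol, policy)    # QoS limits, snapshot schedule, replication
        array.map_to_hosts(vol)            # host mapping / zoning
        array.verify(vol)                  # fail loudly before handing the volume over

Because the routine is idempotent, a failed change can simply be re-run rather than manually unpicked, which is where much of the human-error risk is removed.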

Challenges
The primary challenges of using SDS are data locality, standards and scale. The locality aspect is most pertinent in use cases where the data being stored is subject to regulatory or compliance control, and the consumer is required to attest that the data and its copies are being held in a specific geography or physical location. In this instance, the policy for the storage service needs to be carefully managed to ensure that there is no potential for non-compliance.
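
One simple way such a locality policy might be enforced, sketched here with invented region names: every proposed placement, including any copies, is validated before the control layer accepts it.

    ALLOWED_REGIONS = {"eu-west", "eu-central"}   # hypothetical compliance policy

    def validate_placement(volume_regions: set[str]) -> None:
        """Reject any placement that would hold data or copies out of geography."""
        out_of_policy = volume_regions - ALLOWED_REGIONS
        if out_of_policy:
            raise ValueError(f"placement violates locality policy: {out_of_policy}")

    validate_placement({"eu-west", "eu-central"})   # passes
    # validate_placement({"eu-west", "us-east"})    # would raise ValueError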

There have been a number of industry-led efforts to standardize storage management interfaces, from SNMP MIBs to CDMI and SMI-S, but no clear or unified standard has yet emerged. This means that an SDS layer needs built-in knowledge of the capabilities of each product line (which may vary by product software version), since there is no standard way of interrogating and compiling the list of capabilities. A further complication is that storage tends to be a 3-5 year asset investment in the datacenter, and older storage arrays in particular are generally less feature-rich, meaning the service catalog can be limited in capability for some services. This may not be a problem if the service associated with that capability is priced accordingly, but it tends to limit the overall appeal.
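
In the absence of a standard interrogation interface, that built-in knowledge often amounts to a hand-maintained matrix keyed by product line and software version, along the lines of the sketch below; the vendor names and entries are entirely invented.

    # Capability knowledge an SDS layer must carry itself, per product and version.
    CAPABILITY_MATRIX = {
        ("VendorA-ArrayX", "4.2"): {"snapshot", "replication", "encryption"},
        ("VendorA-ArrayX", "3.9"): {"snapshot"},              # older, less feature-rich
        ("VendorB-ArrayY", "7.1"): {"snapshot", "compression"},
    }

    def capabilities_for(product: str, version: str) -> set[str]:
        """Look up what a given product/version pair can actually offer."""
        return CAPABILITY_MATRIX.get((product, version), set())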

There is a minimum entry point in terms of organizational size and complexity before the necessary resource investment in SDS becomes viable over the short and medium term. If an organization has a small, virtualized or homogeneous storage estate, then the benefits of SDS are generally small and diluted.

The Future
There are a number of diverging trends in the storage marketplace that can be considered either accretive or dilutive to the SDS business case, and it will be interesting to watch these unfold over the next few years. The increasing popularity of cloud and object storage (driven by distributed computing and new application paradigms), the rapid growth in generated data volume, and approaches such as OpenStack are all accretive trends that allow storage to be simplified, and they will continue to support the case for SDS, alongside the general drive towards consuming infrastructure and platforms as a service.

The meteoric rise of mobile computing, virtualization and cloud technology, coupled with ever-larger datasets, has driven exponential growth in the storage market. That growth has fostered a number of different approaches to the problems it causes, from scale-out to hyperconverged and from hybrid disk/flash to all-flash systems. In any large computing or storage system, complexity is inversely proportional to reliability, so minimizing the number of component parts and delivering a streamlined, easy-to-use management layer are critical. Brewer's CAP theorem still needs to be observed, but ultimately the user simply needs data delivered with the lowest possible latency and jitter to meet the required level of service.

There are a number of SBS products, such as VMware vSAN, Red Hat Ceph, Nexenta and Microsoft Storage Server, that offer a range of capabilities similar to SDS (and in some cases will operate underneath an SDS implementation), but they are focused more on supplanting that capability with a streamlined and simple SBS offering. The long-term outcome will be driven not just by commercial comparisons, but by the nature of application delivery and consumption as the technology market continues to evolve at a breakneck pace.

Conclusion
SDS as a concept is perfectly suited to the needs of many large enterprises seeking to simplify and de-risk the management and automation of complex, difficult-to-operate disk storage arrays. The implementation reality is that SDS often comes with a range of constraints and caveats, because one or more underlying vendor products do not expose sufficient capabilities for a given use case. The capability of an SDS system should ultimately be judged by how well it deals with a range of storage arrays, not just those from one vendor.

The rate of change in today's storage array market means there is a limited window for SDS to gain a wider foothold before more modern and interoperable products, such as all-flash arrays, vSAN and object stores, replace these arrays in their entirety.
