1. The Easy Button: Simplicity … and Visualisation
Make no mistake – 2015 will be the year of the Easy button. Data complexity is out of control, and the problem is driven by more than capacity growth. Mobile and sensor devices – 20 billion of them by 2020, according to IDC – are all creating new endpoints where data gets generated. Data is arriving as an ever-larger number of objects to be indexed, and each object is bigger than it used to be because more granular data is available to collect. Continuously streaming data presents new and unique performance and availability problems. And all of that data needs to be kept longer, since it may provide context for future analysis. The only thing not growing is the administration budget. Meanwhile, the Millennials entering the workforce, who have grown up on the ease of the iPad and the Internet, will not tolerate old-style administration systems that lack intuitive user interfaces and reporting. Visualisation will be king as storage vendors respond with more and better monitoring, management and reporting tools for both storage and (where possible) the data itself.
2. The Killer Combination: Automated and Aware
Beyond greater administrative simplicity lies the logical endpoint: the easiest administration is none at all. In 2015 we will begin to see the realisation that storage processes must not only be automated – they must also be business-aware, able to link data placement and movement to data source, type, user or organisational demographics and to business process management. All of it automated – including automation to the cloud.
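To make the idea concrete, here is a minimal sketch (in Python, with invented rule names and tiers – not any vendor's actual API) of what business-aware placement might look like. The point is that the policy keys off who owns the data and where it came from, not just how big or old it is.

```python
# Illustrative sketch of business-aware data placement. The attributes,
# rules and tier names are hypothetical assumptions for illustration.

from dataclasses import dataclass

@dataclass
class DataObject:
    source: str        # e.g. "sensor", "crm", "email"
    department: str    # organisational demographic
    age_days: int      # days since last access

def place(obj: DataObject) -> str:
    """Map an object to a tier from business attributes, not just size/age."""
    if obj.department == "legal":              # compliance data stays on premises
        return "on-prem-archive"
    if obj.source == "sensor" and obj.age_days > 30:
        return "cloud-cold"                    # aged sensor streams go to the cloud
    if obj.age_days <= 7:
        return "flash-tier"                    # recent data stays fast
    return "disk-tier"

print(place(DataObject(source="sensor", department="ops", age_days=90)))
# -> cloud-cold
```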
3. Software-Defined Storage: Beyond the Buzz
Software-defined storage is the latest rage. In fact, 96 percent of respondents in a recent 451 Research study said they were “somewhat or very likely” to adopt it. Yet despite all the buzz and the name itself, there doesn’t seem to be an agreed-upon definition of what SDS is. Does the software define only the control plane … or does it also define the data path? Is it proprietary ... or open source? Or is it some combination of the above? Depending on the vendor, you’re going to get a variety of answers to these questions, making it quite a challenge for a customer to determine the value of different software-defined storage solutions and select one. Also, particularly at a time of increased demands on IT staff and continued budget constraints, it’s unlikely that customers will have or be able to maintain all the skills and expertise to understand – and trade off – the wide variety of IOPS, throughput, latency, cost and durability features different options provide. Sometimes flexibility just equals complexity – and that means many software-defined solutions won’t make the cut because they just plain aren’t easy enough to understand.
4. Converged Storage Products: Reality Hits
As with software-defined storage, there are as many definitions of converged storage as there are converged storage vendors. But everyone seems to agree that its purpose is to make things easy (there's that Easy button again) by applying preset data-movement algorithms to some combination of SSDs and disk, in some scale-out combination of block, file and object, sometimes with a pipe to the cloud and/or an integration layer with the hypervisor. This is data automation, in a device.
The challenge, of course, is that for the smart algorithms and the vendor's chosen storage configuration to work properly, the designer had to make assumptions about the use case, ecosystem and data demographics, and tailor the configuration to that usage. That makes converged storage a natural way to trend right back to application-centric (aka non-shared) data storage. Hello again, storage islands! But that's a problem for 2016. Next year, it will simply become increasingly clear that although converged storage can provide benefits for well-defined, moderate-performance use cases, it isn't well suited to highly data-intensive applications, which are extremely sensitive to customer workflow, storage selection and configuration. Net net: while the concept of converged storage is a good one, the high-performance customer needs to be able to tune the data policies to meet his unique needs – to define his own policies for automated data movement.
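For illustration, a user-tunable movement policy might look something like the sketch below. The thresholds are invented; the point is simply that the customer, not the box, sets them.

```python
# Hypothetical tunable data-movement policy. A converged box bakes these
# thresholds in; a high-performance shop needs to set them itself.

policy = {
    "promote_to_ssd_after_reads": 3,     # reads within the window -> promote
    "demote_to_disk_after_days": 14,     # idle days before demotion
    "archive_after_days": 180,           # idle days before the archive tier
}

def next_tier(reads_in_window: int, idle_days: int, p=policy) -> str:
    if reads_in_window >= p["promote_to_ssd_after_reads"]:
        return "ssd"
    if idle_days >= p["archive_after_days"]:
        return "archive"
    if idle_days >= p["demote_to_disk_after_days"]:
        return "disk"
    return "stay"

# A rendering shop might set demote_to_disk_after_days=2; a CRM workload
# might never archive. Same engine, different policy.
print(next_tier(reads_in_window=0, idle_days=200))  # -> archive
```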
5. Clouds Part, Costs Shine through
There’s no question that public cloud data services for archive or backup can deliver substantial savings, both in infrastructure costs and management time. However, as customers are beginning to realise, the economics of public storage clouds are only compelling if they don’t plan – or need – to regularly download the data they store there. Otherwise, the cost of moving data back and forth over the wire quickly adds up. That’s why hybrid cloud architectures will increasingly become the preferred model, letting customers access or restore data locally while still taking advantage of the public cloud for infrequently accessed data and disaster recovery.
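A quick back-of-envelope calculation shows why. The prices below are illustrative assumptions, not any provider's actual rate card:

```python
# Back-of-envelope public-cloud economics (illustrative prices only).

store_per_gb_month = 0.01   # $/GB/month for cold cloud storage (assumed)
egress_per_gb      = 0.09   # $/GB to pull data back over the wire (assumed)

tb_stored     = 100 * 1024                      # a 100 TB archive, in GB
monthly_store = tb_stored * store_per_gb_month
one_restore   = tb_stored * egress_per_gb

print(f"storage: ${monthly_store:,.0f}/month")   # ~$1,024/month
print(f"one full restore: ${one_restore:,.0f}")  # ~$9,216 -- nine months of storage
```

At rates like these, a single full restore costs the better part of a year's storage fees – which is exactly the workload a hybrid architecture keeps local.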
6. Thinking Twice about One Cloud Provider
Speaking of regularly downloading data from a cloud archive: in addition to being expensive, it’s not very fast. Unless a customer is willing to pony up for a direct (and dedicated) network connection to his service provider, performance will be gated by general WAN speeds and the realities of sharing bandwidth with others. Downloading a petabyte will take weeks, not hours or days. As companies begin to realise this, the idea of using more than one cloud provider will hold greater appeal – just in case one of their vendors suddenly becomes too pricey or too painful to deal with.
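The arithmetic bears this out. The effective throughput figures below are assumptions for illustration:

```python
# How long does a petabyte take over the wire?

PB_BITS = 8 * 10**15                      # one petabyte in bits

for label, bps in [("100 Mb/s effective", 100e6),
                   ("1 Gb/s effective",   1e9),
                   ("10 Gb/s dedicated",  1e10)]:
    days = PB_BITS / bps / 86_400
    print(f"{label}: {days:,.1f} days")

# 100 Mb/s: ~926 days; 1 Gb/s: ~93 days; 10 Gb/s: ~9.3 days.
# Even a dedicated 10 Gb/s pipe needs more than a week, ignoring overhead.
```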
7. Object Storage: Starring Role Behind the Scenes
Over the last few years, next-generation object storage has increasingly been recognised as the answer to the shortcomings of traditional RAID in managing and protecting large data volumes (particularly in the cloud), yet object storage startups still seem to be struggling. Why is that? Object storage – as an architecture – is a key enabler for storing large amounts of data for a long time with great integrity and availability, and without the cost of forklift upgrades. It’s great for static content, for example. But legacy block and file applications won’t talk to it – and because its economies of scale tend to kick in at larger capacities, it’s expensive for a commercial enterprise to test it as a standalone device. So object storage startups have had trouble gaining adoption. But don’t be confused: while object storage systems haven’t taken off as standalone products, object storage technology is taking hold – buried in online archives, content distribution and managed workflow solutions, and increasingly in converged offerings from traditional storage suppliers. In the coming year this trend will accelerate, and object storage will increasingly become a foundational element of storage solutions.
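Part of the reason legacy applications won't talk to it is the access model itself: whole objects over HTTP in a flat namespace, rather than POSIX files or SCSI blocks. A rough sketch of that model follows – the endpoint and bucket are placeholders, and real S3-style services also require signed authentication headers:

```python
# Object storage in two calls: whole-object HTTP PUT/GET against a flat
# namespace -- no open/seek/write, no in-place updates, no block addressing.
# The endpoint below is a hypothetical placeholder, not a real service.

import requests

ENDPOINT = "https://objects.example.com"   # placeholder object-store endpoint

# Store an object: one request carries the entire object.
with open("report.pdf", "rb") as f:
    requests.put(f"{ENDPOINT}/my-bucket/report.pdf", data=f)

# Retrieve it: again, the whole object by name.
body = requests.get(f"{ENDPOINT}/my-bucket/report.pdf").content
```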
8. Increasing Storage Challenge: Video Here, There, Everywhere
Video is sprouting up everywhere. Once the darling of the media business alone, suddenly everybody is making video and using it as a communication – and compliance – tool. On the communication side, 79% of consumer internet traffic will be video by 2018, according to a Cisco study. When it comes to marketing, Forbes Insights research has found that 59% of senior executives would rather watch a video than read an advertisement, which will help drive $5 billion in video advertising by 2016. On the compliance side, more and more public safety organisations are using video, particularly via on-body cameras (think GoPro for police), to capture live interactions and evidence useful both in the courtroom and with the public. All of this will create increasing storage challenges in 2015. Video files are huge, even compressed, and they don’t dedupe. They also tend to be static – and re-usable – so they get stored for a long time. The combination of these factors tends to produce unpredicted data growth. And by the way, get ready for similar issues with streaming sensor data – in some applications, it looks a lot like video.
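Some rough arithmetic shows how fast this adds up. The bitrate, shift length and retention figures below are illustrative assumptions:

```python
# Back-of-envelope body-camera retention maths (all inputs assumed).

mbps          = 6            # compressed 1080p stream, megabits/second
hours_per_day = 8            # one officer's shift
officers      = 500
retain_days   = 365          # evidence retention period

gb_per_officer_day = mbps / 8 * 3600 * hours_per_day / 1024   # Mb/s -> GB/day
total_tb = gb_per_officer_day * officers * retain_days / 1024

print(f"{gb_per_officer_day:.1f} GB per officer per day")   # ~21.1 GB
print(f"{total_tb:,.0f} TB retained across the force")      # ~3,759 TB (~3.7 PB)
```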
9. The Broader Impact of Video Data Growth
Beyond straining the overall storage budget, all those videos (and all that sensor-collected information) create a secondary problem: huge challenges for traditional backup and replication processes. They waste significant time, staffing and compute resources, undermining data availability and highlighting the fact that backup’s busted. As a result, in the coming year more and more customers will adopt new approaches and solutions that get these large, un-dedupable, static files out of the active data store and into a separate content/archive store – knowing it’s the only way to keep primary storage and related backup costs under control. Otherwise they will find themselves walking, processing and moving the same static data sets over and over again.
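In its simplest form, that separation can be as unglamorous as a sweep job. The sketch below captures the idea; the paths, extensions and thresholds are illustrative assumptions:

```python
# Sketch: sweep the active store for large, long-untouched video files and
# relocate them to an archive mount so backups stop re-walking them.
# Paths and thresholds are hypothetical; note that atime-based idleness
# checks require a filesystem that actually tracks access times.

import os, shutil, time

ACTIVE, ARCHIVE = "/data/active", "/data/archive"
MIN_SIZE  = 1 << 30                  # 1 GiB -- too big to keep re-backing up
MIN_IDLE  = 90 * 86_400              # untouched for 90 days
VIDEO_EXT = (".mp4", ".mov", ".mxf")

now = time.time()
for root, _dirs, files in os.walk(ACTIVE):
    for name in files:
        path = os.path.join(root, name)
        st = os.stat(path)
        if (name.lower().endswith(VIDEO_EXT)
                and st.st_size >= MIN_SIZE
                and now - st.st_atime >= MIN_IDLE):
            dest = os.path.join(ARCHIVE, os.path.relpath(path, ACTIVE))
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.move(path, dest)   # backup now skips this file entirely
```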
10. Ceph Goes Mainstream for Objects and Blocks
Ceph, once the darling of the academic set, has crossed the chasm and become a supported distribution from Red Hat (via its Inktank acquisition). Ceph is the Swiss Army knife of open storage platforms, presenting object, block and (planned) file interfaces. While the distribution could still use some maturing in both features and support (better documentation, for example), there’s no doubt that Red Hat has the interest and the capability to make that happen. Solution vendors are already figuring this out, so expect a raft of new (likely converged) storage platforms based on Ceph in 2015. The good news is that all this attention is bound to drive rapid maturation and innovation, but as solutions begin to ship, keep a watchful eye on where Ceph really fits, performance-wise. While it sounds universal, like all storage systems it will be really good for something, not good at everything.
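For the curious, Ceph's object layer is already scriptable via the python-rados bindings. The minimal sketch below assumes a reachable cluster, a readable conffile and an existing pool (the pool name is an assumption – adjust for your setup); block access (RBD) and the planned file system ride on this same object layer.

```python
# A minimal taste of Ceph's object interface via the python-rados bindings.
# Assumes a running cluster and an existing pool named "rbd" (adjust both).

import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")             # pool name is an assumption
    ioctx.write_full("greeting", b"hello ceph")   # store an object
    print(ioctx.read("greeting"))                 # -> b'hello ceph'
    ioctx.close()
finally:
    cluster.shutdown()
```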