Caching does not have to be complex

By Neuralytix.


Current Overview
Caching is a critical part of any IT infrastructure. The use of caching has been around for a very long time, from the mainframe to today’s distributed cluster architectures. The problem with caching had always been the hardware component – namely the storage medium on which a cache can exist.
With the viability of NAND flash solid-state drives (SSDs), even the most modest datacenters can benefit from server-side caching.
Arguably, every customer’s environment is different in some way. Nevertheless, customers all have performance-hungry applications and capacity-hungry applications. Sometimes, customers even have applications that are both performance- and capacity-hungry.


Data efficiency solutions such as deduplication or compression solve the capacity problem.


Performance problems are not as straightforward. Monitoring of the various computing, networking, and storage resources provides the information needed to pinpoint where problems occur. However, the exponential increase in computing power delivered by multi-core, multi-socket motherboards has clearly shifted the performance bottleneck to the I/O subsystem.


Adding main memory in the form of DRAM can mitigate many performance challenges, but it comes with a high cost.


SSDs provide much higher capacity without the accompanying cost. While SSDs do not match DRAM’s performance, they deliver at least an order-of-magnitude improvement over traditional rotating magnetic hard disk drives (HDDs). In most cases, SSDs are good enough to extend the life of the application and dramatically improve performance, as well as extend the useful life of the server itself.


Neurspective™
SSDs by themselves cannot solve anything without accompanying software. Server-side caching software is one straightforward approach to achieving the three benefits above – extending the life of the server and the application, and dramatically improving performance.


Recently, Neuralytix was offered a closer look at Proximal Data’s AutoCache software.


We found the server-side read caching approach taken by Proximal Data to be a straightforward way for users to improve performance without having to reinvest in new server equipment, rewrite applications, or migrate data.


AutoCache does just that – it automatically caches reads. It supports the VMware ESX hypervisor and can be managed from within vSphere and vCenter, a necessary feature. It also automatically “pre-warms” the cache around vMotion events to maintain optimal performance in a dynamically changing environment. This means that the built-in intelligence fetches relevant blocks even before they are requested, so that vMotion can take place with minimal latency.
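To make the pre-warming idea concrete, the sketch below shows what a read cache with a pre-warm step could look like in principle. It is purely illustrative – the class names, block interface, and naive eviction policy are our assumptions, not Proximal Data’s implementation.

# Illustrative sketch only – not Proximal Data's code.
# A server-side read cache that can be "pre-warmed" with the blocks
# a migrated VM is expected to request, so its first reads after
# vMotion hit the SSD instead of the backing storage array.

class ReadCache:
    def __init__(self, ssd_capacity_blocks):
        self.capacity = ssd_capacity_blocks
        self.blocks = {}  # block_id -> data held on the SSD

    def read(self, block_id, backing_store):
        # Serve from the SSD on a hit; otherwise fetch from primary
        # storage and admit the block into the cache.
        if block_id in self.blocks:
            return self.blocks[block_id]
        data = backing_store.read(block_id)
        self._admit(block_id, data)
        return data

    def prewarm(self, hot_block_ids, backing_store):
        # Fetch blocks expected to be requested soon (for example, the
        # working set of a VM arriving via vMotion) before the guest
        # actually asks for them.
        for block_id in hot_block_ids:
            if block_id not in self.blocks:
                self._admit(block_id, backing_store.read(block_id))

    def _admit(self, block_id, data):
        # Naive FIFO-style eviction, purely for illustration.
        if len(self.blocks) >= self.capacity:
            self.blocks.pop(next(iter(self.blocks)))
        self.blocks[block_id] = data

In this toy model, calling prewarm() with the migrating VM’s recently accessed block IDs is what keeps post-vMotion latency low.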
Currently, AutoCache does not support write caching. While this omission has been heavily criticized by Proximal Data’s competitors, Neuralytix does not believe it is a big issue. Flash-based storage devices (including SSDs) have limited write cycles. Combined with the fact that performance-focused applications are usually mission-critical applications, it makes absolute sense that data be committed to a permanent storage medium as soon as practical.


Some may argue that it is splitting hairs, but Neuralytix strongly believes that caching is a read function, and that write caching should be left to storage systems, since they are the part of the architecture responsible for committing the ultimate write.
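Continuing the illustrative sketch above – again, an assumed toy model rather than any vendor’s API – a read-only caching layer commits every write straight to the storage system and merely invalidates its own copy, so permanent storage always holds the authoritative data.

# Illustrative sketch only: reads are cached, writes go directly to
# the storage system (write-around), so the SSD never holds dirty data.

class ReadOnlyCachingLayer:
    def __init__(self, cache, backing_store):
        self.cache = cache                # e.g. the ReadCache sketched above
        self.backing_store = backing_store

    def read(self, block_id):
        return self.cache.read(block_id, self.backing_store)

    def write(self, block_id, data):
        # Commit the write to permanent storage immediately.
        self.backing_store.write(block_id, data)
        # Drop any stale cached copy so later reads re-fetch fresh data.
        self.cache.blocks.pop(block_id, None)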


Neuralytix research shows that Proximal Data AutoCache offers an optimal price-to-performance ratio. Many of Proximal Data’s competitors rely on specialized PCIe solid-state devices, which dramatically increases the capital investment. Therefore, for each unit of performance gain, Proximal Data’s competitors are very likely to require a substantially higher investment.


The combination of Neuralytix research and data from Proximal Data indicates that most enterprises will spend roughly US$1,500 per server to improve performance dramatically. This improvement can come by removing the need to buy a more powerful server to run the application (an investment likely to cost over US$2,000 at a minimum), or by allowing the server to run considerably more virtual machines (again eliminating the need to invest in additional servers, saving at least US$2,000).


Whichever position you, as a user, are in, Neuralytix has little doubt that if there is a performance-related problem in your enterprise, the integration of Proximal Data’s AutoCache with an entry-level SSD will provide immediate payback.
 
