Disaster recovery is dead

We live in fast times. So fast, in fact, that a recent Google study showed that over half of users will abandon a website if it takes longer than three seconds to load. Any online business knows that if its website goes down, users will simply check out the competition and get the product or service they want elsewhere. By Eran Brown, CTO, EMEA, Infinidat.


This fast pace of business is exactly what is driving the revolution in IT architecture. Disaster Recovery (DR) is dead: any recovery process that takes hours is unacceptable.


Not all infrastructures are equal, but one that helps retain customers by guaranteeing uptime and performance surely has to be head and shoulders above everything else on a corporate shopping list. Without a modern infrastructure, there is no modern business.


Traditionally, most businesses have been underpinned by DR, but that approach has run its course because of the hours it can take to actually recover business services. This has led to a fundamental shift in how data is stored and distributed geographically. Businesses have to evolve towards a data infrastructure that can support real-time recovery without adding more layers of infrastructure.


Why non-stop data is the start of business growth.

It’s perhaps not a huge surprise that a recent Gartner CEO survey on business priorities revealed that digital business is a top priority for next year. Asked whether they have a management initiative or transformation programme to make their business more digital, the majority of respondents (62 percent) said they did. In fact, according to Gartner*, the future of IT infrastructure is “always on, always available, everywhere.”


It seems that in 2019 (and beyond) businesses will learn to evolve, becoming increasingly agile as they interact with customers through digital channels. Their reliance on real-time data availability will also increase. Using technology to compete more efficiently, and not fall victim to inertia, is paramount. As businesses become increasingly dependent on the insights from data analytics, and face up to competition fuelled by the 24/7 society of instant gratification, Always On becomes mission-critical.


The dark side of digital transformation. 

We’ve become reliant on IT to run the business. So much so that infrastructure architects are now designing for site-level failure by building ‘Always On’ data infrastructures that support a business operating non-stop. However, the transition from DR to Always On is not without its pain points.


Organisations are struggling with traditional approaches that employ dedicated Always On point solutions (for example, High Availability (HA) gateways), as these complicate administration and increase the total cost of ownership (TCO). A key element in finding the optimal solution is a technology that encourages organisations to protect more applications by not charging more to do so. One also needs to consider how data management can be simplified and configured appropriately to reduce latency.


TCO has to be built into any Always On strategy from the start; otherwise, businesses face the prospect of being trapped in a costly and unsustainable plan. Non-stop data access builds a business’s reputation, but it is the ability to move quickly and adopt new products and services that delivers company growth.


Is loyalty still key to success?

Yes, but consumers are changing, and new customers do not come cheap. According to ProfitWell, the cost of acquiring new customers has increased by almost 50% in the last five years. Working harder to retain existing customers makes both commercial and strategic sense.


Expectations are higher today than ever before. Salesforce recently revealed that “92% of customers think that the experience a company provides is as important as the product or service it offers.”

Getting the underlying technology right is crucial. If systems and applications are not seamless and readily available, loyalty will always be threatened.


Market conditions demand more intelligent approaches to keeping customers happy. Ensuring a service is Always On becomes the bottom line.


Top 5 ‘Always On’ best practices.


1. Remove IT complexities – Today, most businesses require an additional, dedicated point solution (an HA gateway) with a different feature set, management tools and monitoring requirements. This adds complexity to both operations and automation processes. Look for an integrated solution that combines storage and Always On capabilities.


2. Reduce costs – HA gateways are expensive and often priced per capacity, which forces organisations to minimise the number of applications benefiting from Always On. A second cost is added when the HA solution requires Fibre Channel connectivity between sites. Organisations protect more when they are not charged more.


3. Consistent performance, anywhere – Synchronous operation across two storage systems on two sites increases latency, making it harder to adopt for latency-sensitive applications, regardless of criticality. In long-distance deployments, sending one write to the local storage array and another to the remote one can result in up to double the latency, creating an inconsistent user experience. This is often the result of misconfiguration.

It can easily be rectified by simplifying and automating the configuration so that hosts can differentiate the paths they should use all the time from the ones they should use only in a failure scenario (see the latency sketch after this list). Minimising the performance impact allows wider adoption.


4. Variable functionalities – Many solutions that claim ‘Always On’ capabilities vary dramatically in functionality. Some offer a read-only copy on one of the sites and automated failover, whilst others incur high performance penalties when sending writes to the “secondary” copy. By treating both copies equally, with minimal to no performance penalty between them, hosts can write on both sides (truly active copies) without any failover process (sketched after this list).


5. Reliability – As with any geographically distributed solution, Always On clusters require a way to “break the tie” in the event of a communication failure between the two systems. Ultimately, businesses need to deploy a witness in a third fault domain, with redundant networking to each site, to complete a quorum when the two systems can’t communicate directly (see the quorum sketch below). Ensuring the witness fits the customer’s preferred deployment method (cloud or on-premises) is the vendor’s responsibility.
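
To make the arithmetic in point 3 concrete, here is a minimal back-of-the-envelope model in Python. It assumes signal propagation of roughly 200 km per millisecond in fibre and an illustrative media latency; the names and numbers are assumptions for illustration, not measurements of any particular product.

```python
# Back-of-the-envelope model of synchronous replication write latency.
# All numbers and names are illustrative assumptions, not vendor figures.

FIBRE_KM_PER_MS = 200.0  # light in fibre covers roughly 200 km per millisecond

def round_trip_ms(distance_km: float) -> float:
    """One inter-site round trip, ignoring switch and protocol overhead."""
    return 2.0 * distance_km / FIBRE_KM_PER_MS

def write_latency_ms(media_ms: float, distance_km: float, optimised_path: bool) -> float:
    """Host write latency with synchronous mirroring between two sites.

    Optimised path: the host writes to its local array, which mirrors the
    data to the remote array (one inter-site round trip).
    Non-optimised path: the host writes to the remote array first, so the
    data crosses the link twice before the acknowledgement comes back.
    """
    trips = 1 if optimised_path else 2
    return media_ms + trips * round_trip_ms(distance_km)

for optimised in (True, False):
    latency = write_latency_ms(media_ms=0.3, distance_km=100.0, optimised_path=optimised)
    print(f"optimised_path={optimised}: ~{latency:.1f} ms per write")
# optimised_path=True: ~1.3 ms; optimised_path=False: ~2.3 ms --
# close to double, which is why path configuration matters.
```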
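
Point 4 turns on what “truly active copies” means. The toy sketch below illustrates the idea under stated assumptions; the class and function names are made up for illustration and are not any vendor’s API.

```python
# Toy model of truly active copies: either site accepts writes, and a write
# is acknowledged only after both copies commit it, so no failover process
# exists. All names here are illustrative assumptions.

class StorageArray:
    def __init__(self, name: str):
        self.name = name
        self.blocks: dict[int, bytes] = {}

    def commit(self, lba: int, data: bytes) -> None:
        self.blocks[lba] = data

def active_active_write(lba: int, data: bytes,
                        local: StorageArray, peer: StorageArray) -> None:
    local.commit(lba, data)  # commit on whichever side the host is using
    peer.commit(lba, data)   # synchronous mirror to the other site
    # only now would the write be acknowledged to the host

site_a, site_b = StorageArray("site-a"), StorageArray("site-b")
active_active_write(0, b"order-123", local=site_a, peer=site_b)  # host at site A
active_active_write(1, b"order-456", local=site_b, peer=site_a)  # host at site B
assert site_a.blocks == site_b.blocks  # both copies stay identical
```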
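
Finally, a minimal sketch of the tie-breaking logic behind point 5, assuming the two-sites-plus-witness quorum described above; the voting details are simplified assumptions.

```python
# Witness-based tie-breaking: a site serves I/O only while it holds two of
# the three quorum votes (its own, plus the peer's or the witness's).
# The logic is a simplified illustration, not a vendor implementation.

from typing import Optional

def may_serve_io(site: str, peer_reachable: bool, witness_vote: Optional[str]) -> bool:
    if peer_reachable:
        return True               # both systems in contact: normal operation
    return witness_vote == site   # link down: the witness breaks the tie

# The inter-site link fails; the witness, sitting in a third fault domain,
# sides with site A. Site B fences itself instead of risking a split brain.
assert may_serve_io("A", peer_reachable=False, witness_vote="A")
assert not may_serve_io("B", peer_reachable=False, witness_vote="A")
```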


*Gartner Press Release, “Gartner Says the Future of IT Infrastructure Is Always On, Always Available, Everywhere,” December 3, 2018.


