Vodafone’s experience
Vodafone was one of the high-profile companies affected. The IT press said that the floods had hit the company’s datacentre. A Vodafone spokesperson told Computer Business Review on 4th January 2016: “One of our key sites in the Kirkstall Road area of Leeds was affected by severe flooding over the Christmas weekend, which meant that Vodafone customers in the North East experienced intermittent issues with voice and data services, and we had an issue with power at one particular building in Leeds.”
The flooding restricted access to the building, and access was needed in order to install generators after the back-up batteries had run down. Once access became possible, engineers were able to deploy the generators and other disaster recovery equipment. However, a recent email from Jane Frapwell, Corporate Communications Manager at Vodafone, claimed: “The effects on Vodafone of flooding were misreported recently because we had an isolated problem in Leeds, but this was a mobile exchange not a datacentre and there were no problems with any of our data centres.”
Hurricane Sandy
While Vodafone claims that its datacentres weren’t hit by the flooding, and that the media had misreported the incident, datacentres around the world can be severely hit by flooding and other natural disasters. Floods are both disruptive and costly. Hurricane Sandy is a case in point.
In October 2012, Data Center Knowledge reported that at least two datacentres located in New York had been damaged by flooding. Rich Miller’s article for the publication, ‘Massive Flooding Damages Several NYC Data Centers’, said: “Flooding from Hurricane Sandy has hobbled two datacentre buildings in Lower Manhattan, taking out diesel fuel pumps used to refuel generators, and a third building at 121 Varick is also reported to be without power…” Outages were also reported by many datacentre tenants at a major data hub at 111 8th Avenue.
The possibility that a datacentre’s service could be disrupted by flooding and other natural disasters therefore raises the following questions: is having one disaster recovery site enough? Should there ideally be two or three of them? Many datacentres are located far too close to each other and fall within the same circle of disruption. This reflects the limitations of current technology and the fact that distance creates latency issues that have a major impact on data throughput. Equally worrying is the fact that a survey by Zenium Technology found that half of the world’s datacentres have been disrupted by natural disasters, and that 45% of UK companies have – according to Computer Business Review’s article of 17th June 2015 – experienced downtime due to natural causes.
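To see why distance alone can throttle throughput, consider how a single TCP stream behaves over a long link: only one window of data can be in flight per round trip, so throughput is capped at roughly the window size divided by the round-trip time, however fast the underlying link. The short Python sketch below illustrates the point; the 64 KB window and the round-trip times are illustrative assumptions, not figures from the article.

```python
# Illustrative only: why round-trip time (RTT), not link speed, often caps WAN throughput.
# A single TCP stream can keep at most one window of data in flight per round trip,
# so its throughput ceiling is roughly window_size / RTT.

def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Theoretical ceiling for one TCP stream, in megabits per second."""
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1_000_000

WINDOW_BYTES = 64 * 1024  # assumed 64 KB window, a common default without tuning

for label, rtt_ms in [("same metro (~2 ms RTT)", 2.0),
                      ("cross-country (~30 ms RTT)", 30.0),
                      ("intercontinental (~150 ms RTT)", 150.0)]:
    ceiling = max_tcp_throughput_mbps(WINDOW_BYTES, rtt_ms)
    print(f"{label:32s} ceiling ≈ {ceiling:7.1f} Mbit/s")
```

Under these assumptions the ceiling falls from roughly 260 Mbit/s within a metro area to under 4 Mbit/s intercontinentally, which is the latency penalty WAN acceleration products aim to address.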
“So I don’t care whether it’s Chennai, Texas or Leeds. Most companies make do with what they have, and they aren’t casting their net wide enough to look at technologies that can help them to do this”, says Claire Buchanan – Chief Commercial Officer at Bridgeworks. She finds that people are compromising on their business continuity and disaster recovery, and that it’s only when a flood happens that the issue becomes top of mind again.
“With PORTrockIT and WANrockIT the data can be moved faster, allowing companies to have two or even three datacentres at much greater distances”, she says, before adding that business continuity is any company’s best insurance policy. In her opinion, with the right technology it is possible to locate datacentres in such a way as to avoid them being situated in the same circle of disruption. Business continuity needn’t cost the Earth either: most organisations may already have the infrastructure in place, but they may not have the technology to exploit it. However, she stresses that the technology is available.
Ignore FUD
Buchanan’s core message is: “Don’t automatically go to the large fear, uncertainty and doubt (FUD) vendors.” She argues that smaller and more innovative vendors can better address issues that their larger counterparts have not yet resolved. In her opinion, Bridgeworks is uniquely placed to help organisations remove distance and speed limitations, enabling them, for example, to add a third offsite disaster recovery site that doesn’t sit within the same circle of disruption, as part of a viable risk-reduction strategy.
“Having more than one site is necessary because disasters come in many forms”, explains Clive Longbottom – Client Service Director at analyst firm Quocirca. He says a single site should enable organisations to deal with lower levels of disaster, such as component failure, single-item equipment failure, on-site power failure, off-site power failure (through UPS and auxiliary generation) and so on. Yet he agrees with Buchanan that one site isn’t enough to deal with flooding, fire, earthquakes, etc.
“Most datacentres can deal with a small flood – but when Mother Nature really shows her power, only so much can be done”, he adds. In his view it’s the data that matter most – not the hardware or software. “Business continuity can be provided through much more effective and cheaper means with warm virtual machines being hosted on a shared remote site – and ensuring this happens in the most effective manner requires intelligence across the wide area network (WAN)”, he explains.
Plan for continuity
The problem is that all too many organisations aren’t planning properly for natural disasters. Buchanan’s colleague, David Trossell – CEO of Bridgeworks – warns: “Even those people who are responsible for averting disaster don’t plan properly because they are just ticking boxes, and they never seem to think that the impossible thing could very well happen.” He emphasises that continuity is not about recovery; it is about preventing a natural disaster from impacting business operations and services, so as to avoid financial and reputational damage.
So, to ensure that business continuity remains your best insurance policy, he offers the following six best practice tips:
1. Place your datacentres at a distance from each other, and never within the same circle of disruption.
2. Learn lessons from outside of the IT community in order to think outside of the box.
3. Remember that disaster recovery is not always about natural disasters; downtime can also be caused by hardware and software failures, human intervention or terrorism. Such incidents can and do take datacentres down.
4. Plan for the recovery, not the disaster, by understanding the costs of what would happen if your datacentre were wiped out (a rough cost model is sketched after this list).
5. Test your disaster recovery plan, because all too often such plans aren’t tried and tested.
6. Define and really understand your Recovery Time Objective (RTO).
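As an illustration of tips 4 and 6, the following back-of-the-envelope Python sketch puts a hypothetical hourly cost on downtime and checks a few outage scenarios against an assumed RTO. All figures are invented for illustration; real numbers would have to come from the organisation’s own risk audit.

```python
# Hypothetical figures for illustration only; real values must come from the business.
HOURLY_REVENUE_GBP = 50_000    # revenue supported by the datacentre per hour (assumed)
HOURLY_PENALTIES_GBP = 5_000   # SLA penalties, overtime and recovery costs per hour (assumed)
RTO_HOURS = 4                  # recovery time objective agreed with the business (assumed)

def outage_cost_gbp(hours_down: float) -> float:
    """Rough cost of an outage of the given duration."""
    return hours_down * (HOURLY_REVENUE_GBP + HOURLY_PENALTIES_GBP)

scenarios = [("recovered within RTO", 3),
             ("recovery slips to half a day", 12),
             ("site lost, no second site ready", 72)]

for name, hours in scenarios:
    verdict = "meets RTO" if hours <= RTO_HOURS else "misses RTO"
    print(f"{name:34s} {hours:3d} h down  cost ≈ £{outage_cost_gbp(hours):>9,.0f}  ({verdict})")
```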
Longbottom advises that organisations should have two plans: one for business continuity and one for disaster recovery. As with any insurance policy, he says, it’s also important to understand the business’s risk profile in order to define how much the business is willing to invest in IT service continuity. This risk audit should also consider whether it is advisable to locate datacentres in a different country, in different regions or on different continents, to reduce the likelihood of a disaster, natural or otherwise, putting the organisation out of business. In the end, such an investment will be cheaper and a lot less disruptive than the alternative.