Data centres don’t exist in isolation. They are there to do a job – house all the IT and ancillary equipment deemed necessary to run a business. As Mike Hogan puts it: “If the data centre is the body, then the IT is the brain.” And the brain is undergoing a massive transformation right now, which has an inevitable impact on the body, the data centre facility. Mike explains: “If you think back all the way to water-cooled mainframes, well, today equipment is becoming much denser and we’re moving back to water-cooled equipment again! The wholesale move to virtualisation and the Cloud means that you might not have so many individual servers but, potentially, hundreds of virtual servers in an individual rack. This type of change is having an impact on the data centre ‘body’ – the most basic question to ask being ‘Do I have enough power and cooling to meet my compute needs?’”
Such a question is just a part, albeit a significant one, of the overall data centre assessment that needs to be undertaken to ensure that the IT function is keeping pace with the business requirements of your organisation. Clearly, those involved with the business have a pretty good idea of what it is they are doing currently and where they would like to be in, say, five years. However, they might just be a bit too close to see the wood for the trees, not challenging long-accepted IT practices, or indulging in a bit too much server-hugging. As a result, having the in-house experts work with a third party, such as IBM, to ‘remove the religion and politics’ is often the best way to carry out a comprehensive assessment of the existing data centre infrastructure and to plan the way ahead.

As Mike puts it: “Any data centre assessment needs to be fact-based, and to put to one side any emotional subjectivity. Several years ago, IBM itself carried out such a process – rationalising a global organisation that had hundreds of data centres and a CIO in every country. We realised that we couldn’t continue with such a business model, so we started a data centre consolidation process and now have just the one CIO and approximately a dozen data centres that run our core business applications.”
The power and cooling challenge
“Power and cooling is the ‘holy grail’ for facilities,” says Mike. “We’re back to my earlier ‘brain’ comment – data centres are there to run and support the applications that the client needs for his business. These applications are now being designed so that they can run over multiple data centres, and the major challenge, when we talk to customers, is how they can ensure that they have the right level of power and cooling for the flexible IT workload they have to provide.
“Interestingly, it’s not all about high availability – if the customer has more than one data centre available, it’s possible to fail over from Site A to Site B or C. In other words, a data centre might not need to be Tier 3 or 4; Tier 1 or 2 might well do the job. In terms of the power and cooling requirement, with the possibility of workload shifting across various sites, it’s all about the equation of capacity, flexibility and total cost of ownership over a data centre’s lifecycle.”
It’s not difficult to engineer any particular power and cooling solution – indeed, it’s very easy to take advantage of variable-speed equipment that can ramp the cooling capacity up and down, for example. In terms of power, providing the maximum potential power capacity is the important thing, as equipment density is only going to continue to increase. Mike believes that the much-debated PUE metric is helpful in concentrating people on improving the efficiency of their own data centre. He explains: “Clearly, you’re not going to be able to generate the same PUE in a Singapore data centre as, say, one in the north of England – the two environments are totally different. However, if facilities know the PUE of their data centre, they can then devise a programme to either maintain or, hopefully, improve this figure over time. PUE comparisons between data centres are not very helpful at all, but as a means of driving continuous improvement within one facility, PUE has a significant role to play.”
After all, one of the data centre mantras is: “If you can’t measure it, you can’t manage it”. And IT and facilities need to move closer together to ensure that this process runs as smoothly as possible.
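By way of a back-of-the-envelope illustration – the meter readings and tariff-free figures below are entirely invented – the PUE calculation itself is nothing more complicated than total facility energy divided by the energy drawn by the IT equipment alone, tracked against the same site’s earlier figures:

```python
# Hypothetical monthly meter readings, in kWh.
total_facility_kwh = 1_450_000   # everything the site draws: IT, cooling, lighting, losses
it_equipment_kwh = 880_000       # measured at the UPS/PDU output feeding the IT load

pue = total_facility_kwh / it_equipment_kwh
print(f"PUE this month: {pue:.2f}")              # roughly 1.65 with these figures

# Comparing the facility against itself over time is where the metric earns its keep.
last_year_pue = 1.78                             # assumed figure, purely for illustration
print(f"Improvement versus last year: {last_year_pue - pue:+.2f}")
```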
From heterogeneous to homogeneous
Traditionally, the data centre floor has been a very heterogeneous environment, but this is gradually changing, thanks in the main to the wholesale virtualisation that has taken place. Mike says: “Thanks to virtualisation, there’s a much more common approach to servers and storage within the data centre. And we’re finding that many more data centres have homogeneous pods of IT that can be bundled into cabinets or racks – you simply need to define the pod requirements up front, along with the connectivity requirement for each pod. So you’re starting to see deployment of pods of IT capacity, which makes data centre hardware planning somewhat easier than before. For example, with defined pods of IT kit, it’s better, and easier, to separate the hot and cold aisles – allowing a very uniform approach to heat removal.”
With the recent change in the ASHRAE guidelines, there’s now a new class of hardware that can operate at warmer temperatures, and this has contributed positively to the way pods of computing can be installed and managed – it’s almost just a question of how big you want each pod of servers, storage and connectivity to be, with connectivity becoming ever more critical.
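To make the idea of defining a pod ‘up front’ a little more concrete, the sketch below captures the sort of parameters such a definition might contain – rack count, design power density, cooling approach, connectivity and allowable inlet temperature. The fields and figures are illustrative assumptions, not an IBM template:

```python
from dataclasses import dataclass

@dataclass
class PodSpec:
    """A simplified, illustrative definition of a homogeneous pod of IT capacity."""
    racks: int
    kw_per_rack: float        # design power density
    cooling: str              # e.g. hot/cold aisle containment
    uplinks_per_rack: int     # connectivity requirement
    max_inlet_temp_c: float   # allowable inlet temperature for this hardware class

    def total_kw(self) -> float:
        """Design power (and therefore heat) load for the whole pod."""
        return self.racks * self.kw_per_rack

# A hypothetical pod of virtualised compute:
compute_pod = PodSpec(racks=10, kw_per_rack=12.0,
                      cooling="hot/cold aisle containment",
                      uplinks_per_rack=4, max_inlet_temp_c=27.0)
print(f"Pod design load: {compute_pod.total_kw():.0f} kW")
```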
Data centre management
Despite the massive changes taking place in IT right now, Mike sees the attendant changes taking place in the data centre infrastructure as an evolution rather than a revolution. The emergence of Data Centre Infrastructure Management (DCIM) tools is a great example of this process. First there was the DCIM hype curve, with a lot of interest and noise, but then came the big dip in the adoption curve as data centre personnel worked out what DCIM meant to them. As a tool to help bring together the facilities and IT disciplines within any organisation, it has great potential, but it’s still early days in terms of the various offerings available – from ‘total’ DCIM software suites to point solutions. Most end users focus on the trinity of power management, data centre capacity and asset tracking; for others, the alarms and alerts functions of any management software are important; and for others still, change management is the main focus. In simple terms, there’s probably a DCIM offering out there to meet any particular set of needs, but the customer needs to decide what’s important before making an investment.
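As a trivial illustration of that ‘decide what’s important first’ step, a shortlisting exercise can be as simple as checking each candidate tool against the functions the business actually cares about. The product names and feature sets below are made up purely for the example:

```python
# Our priorities - the 'trinity' plus anything else the business insists on.
our_priorities = {"power management", "capacity planning", "asset tracking"}

# Hypothetical offerings and the functions they claim to cover.
offerings = {
    "Suite X (full DCIM)":     {"power management", "capacity planning", "asset tracking",
                                "alarms/alerts", "change management"},
    "Tool Y (point solution)": {"power management", "alarms/alerts"},
    "Tool Z (point solution)": {"asset tracking", "change management"},
}

for name, features in offerings.items():
    covered = our_priorities & features
    print(f"{name}: covers {len(covered)}/{len(our_priorities)} priorities -> {sorted(covered)}")
```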
Location
It’s a similar scenario when it comes to deciding on data centre location. A list of must-haves, nice-to-haves and things that simply aren’t important needs to be compiled. Top of any list has to be the availability of sufficient power – for both current and anticipated future IT needs – along with the availability of good network bandwidth. Other elements to consider include the cost of power, the climate (both typical temperatures and possible weather extremes), the labour skills available, regulation (where does my data have to reside?) and the cost of construction.
That’s plenty of things to consider. Clearly, each organisation will have a different set of priorities, so it’s difficult to give meaningful general advice, other than to make the observation that the IT evolution means that data centre location is, potentially, much more flexible than it once was.
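One way to keep that prioritisation fact-based is a simple weighted scoring of candidate sites against the criteria listed above. The weights and scores here are invented for the sake of the example; every organisation would plug in its own:

```python
# Weight each criterion by how much it matters to the business (invented values).
weights = {
    "power availability": 5,
    "network bandwidth": 5,
    "power cost": 3,
    "climate": 3,
    "labour skills": 2,
    "regulation/data residency": 4,
    "construction cost": 2,
}

# Score two hypothetical candidate sites out of 10 on each criterion.
site_a = {"power availability": 9, "network bandwidth": 8, "power cost": 5, "climate": 4,
          "labour skills": 8, "regulation/data residency": 9, "construction cost": 5}
site_b = {"power availability": 7, "network bandwidth": 7, "power cost": 8, "climate": 9,
          "labour skills": 5, "regulation/data residency": 9, "construction cost": 7}

for name, scores in (("Site A", site_a), ("Site B", site_b)):
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: weighted score {total}")
```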
One extra issue to evaluate is that of physical security. Given that data centres don’t tend to have a sign outside advertising that the building houses large amounts of business-critical data for one or more very important companies, what levels of security are required? Risk assessment is the key to implementing the appropriate physical security measures. Without underplaying the subject, Mike suggests that logical attacks pose a far greater security risk than anything physical.
Consolidation and convergence
And so we come to the nub, or hub, of the data centre debate right now. The current trends to virtualise everything, put it in a Cloud, allow employees to Bring Your Own Device (BYOD) and to Software-Define everything mean that the data centre needs to be managed holistically. It’s no longer sufficient for the facilities folks to do their own thing, irrespective of what the IT folks are doing at the same time. Virtualisation has led to the earlier-referenced denser, more standardised IT stacks. Applications need to be written with this new environment in mind, and there needs to be sufficient network capacity to ensure that data can be accessed whenever and wherever it’s needed, by whoever needs it.
The challenge for all those involved with the data centre – and it necessitates a collaborative approach – is how to provide access to the data. It might be stored on commodity storage and loaded onto commodity servers; the clever bit comes in what happens next. If facilities and IT don’t communicate, there’s a disruptive disjoint at this point: the handheld device user has to wait too long for an application to download, and goes elsewhere to do his or her shopping.
Mike explains: “The coming together of IT and facilities is a slow, slow adoption process. DCIM is helping to speed this up, but this convergence has been a topic of conversation for the last seven to eight years and it really is a slow process. We’re starting to see more of our facilities customers ask more questions about the operational side of the data centre, trying to understand what impact these elements will have on the actual design. But we’re a long way from getting this thought process to become part of the DNA of data centre operations.”
The IBM approach is to be very deliberate in compiling the planning requirements of a data centre build/refresh – talking to both IT and facilities. “You have to look at the holistic view,” says Mike. “You have to have input from the IT and facilities folks to ensure that the business is accurately represented. As we’re an IT company with vast facilities knowledge, we like to think we’re in a pretty good position to help customers understand what needs to be considered in the planning phase. This covers the obvious power and cooling requirement and the anticipated IT load, alongside the evolving applications and changes to the business environment such as possible mergers and acquisitions.”
A data centre is an organic environment, and needs to be planned with the future very much in mind.
Where to run your IT/applications?
Partly enabled by the present IT revolution and partly enabling it, outsourcing or colo is an item on the data centre agenda that cannot be ignored. According to Mike, the decision to operate your own data centre versus going the outsourcing or colo route is usually a very simple, financial one. Yes, there are other considerations, such as exposure to risk, regulatory requirements and the like, but if the sums stack up, there has to be a very good reason to ignore them when making the decision. Unsurprisingly, the hybrid approach seems to offer the best way forward: relatively low-value, low-importance applications can be placed elsewhere, but the business-critical apps are likely to remain in-house.
“For example, we’re very protective of the IBM.com website, for obvious reasons,” says Mike. “So we are very protective of where it is housed. But organisations will have applications that don’t have that same level of criticality, which would potentially be candidates for colocation or outsourcing.” And it’s for every customer to make these decisions, based on the risk/reward equation.
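The ‘very simple, financial’ comparison Mike refers to often boils down to putting annualised costs side by side. The sketch below does exactly that for a hypothetical 20-rack estate – every figure, including the assumed PUE and tariffs, is invented for illustration, and a real assessment would add migration, risk, regulatory and staffing factors:

```python
racks = 20
it_load_kw = 150          # average IT power draw

# In-house: amortised share of the build cost, plus power and facility operations.
build_cost_amortised = 400_000                 # per year, hypothetical
power_cost = it_load_kw * 8760 * 1.6 * 0.12    # kW x hours x assumed PUE x assumed GBP/kWh
facility_opex = 180_000                        # maintenance, staff share, etc.
in_house_total = build_cost_amortised + power_cost + facility_opex

# Colocation: rack rental plus metered power at an assumed all-in rate.
colo_total = racks * 1_500 * 12 + it_load_kw * 8760 * 0.15

print(f"In-house:   GBP {in_house_total:,.0f} per year")
print(f"Colocation: GBP {colo_total:,.0f} per year")
```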
How to get started
For many, addressing the data centre optimisation conundrum is a little like heading into the attic. Lift the hatch, take a look, realise that there’s a lot of hard work and decisions that need taking, so replace the lid and save it for another rainy day.
But are there some basics that can bring some immediate benefits – what Mike calls the ‘blocking and tackling’? “On the IT side,” he starts, “you have to have a handle on how you are using your hardware. There are plenty of cases where we have seen that the majority of IT hardware is less than five per cent utilised. Some applications are prime candidates for virtualisation, others not. So, start by using what you have better.
“As for facilities, do you pay your electricity bill, or at least know how large it is? Do you know your PUE? Do you know what uses power in your data centre? How can you do things better?”
Mike continues: “In a lot of the data centres I walk into, you need to wear a jacket, it’s that cold. This is part of a legacy approach that needs to be challenged. And if you are measuring data centre efficiency, compare last year’s PUE rating to this year’s and see if you’ve got any better. And be aware of the benefits that a fresh set of eyes just might bring.”
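Mike’s ‘use what you have better’ point is one of the easier ones to put numbers against: pull average utilisation figures from whatever monitoring is already in place and see which machines fall below a consolidation threshold. The server names and figures below are hypothetical:

```python
# Average CPU utilisation over the last month, in percent (hypothetical monitoring output).
avg_cpu_utilisation = {
    "app-server-01": 3.2,
    "app-server-02": 4.8,
    "db-server-01": 41.0,
    "web-server-01": 2.1,
}

CONSOLIDATION_THRESHOLD = 5.0   # servers below this are prime virtualisation candidates

candidates = sorted(s for s, u in avg_cpu_utilisation.items() if u < CONSOLIDATION_THRESHOLD)
print("Prime virtualisation candidates:", candidates)
```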
Mike cites one example of a data centre that had a free cooling system installed when it was initially built, but someone, in their wisdom, had decommissioned it, so the organisation was paying for cooling when there was no need. A crazy situation, but all too many data centres are paying for very inefficient power usage. And this seems a suitable concluding message – there are thousands of data centres out there, needlessly wasting significant sums of money on power and IT resources that they are using very inefficiently.
For more information go to: ibm.com/smarterdatacentre/uk