DRIVING THIS DEMAND are the growing markets for ever smarter phones and tablets, the arrival of 4G cellular networks, and a continuing migration of business and personal computing to the cloud model. In addition to the digitisation trend, there is the Internet of Everything phenomenon, global population growth and increasing urbanisation as people converge in the world’s cities. The market for cloud computing and hence data centres is only going one way.
Already there are operational data centres in the US measuring in excess of 0.5m sq ft (46,000 sq m), with others of capacity greater than 1m sq ft under construction. In Europe, Apple recently announced plans to build data centres of nearly 2m sq ft capacity in Ireland and Denmark. A collaborative venture between IBM and a local data centre operator at Langfang in China promises to deliver a data centre measuring in excess of 6m sq ft by 2016. And these are just the biggest of the estimated three million data centres currently in operation around the world.
A major challenge to their successful operation is managing the power needed to drive not just the IT and networking equipment itself but also the infrastructure, especially the cooling equipment, needed to keep all this high-density IT operating at high availability. Depending on a number of factors—including the surrounding environment of the centre, the nature of the IT equipment contained within and the design of the infrastructure around this equipment—as much power may be needed to run the non-IT equipment as is needed to power the servers, disk arrays and networking equipment at the heart of a data centre.
Inevitably, the data centre industry is paying diligent attention to operating in as environmentally friendly and power-efficient a manner as possible, for several good reasons.
First and foremost is the issue of cost. If one has to spend an additional dollar on powering lighting, cooling and power distribution equipment for every dollar spent on IT, costs will spiral out of control very quickly. Despite its rapid growth, the data centre market is a very competitive one. Cutting the costs associated with power is an imperative, not an option, for maintaining competitiveness.
Even the current slump in the price of oil, which has brought down energy costs, can only be seen as a temporary phenomenon: few commodities in international business are more capricious. It is always good business sense to reduce power consumption.
Environmental concerns, too, must loom large in the calculations of data centre operators. Apart from the desire to be seen to be environmentally sensitive, there is the concern that the data centre industry could, by its very success, attract the attention of regulators alarmed by the environmental damage caused by power-hungry facilities and their consequently hefty carbon footprints.
Some studies estimate that ICT equipment currently accounts for as much as 10% of global electricity consumption. Already administrations such as the UK’s are adopting a carrot approach of offering capital allowances to companies deploying energy-saving technology. As data centres increase in size and number, the carrot could very soon be augmented by the stick of environmental charges aimed specifically at the sector. It makes sense to pre-empt such measures by acting promptly to curb power consumption.
One way to lessen one's carbon footprint is to use renewable energy resources as much as possible. Here the Scandinavian countries are to the fore, both because of their cultural commitment to environmental protection and because of their abundance of renewable energy sources. Green Mountain operates two data centres in Norway, one of which, at Rennesoy near Stavanger, it claims to be the "greenest" in the world.
The plant, designed and built using prefabricated modular data centre infrastructure from Schneider Electric, has a capacity of nearly 250,000 sq ft (21,000 sq m) and is powered exclusively by renewable hydroelectricity. Prefabrication offers benefits in scalability, predictability, efficiency and speed of deployment. In addition, the ambient cooling provided by a nearby fjord greatly reduces cooling costs. Green Mountain's second data centre, at Rjukan, also uses hydroelectric power and, because the region enjoys almost 24-hour daylight in summer, is able to make use of solar-generated electricity as well.
In Falun in Sweden, the EcoDataCenter claims to be the world's first climate-positive data centre, taking its power entirely from renewable sources including solar, wind and water power and secondary biofuels. The facility, due to be operational early in 2016, also redirects excess heat generated by its equipment to warm local buildings through the district heating system.
Not all countries, of course, have the inherent advantages of these Scandinavian plants in terms of renewable energy and ambient cooling. With the best will in the world, many data centres must still operate from legacy sites in built-up areas, using electricity generated by fossil fuels. In such facilities, the design of the internal infrastructure is important in reducing power consumption.
An accepted metric for measuring energy efficiency is PUE, or Power Usage Effectiveness: essentially the ratio of the total power delivered to a data centre to the power consumed by the IT equipment itself. Importantly, the standard PUE metric takes no account of how the electricity is generated: it measures where you use energy, not how you make it.
Many legacy data centres have PUE ratings close to 2.0, although it is possible with careful design and choice of appropriate equipment to bring that closer to the ideal rating of 1.0. It is worth noting that the EcoDataCenter in Sweden is aiming for a PUE of 1.15, so its internal efficiency will augment its other advantages.
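The arithmetic behind a PUE figure is straightforward. The sketch below, using hypothetical load figures rather than data from any of the facilities mentioned, shows how the ratio is derived and why lower is better:

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power divided by IT power.

    1.0 is the theoretical ideal (every watt goes to IT); legacy
    data centres often sit close to 2.0.
    """
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical example: 1,000 kW of IT load plus 150 kW of cooling,
# lighting and power-distribution overhead gives a PUE of 1.15 --
# the figure the EcoDataCenter is targeting.
print(pue(1150, 1000))   # 1.15

# A legacy site drawing a full kilowatt of overhead per kilowatt
# of IT load lands at the 2.0 mark cited above.
print(pue(2000, 1000))   # 2.0
```

Note that a facility with a modest PUE but renewably sourced power may still have a smaller carbon footprint than a highly efficient facility running on fossil-fuel electricity, which is exactly the blind spot in the metric noted above.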
Data centres in any location can adopt measures to improve (that is, lower) their PUE rating and reduce energy costs. Containment systems in which racks of computing equipment are housed can have their own built-in cooling, so that the maximum cooling effort is directed to the IT equipment where it is needed, rather than expended on cooling the room as a whole. Furthermore, intelligent alignment of racks and enclosures into hot and cool aisles allows heat to be drawn away from sensitive equipment in a concerted and efficient manner.
Increasingly, containment systems come in modular form so that cooling infrastructure can be added incrementally as and when it is needed to match increases in the amount of IT equipment in a data centre. Right-sizing cooling and power provision to the IT equipment is a critical factor in reducing wasted energy in the data centre.
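A rough sketch of the right-sizing argument, using hypothetical module sizes and load figures rather than any vendor's actual specifications: adding cooling modules incrementally keeps the installed capacity tracking the IT load, whereas a monolithic plant sized for the final build-out leaves large amounts of capacity stranded in the early years.

```python
import math

MODULE_KW = 100   # hypothetical capacity of one cooling module

def modules_needed(it_load_kw, module_kw=MODULE_KW):
    """Smallest number of modules whose combined capacity covers the load."""
    return math.ceil(it_load_kw / module_kw)

def stranded_kw(it_load_kw, installed_kw):
    """Installed cooling capacity in excess of the current IT load."""
    return installed_kw - it_load_kw

# Hypothetical growth path: IT load rises from 120 kW to 480 kW.
monolithic_kw = 500   # day-one plant sized for the final build-out
for load in (120, 240, 480):
    n = modules_needed(load)
    print(f"{load} kW load: {n} modules, "
          f"{stranded_kw(load, n * MODULE_KW)} kW stranded modular vs "
          f"{stranded_kw(load, monolithic_kw)} kW stranded monolithic")
```

At the 120 kW stage, the modular approach strands 80 kW of capacity against 380 kW for the monolithic plant; the gap closes only as the facility approaches full build-out.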
Power distribution and power backup systems can also be designed to minimise power losses. Given the round-the-clock uptime needed by data centres, emergency power systems including uninterruptible power supply (UPS) systems are essential to guard against power disruption.
Inevitably these systems carry their own power overhead, but there is an increasing trend towards in-line systems that keep the additional power required to a minimum. In this way, power backup is guaranteed without unduly compromising power efficiency.
Data centre infrastructure management (DCIM) software, too, allows operators to monitor closely the effects of IT equipment, in terms of location and usage, on a data centre's cooling requirements, so that maximum efficiency can be maintained.
Even data centres in the most advantageous locations from an environmental, or green, point of view pay careful attention to their internal efficiencies. And no matter how challenging a data centre's location, it is possible, through careful design, astute choices of equipment and close monitoring of operations, to reduce power consumption, and therefore carbon footprint, to a minimum.