Can the public sector lead the way on data centre energy efficiency?

By Simon Campbell-Whyte DCA Executive Director.


A big “thank you” again to all involved, and especially to the 956 people who kindly gave up their time to contribute their expertise directly to the PEDCA project, which, as reported in the last issue, came to an end along with 2014. I’m pleased to say that much has been learned from the activity, not only from an industry perspective and in aligning the DCA and its agenda to the project’s goals, but also in the form of valuable experience of research and development projects in general. A 38-page “executive summary” of the 18-month project is now available for download at www.pedca.eu.
This experience is valuable for the industry and puts the DCA in a good position for such activities. So I’m also pleased to report that, as part of a consortium of eight partners, the DCA has just signed a €1.5 million grant agreement with the EU for a 30-month project to improve the uptake and procurement of environmentally sound, high-energy-performance data centre products and services within the EU public sector.
Clearly this is a fantastic opportunity for the industry to work with the public sector, to raise the profile of all the industry’s best practices and standards that many members have been working so hard to formulate and promote. The resulting increase in awareness of every Member’s service, product and solutions, be that technology, training, outsourcing etc, will no doubt be a win-win for both the public sector and the industry as a whole.
The project starts in March and will be showcased at the Public Sector Show at ExCeL London on 23rd June.
Talking of awareness activities, as also mentioned in the last issue, the DCA will be holding a Member Conference from 10:30 to 16:30 on 15th July at the University of Manchester. This follows the annual “Data Centre Transformation” conference on 14th July. It will provide the opportunity for all members to review and debate all the DCA group activities and projects, as well as industry issues in general. Please register as soon as you can.


Data centre energy efficiency case study

By Dr Beth Whitehead, Sustainability Engineer, Operational Intelligence Ltd and Sophia Flucker, Director, Operational Intelligence Ltd.

DATA CENTRES continue to grow in size, number and consumption; and as they do, approaches to their design have evolved. Low energy operation has become a design requirement, and the focus has widened from highly redundant mechanical and electrical infrastructure topologies alone to encompass both resilience and energy performance. Many new data centre designs have a target PUE of 1.2 or below, whereas legacy facilities might achieve a PUE of 2.0 or above, and the industry has become more aware of its environmental impact. With raised temperatures and improved air management, free cooling has become possible across the globe, increasing the potential for zero refrigeration.
This case study looks at the efforts made by a global financial services firm in their legacy facility to reduce PUE from 2.29 to 1.49, and their annual electricity bill by $1.7M.

The facility
The legacy data centre is located in Greater London and has 2N systems. It is run by a financial institution and has 1,800kW of IT load in a 3,500m² data hall, cooled by 53 CRAH units. The largest energy consumer was the cooling system.

A six-stage energy improvement programme was used:

1. Energy assessment and data hall air temperature survey – identify improvement opportunities.
The operations team was trained to conduct the energy assessment themselves. First, the existing PUE of 2.29 was established, which contributed to a $4.7M annual electricity bill. Second, air management was quantified and areas for potential improvement were established. In this case 80% of the CRAH air was bypassing the IT equipment, returning directly to the CRAHs, and 20% of server intake air was recirculated warm air from the server exhaust. Energy was being wasted moving 80% of the air with no cooling effect, and all servers had intake temperatures at the upper limit of the ASHRAE recommended range.
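These headline numbers can be cross-checked with a short calculation. In this sketch the IT load, PUE figures and the $4.7M bill come from the article, while the tariff is derived from them rather than stated anywhere in the text:

```python
# Back-of-envelope check of the case study figures.
# IT load, PUEs and the $4.7M bill are from the article; the tariff is implied.

IT_LOAD_KW = 1800
PUE_BEFORE = 2.29
PUE_AFTER = 1.49
HOURS_PER_YEAR = 8760

def annual_facility_kwh(it_load_kw, pue):
    """Total annual facility energy = IT energy x PUE."""
    return it_load_kw * HOURS_PER_YEAR * pue

before_kwh = annual_facility_kwh(IT_LOAD_KW, PUE_BEFORE)
after_kwh = annual_facility_kwh(IT_LOAD_KW, PUE_AFTER)

tariff = 4_700_000 / before_kwh          # implied $/kWh
saving = (before_kwh - after_kwh) * tariff

print(f"Implied tariff: ${tariff:.3f}/kWh")
print(f"Estimated annual saving: ${saving / 1e6:.1f}M")
```

The result lands within rounding distance of the $1.7M quoted later in the article (the bill figures there are themselves rounded to the nearest $0.1M).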

2. Implementation of air management improvements.
The following air management improvements were made, with return on investment of less than a year:
• Blanking plates installed in cabinets.
• Gaps in the raised floor were sealed – cable cut-outs and gaps around PDU bases.
• Floor grilles were moved to where they were needed.
• Semi-cold aisle containment was installed throughout, with flame-retardant curtains fitted above racks to separate hot and cold air streams.

3. Reduced CRAH fan speeds and change to supply air control.
With air now properly managed in the facility, it was possible to reduce CRAH fan speeds from 100% to 60%. Changing the CRAH unit control strategy from return air to supply air control also helped to maintain the air supplied at the server inlet within a narrow range (within 1°C).
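The disproportionate saving from slowing fans comes from the fan affinity laws: fan power scales roughly with the cube of speed, so running at 60% speed needs only about a fifth of full-speed power. A minimal check:

```python
# Fan affinity law: fan power scales (approximately) with the cube of speed.

def fan_power_fraction(speed_fraction):
    """Fraction of full-speed power drawn at a given speed fraction."""
    return speed_fraction ** 3

# 100% -> 60% speed cuts CRAH fan power to roughly 22% of its original value
print(f"Power at 60% speed: {fan_power_fraction(0.60):.1%} of full-speed power")
```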

4. Increase in data hall air and chilled water set points.
Cooling unit set points were then changed, a degree at a time, from 24°C on return air to 22°C on supply air, with an associated increase in chilled water temperature set points from 6/12°C to 17/24°C.

These temperature changes increased the COP of the chilled water systems and the chiller delta T, resulting in $1.0M annual savings in electricity consumption.

Further changes included:
• The outdoor air ventilation flow rate was minimised to pressurise the data hall, rather than to provide a set ventilation rate (not required by the design).
• Extract fans were disabled, as the design required pressurisation rather than ventilation.
• Minimum humidity control was widened in line with ASHRAE recommendations – from 50% RH to 5.5°C dew point control.
• Dehumidification was disabled, due to the limited hours of operation above the 15°C dew point (the ASHRAE recommended maximum).
These changes resulted in $0.3M annual savings in electricity consumption.

5. Feasibility of adding free cooling circuits to existing chilled water system.
The increased operating temperatures opened up the opportunity for free cooling operation. Several options were modelled, including cooling towers, and a dry cooler solution was chosen. Four dry coolers were installed, with both CRAH cooling coils operating simultaneously to leverage the additional heat exchange area. When both coils are used, the approach between leaving air and entering water temperature roughly halves. To achieve the same leaving air temperature, the chilled water temperature can therefore be increased (by roughly half the approach), resulting in a higher chiller COP and more hours of free cooling.

Dry cooler concept
Return water from the CRAH units passes through the additional circuit, which lowers the entering water to the chillers, reducing (sometimes eliminating) the chiller load. The dry coolers were added in series to allow additional hours of free cooling operation with partial free cooling. Connecting the dry coolers to the return section of the chilled water circuit meant little modification was required to the chiller controls. Also, connecting the dry coolers on the secondary side means they are supplied with warmer return temperatures, compared with on the primary circuit where there is some bypass flow, which again increases the number of possible free cooling hours.
Engineering analysis was used to model the predicted behaviour of the cooling systems. The energy model predicted 100% compressor-free cooling operation for 63% of the year, plus an additional 33% of partial free cooling (i.e. total or partial free cooling for 96% of the year).
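A model of this kind essentially bins hourly ambient conditions against the temperatures at which the dry coolers can carry all, part, or none of the load. A simplified sketch, with hypothetical thresholds rather than the project's actual design values:

```python
# Simplified free-cooling-hours model: bin hourly ambient temperatures into
# full, partial and no free cooling. Thresholds are hypothetical placeholders,
# not the project's design values.

def classify_hours(ambient_temps_c, full_fc_below=14.0, partial_fc_below=20.0):
    """Count hours of full, partial and no free cooling for an hourly profile."""
    full = sum(1 for t in ambient_temps_c if t < full_fc_below)
    partial = sum(1 for t in ambient_temps_c
                  if full_fc_below <= t < partial_fc_below)
    none = len(ambient_temps_c) - full - partial
    return full, partial, none

# Toy hourly profile: one cold, one mild, one warm hour
print(classify_hours([8.0, 16.0, 25.0]))  # (1, 1, 1)
```

A real model would use a full 8,760-hour weather file for the site and derive the thresholds from the chilled water set points and dry cooler approach temperatures.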

6. Installation of free cooling modification using dry coolers.
The dry coolers were installed, and are now running. These changes resulted in an additional $0.4M annual saving in electricity consumption with a return on investment of less than 3 years.

Conclusion
The savings described in this article have reduced the annual electricity bill by $1.7M from $4.7M to $3.0M. Further savings were made on top of those described, by optimising the UPS system and lighting loads. The case study shows that these savings are possible, not only on legacy sites, but also in high availability environments.

Bearing in mind the criticality and live nature of the facility, changes were implemented incrementally and their impact monitored closely and continually, to ensure the same high level of reliability was maintained. Because of this, the savings were achieved over a period of time. Rather than stopping at the end of the programme, however, the operator has continued to increase data hall supply temperatures and chilled water temperatures leading to further savings, and has instigated similar programmes throughout their global portfolio.
Key to the success of the programme was buy-in from the operational team, who with training drove the optimisation process. The energy saving programme included a workshop element, which provided a common language for all teams to work together towards a mutual goal of saving energy, enabling them to share cross-discipline knowledge and experience.


Technologies for UPS energy efficiency

Energy efficiency is a major concern for all uninterruptible power supply (UPS) users, particularly those who operate large data centres where power requirements can run into megawatts. Fortunately, as Janne Paananen of Eaton explains, the latest technologies make it possible to reach energy efficiency levels that, not long ago, would have been considered impossible.

WHY IS THERE so much focus on the energy efficiency of UPSs? The answer is simple – even though present-day products have efficiency figures that are usually well over 90%, a small additional increase in efficiency can mean big savings, especially in large installations. Consider, for example, a data centre where the UPS installation supplies a total load of around 1 MW, which today is by no means unusual.

At typical energy prices, increasing the efficiency of the UPS by just one per cent will save more than €10,000 per year and reduce CO2 emissions by around 50 tonnes. This is equivalent to driving a car around the world eight times. And these figures don’t even take into account the extra savings that will be made because more efficient UPSs need less cooling.
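The arithmetic behind these figures can be sketched in a few lines; the tariff (€0.10/kWh) and grid carbon factor (0.5kg CO2/kWh) are illustrative assumptions, not figures from the article:

```python
# Checking the "1% efficiency on a 1 MW load" claim, e.g. 94% -> 95%.
# Tariff and carbon intensity are illustrative assumptions.

LOAD_KW = 1000
HOURS = 8760
TARIFF_EUR_PER_KWH = 0.10     # assumed
CO2_KG_PER_KWH = 0.5          # assumed grid carbon intensity

def input_power_kw(load_kw, efficiency):
    """Power drawn from the mains to deliver load_kw at a given efficiency."""
    return load_kw / efficiency

saved_kwh = (input_power_kw(LOAD_KW, 0.94) - input_power_kw(LOAD_KW, 0.95)) * HOURS
print(f"Energy saved: {saved_kwh:,.0f} kWh/year")
print(f"Cost saved:   EUR {saved_kwh * TARIFF_EUR_PER_KWH:,.0f}/year")
print(f"CO2 avoided:  {saved_kwh * CO2_KG_PER_KWH / 1000:.0f} tonnes/year")
```

Under these assumptions the saving comes out at roughly €10,000 and around 50 tonnes of CO2 per year, in line with the figures above.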

Now let’s consider factors that affect UPS efficiency. We’ll concentrate on double-conversion types because these provide the most comprehensive protection and are almost always chosen to protect critical loads. A good double-conversion UPS can be expected to have an efficiency of around 94%. Or at least it can if it is loaded to 40% or more of its capacity. With smaller loads, the efficiency falls dramatically and at 10% load it may not be much more than 80%.

This is potentially bad news in modern virtualised data centres, where loads can change rapidly and unpredictably throughout the day, and in applications where the UPS installation has been deliberately oversized to allow for the future addition of server racks. There is, however, a solution, in the form of Variable Module Management System (VMMS) technology.

VMMS technology uses UPSs that are each made up of several uninterruptible power modules (UPMs) that, in effect, work in parallel to supply the load. When the UPS is lightly loaded, any UPMs that are not currently needed to supply the load are taken out of service and put into a low power quiescent mode. As a result, all of the load is transferred to the remaining UPMs, so that these are well loaded and will operate at, or near, their maximum efficiency. The load distribution is managed dynamically, so that the quiescent UPMs are instantly brought back into operation when the overall load on the installation increases.

Installations made up of multiple UPSs can also be managed by VMMS. If, for example, the installation has three UPSs each comprising three UPMs – a total of nine UPMs – VMMS can dynamically configure the installation so that one UPM, all nine UPMs or any number in between are operational as needed to support the load at a particular time.
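The dispatch idea can be sketched in a few lines. This is a hypothetical illustration of the principle, not Eaton's actual algorithm; the module capacity and target loading are made-up parameters:

```python
import math

def active_upms(load_kw, upm_capacity_kw, total_upms, target_load_fraction=0.7):
    """Keep just enough modules active that each runs near its efficient
    operating point; the remainder stay in low-power quiescent mode."""
    needed = math.ceil(load_kw / (upm_capacity_kw * target_load_fraction))
    return max(1, min(needed, total_upms))

# Nine hypothetical 200 kW modules:
print(active_upms(120, 200, 9))    # light load  -> 1 module active
print(active_upms(1000, 200, 9))   # heavy load  -> 8 modules active
```

A production system would also enforce redundancy constraints (never dropping below N+1 capacity) and rotate which modules rest, so wear is shared evenly.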

VMMS helps enormously to maximise efficiency but, even when operating at their optimum load levels, conventional double-conversion UPSs are no more than 94% efficient. What can be done to further increase efficiency? The answer is to use UPSs that incorporate Energy Saver System (ESS) technology. To understand how ESS works, it’s useful to recall that all double-conversion UPSs comprise four power function blocks – the rectifier, the inverter, the battery and a static switch. Between them, these offer three modes of operation. Normally, the UPS operates in double-conversion mode, with the rectifier taking power from the mains and feeding it to the inverter, which supplies the loads. If the mains supply fails or its quality deteriorates, the battery feeds the inverter to maintain power to the loads.

Finally, the UPS can operate in bypass mode with the static switch closed. In this mode, the rectifier and inverter are bypassed, and power is fed directly to the loads via the static switch. This mode is most often used if there is a problem with the rectifier or inverter, or if maintenance is being carried out on the UPS.

A UPS with ESS has exactly the same four power function blocks. No new blocks that could increase complexity or reduce reliability are introduced. The ESS UPS makes better use of the blocks, however, by providing a fourth operating mode – ESS mode. In this mode, the static switch is closed and power is fed direct from the mains supply to the loads, just as it is in an ordinary UPS operating in bypass mode.

The big difference, however, is that in ESS mode, the rectifier and inverter are held in a state of ‘extraordinary system preparedness’. This means that, if the mains power quality falters, the UPS can switch to full double-conversion operation in less than two milliseconds. This is so fast that the transition is invisible even to the most sensitive of IT equipment.

ESS UPSs will normally operate in ESS mode almost all the time, switching to double-conversion mode only occasionally to deal with mains disturbances. In ESS mode, their efficiency is a truly impressive 99% as the losses associated with the rectifier and inverter are eliminated. The potential for energy savings is huge and, as an added bonus, because ESS UPSs run cooler most of the time than conventional UPSs, they are more reliable and they have longer working lives. While VMMS and ESS technologies offer effective ways of significantly increasing the efficiencies of UPSs during normal operation, there is another aspect of energy efficiency that needs to be considered.
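To see where the jump from 94% to 99% pays off, compare annual losses in the two modes for a 1 MW load; the tariff below is again an illustrative assumption:

```python
# Annual UPS losses for a 1 MW load: 94% double conversion vs 99% ESS mode.
# The tariff is an illustrative assumption.

LOAD_KW, HOURS, TARIFF_EUR_PER_KWH = 1000, 8760, 0.10

def annual_loss_kwh(load_kw, efficiency):
    """Losses = input - output = load * (1/eff - 1), summed over the year."""
    return load_kw * (1 / efficiency - 1) * HOURS

dc_loss = annual_loss_kwh(LOAD_KW, 0.94)
ess_loss = annual_loss_kwh(LOAD_KW, 0.99)
print(f"Saving: EUR {(dc_loss - ess_loss) * TARIFF_EUR_PER_KWH:,.0f}/year")
```

Under these assumptions the difference is worth several tens of thousands of euros per year, before counting the reduced cooling load.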

To ensure that the batteries used in UPSs are still capable of supporting the load for the required run time, they must be tested regularly. The most conclusive way of doing this is with a discharge test which, as the name suggests, involves almost fully discharging the batteries while their performance and capacity is monitored.
The traditional way of doing this is to discharge the batteries into a load bank – essentially a bank of resistors – which means all the energy the batteries have stored is converted to heat and wasted. In a big UPS installation this can be a significant amount of energy: in a 3 MW installation recently tested, for example, the energy wasted in annual battery testing was worth €50,000.

Another advanced technology – Easy Capacity Test (ECT) – provides a solution for regular maintenance. With ECT, the UPS is temporarily reconfigured so that its own power modules can be used to feed the energy from the batteries back into the supply system. This eliminates the need for external load banks and all associated costs and, as a bonus, no waste heat is generated and the energy taken from the battery during the discharge test is put to good use. In redundant systems the testing can be performed concurrently to avoid unnecessary downtime.

In summary, in this article, we have discussed three novel technologies: VMMS, ESS and ECT. Between them, these technologies can make possible energy savings worth tens of thousands of euros over the life of a large UPS installation, as well as significantly reducing the installation’s environmental impact. Those involved in the specification and purchase of UPSs are, therefore, encouraged to seek out products that incorporate these technologies so that they too can enjoy lower costs combined with reduced carbon footprint.

To learn more about Eaton’s power quality solutions, visit www.eaton.eu/powerquality. For all of the latest news follow us on Twitter via @Eaton_UPS or find our Eaton EMEA LinkedIn company page.


Has the need for refrigeration now been eliminated from data centre cooling forever?

By Simon Davis, Sales Director for Aqua Cooling.

DATA CENTRES have moved a long way in their designs in a relatively short period of time. Initially they tended to utilise expensive and energy-hungry cooling systems, all based on a design premise of 19°C air temperature and chilled water temperatures of 6°C, with refrigeration as the only source of cooling. In the next phase, chilled water temperatures (where used) were slowly increased to 10°C to maximise energy savings, and free cooling was introduced on water-based systems.

These changes had one common goal – to reduce energy costs. We’d like to kid ourselves that the force driving these design changes was that we were being “green” but we all know the reality was a need to reduce our energy costs, in an era where they were identified as 33% of total operating costs.

Along the way, refrigerant CRAC DX systems were left behind as somewhat expensive relics and have slowly been removed or relegated to standby duty.
The introduction of Turbocor compressors was a real gold rush for the companies selling them. They offered super-efficient compressors, flooded evaporators and oversized everything, which was great if chilled water was still required. In reality these had their own downsides, ranging from expensive specialist service requirements to super-expensive replacement compressors. If the wonderful futuristic levitating bearings went wrong and the entire compressor required replacing, you’d be looking at a figure of well over £20,000, which you can bet wasn’t mentioned in the sales pitch!
However, even Turbocors have now been left far behind in the race by server manufacturers to sell their equipment with higher and higher air-on temperatures. This has driven the original 19°C/50% RH requirement up to 27°C and beyond. ASHRAE itself now deems 27°C a normal running condition, with 32°C acceptable for short periods of time.
So, with these temperatures, can we now just use fresh air systems to cool our data centres rather than refrigeration?

The answer is: not entirely. Fresh air systems require large footprints and won’t fit all sites – new data centres have to be designed with real estate in mind, and where you are trying to build will probably dictate what you eventually turn to. As with all things, there are those who advocate these systems and those who reel off their many disadvantages whilst ignoring the benefits.

With ambient temperatures now regularly hitting 35°C at some point during summer, and given the quantity of adiabatic water that has to be stored in case of mains water failure, many systems still have full DX or chilled water backup to provide their N+1 or N+N resilience. The other major downside is their limited ability to shift enough air in medium to high density solutions. Add to this the fact that most servers now run with a 12°C temperature rise across them at full tilt: with a design supply temperature of 27°C, the hot aisle is close to 40°C and uncomfortable to work in.
So, is there an alternative that potentially gives a closed solution without having to duct large amounts of air in and out of a centre whilst still promising a refrigeration free, or chillerless data centre?

Well, nearly… The rise of the water-cooled rear door within centres, along with the return of cooling towers to industry thought processes, provides this perfectly. Or, as a cooling tower alternative, the use of a hybrid adiabatic cooler combined with a relatively cheap and cheerful chiller as the +1 is growing in popularity due to efficiency, cost and working environment.

Having already achieved a truly chillerless centre operating with the above, and having won CEEDA Gold award accreditation last year, we have clearly identified and proven that this can work – with a temperature-neutral system that keeps the front of the cabinets (the old cold aisle) at the same temperature as the rear (which used to be the hot aisle).
Many people do not like the idea of cooling towers because of Legionella and the associated health and safety requirements. But, to those in the know, not one properly controlled tower within the UK has actually been the cause of Legionella and, if properly maintained, a tower will not be at risk of it either. However, as we have found, this does raise an interesting quandary: with the rise in water temperatures and the possibility of mixing hybrid adiabatic coolers with a chiller, for cooling loads below 500kW the energy savings are in reality outweighed by water usage and water treatment costs, so the green credentials are somewhat false and the running costs overtake the energy savings.
One of the biggest sticking points many people still have with water-cooled systems is water being in such close proximity to the servers and data. This can easily be addressed by running the chilled water through a leak prevention system (LPS), which operates the cooling water under negative pressure and therefore completely mitigates the risk of water escaping from the cooling system (whilst keeping it running, even with a leak). This British-patented product has now been successfully employed in many countries across the world and can be designed for systems from 50kW to 2MW.
So, in answer to the question of chillerless or refrigeration-free data centres, I am convinced that one size does not fit all and there are pros and cons to all systems. BUT it’s fair to say that the case for fitting super-efficient, super-expensive chillers has now been dispelled, and that good old natural sources of cooling, such as boreholes, adiabatic hybrid cooling and evaporative cooling towers, are the future for most retrofit and new systems.


Considering Direct Fresh Air Free Cooling…?

by James Wilman, Engineering Sales Director, Future-Tech.

WITH THE COST of energy representing one of the largest operating overheads associated with data centres, finding more energy efficient ways of supporting IT systems is a major driver for our industry. Ancillary systems, most markedly power and cooling infrastructure, present one area of tangible improvement and these systems have seen huge gains in energy efficiency over recent years.

The creation of metrics such as PUE and DCiE has not only given us an easy way of tracking improvement within existing data centres but has also given marketeers a simple way of comparing one facility or solution to another. As most of you reading this will know, the aforementioned professional group has, in some cases, abused these metrics. However, what their wholesale use has done is bring the concept of data centre energy efficiency to CFOs, CEOs, COOs and end users in a simple and accessible format. This can only be a good thing, and over the last 10 years data centres have seen average PUEs fall from over 3 to sub-1.5 in all types of facilities, from enterprise to micro.

So what has this got to do with choosing direct fresh air free cooling?
As a vendor-neutral data centre designer, I have a huge number of products and solutions available to help me reduce a data centre’s PUE. Often an effective place to start is by incorporating compressor-free cooling. These cooling systems fundamentally fall into two camps – direct and indirect. Indirect systems use cooling coils or heat exchangers to transfer a data centre’s waste heat to atmosphere; these systems maintain separation between the data centre’s clean environment and the outside world. Direct air systems remove the data centre’s hot air by replacing it with cooler air from outside.

Direct fresh air cooling systems have been used for many years and there is no doubt they can be very energy efficient. Future-Tech has installed a broad range of these systems over the years and achieved annualised, full facility, PUEs of 1.12. Many of these solutions have been retro fitted to existing live data centres resulting in significant energy savings for their owner operators. Although these systems have worked well it should be noted that direct fresh air solutions are definitely not a silver bullet for energy efficient data centre cooling.

Over the last five years I have seen a number of direct fresh air systems installed by data centre design companies and owner operators who have not properly thought through the risks. Where this has gone wrong has been with regard to external contaminants found in the local environment and the inability of standard filtration to remove them. The most common contaminant I have experienced has been sea salt. I have met and spoken with a number of data centre owner operators, with facilities located up to five miles from the sea, who have experienced high saline levels in their data centre’s incoming air. Some of these data centres have been located further from the coast but within a similar distance of tidal estuaries or rivers. When talking about the risks presented by high saline levels it is often difficult to provide “real world” evidence, because organisations that have experienced such problems don’t really want to make them public knowledge. On this occasion, though, an organisation has.

The photos above were provided by an organisation which, against Future-Tech’s advice, installed a direct fresh air system and, although they will remain nameless, wanted to get the evidence out there to help other owner operators avoid the same issues. They show the result after less than three years of operation in an area that is not deemed a “maritime environment” and has a moderate saline content. Obviously this is pretty scary stuff: if a powder-coated cabinet can look like this within three years, imagine what the inside of one of those servers must be like. The problem is that standard filtration, such as EU4 and EU8, will not effectively remove saline contamination from the incoming air.

Other contaminants, such as pollen and diesel particulates, can also cause issues, but with correct filtration (EU4 and EU8) these issues are normally only operational. By this I mean the filters get blocked quickly and require frequent changing. If the filters are not changed, the static pressure of the cooling system will increase, making the fans work harder and use more energy. If incorrect filtration is used, these foreign particulates will make their way into your data centre and IT equipment. I have visited data centres with this problem and, on one occasion, seen pollen caked onto the front of cabinets, much like the salt in the photos above. However, the owner operator in question was not willing to allow me to take photos.

If your organisation is looking for ways to make its data centres more energy efficient, whilst avoiding potential risks, please contact Future-Tech on 0845 9000 127 for some impartial advice.


The data centre of 2025: power and efficiency

By Giovanni Zanei, product marketing director, Power Systems for Emerson Network Power in Europe, Middle East and Africa.

EMERSON NETWORK POWER has been a partner with the Data Centre Alliance (DCA) since 2013. The company contributes to forums, standards and joint industry initiatives via the DCA’s collaborative platform, Data Central. Giovanni Zanei, product marketing director, Power Systems for Emerson Network Power EMEA, shares his insights on the future of data centre power and efficiency.

Data centre power has become a hot topic in recent years with organisations increasingly under pressure to reduce their energy consumption. Rising energy costs, alongside new regulatory pressures and government targets, have driven many of the changes in management approach and priorities. And this isn’t likely to change anytime soon.
Powering the data centre of the future is dependent on many factors such as location, demand for computing and storage, and the evolution of technology.

Emerson Network Power’s report, “Data Center 2025: Exploring the Possibilities”, uncovered the key trends we’re likely to see gain further momentum over the next ten years. The report suggested that renewable energy, particularly solar, is set to have a more optimistic future in data centre power over the coming years.

In today’s market, the EU’s Renewable Energy Directive sets an ambitious target of 20 percent of final energy consumption from renewable sources by 2020. To cope with the increased demand, solar technology will need to make some significant advancements in the next decade.
Other alternative sources are also likely to start playing a more substantial role. Research from Microsoft’s Global Foundation Services suggests that data centre engineers have started exploring the possibility of powering a data centre entirely by fuel cells built into the server racks. This would mean that the power plant would be brought inside the data centre to minimise power distribution losses – a development that would bring great benefits in minimising downtime.
Sourcing power isn’t the only issue grabbing the attention of data centre management. Energy efficiency, the other side of the same coin, is becoming an equally prominent issue for data centre operators. After all, while we are likely to see innovations in power sourcing, this doesn’t factor in how that power is being used. Inefficient legacy technology would counteract any advances in increasing the proportion of renewable energy sourced.

While it’s difficult to make exact predictions, the good news is that a significant majority (64 percent) of the Data Centre 2025 respondents believe that by 2025 less energy will be required to produce the same level of computing performance available today. In fact this seems a relatively low figure considering the majority of respondents also believe data centre infrastructure and IT equipment will become more efficient over the next decade.

Indeed, by 2025 it is expected that innovations will lead to efficiencies across a broad range of data centre elements, such as increased server efficiency, data centre thermal management and streamlined power delivery. For instance, Uninterruptible Power Supply (UPS) systems are increasingly important in meeting the needs of end users. Standardisation within the UPS industry is an important trend, and manufacturers are striving to offer more consistent applications leading to greater efficiency and more streamlined solutions. Indeed, UPS systems have made significant advances in efficiency in recent years with the introduction of new, highly efficient functioning modes and intelligent paralleling, and can now produce efficiencies approaching 99 percent.

Monitoring and diagnostics are already a vital weapon in maximising efficiency in the modern data centre and are set to become increasingly significant in data centre management. Diagnostic capability, data tracking, measuring and logging, as well as predictive maintenance and event analysis features, enable modern systems to intelligently adapt the power supplied to the load in response to the environmental conditions of the installation site, dramatically improving site availability, efficiency and power management.

Data centre operation looks set to evolve immensely over the next decade, with power and efficiency a top priority for the industry. With significant changes expected in equipment set-up, cooling temperatures and power sources, data centre infrastructure management (DCIM) solutions are going to become a necessity for managers to keep pace with the ever-evolving requirements of modern data centres and stay ahead of industry competition.