According to Microsoft, the main justification for this was improving internet connectivity for coastal communities: the data would have less distance to travel, leading to faster and smoother web browsing for users.
However, as part of ‘Project Natick’, as the initiative has been dubbed, Microsoft has also acknowledged that data centres typically generate a lot of heat, and that placing them in the sea allows them to cool far more quickly. This would not only reduce energy usage and cut costs, it would also improve the longevity of the unit.
According to research from the Global e-Sustainability Initiative (GeSI), data centres already consume over 3% of the world’s total electricity and generate 2% of our planet’s CO2 emissions. For context, that’s the equivalent of the entire global aviation industry or a small city.
Many businesses, including the likes of Apple[2], are starting to adopt the concept of a ‘green data centre’. But there’s still plenty of work that could – and more importantly should – take place.
For most businesses, taking their IT systems for a dip in the deep blue is not really a viable option. There are, however, a few easy and relatively cheap steps you can take today to reduce energy waste within your data centre.
Use a containment system
Excessive heat is often the main culprit when it comes to power waste within data centres. It takes a lot of energy to keep systems cool, and larger server rooms and data centres often mix hot and cold air in order to hold the ideal temperature. Mixing air in this way can, however, limit the capacity of the cooling system, draining power and causing it to run less efficiently.
This can be resolved by fitting air tiles into the cold aisle of the system. Not only does this make cooling more effective, it also raises return temperatures, allowing your computer room air conditioning (CRAC) units to operate more efficiently.
Of course, hot and cold aisle containment probably isn’t practical for smaller server rooms and data centres, where space restrictions and increased costs make the option prohibitive.
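As a rough illustration of why warmer return air helps, the short Python sketch below applies the standard sensible-cooling relationship (capacity ≈ mass flow × specific heat × temperature difference) to an assumed CRAC airflow. The airflow and temperature figures are illustrative assumptions, not measurements from any particular unit.

```python
# Illustrative sketch: the sensible cooling capacity of a CRAC unit rises with
# return air temperature, because capacity is proportional to the difference
# between return and supply air temperatures (Q = m_dot * cp * dT).
# All figures below are assumptions chosen for illustration only.

AIR_DENSITY = 1.2          # kg/m^3, approximate density of air at room conditions
SPECIFIC_HEAT = 1.005      # kJ/(kg*K), specific heat capacity of air
AIRFLOW_M3_S = 4.0         # m^3/s, assumed airflow through the CRAC unit
SUPPLY_TEMP_C = 18.0       # degC, assumed supply air temperature

def sensible_capacity_kw(return_temp_c: float) -> float:
    """Sensible cooling capacity in kW for a given return air temperature."""
    mass_flow = AIR_DENSITY * AIRFLOW_M3_S       # kg/s
    delta_t = return_temp_c - SUPPLY_TEMP_C      # K
    return mass_flow * SPECIFIC_HEAT * delta_t   # kW

# Compare a mixed-air scenario (cool return air) with a contained cold aisle
# (warmer return air reaching the CRAC unit).
for label, return_temp in [("mixed air, 24 degC return", 24.0),
                           ("contained aisle, 32 degC return", 32.0)]:
    print(f"{label}: ~{sensible_capacity_kw(return_temp):.0f} kW sensible capacity")
```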
Virtualise servers and storage
Within data centres you will often find a dedicated server for each application, which can be incredibly inefficient in terms of both energy use and budget.
With virtualisation, you can consolidate servers and storage onto one shared platform, whilst still maintaining a level of segregation between data, operating systems and applications.
This runs more efficiently, saves space, and reduces the number of power-consuming servers, which is great for cost and for reducing energy waste.
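To give a feel for the arithmetic, the Python sketch below compares the estimated annual energy draw of a fleet of lightly used dedicated servers with a handful of virtualised hosts carrying the same workloads. The server counts, power draws and electricity price are illustrative assumptions rather than benchmarks.

```python
# Illustrative sketch: estimated annual energy use before and after server
# consolidation through virtualisation. The power figures, server counts and
# electricity price are assumptions for illustration only.

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15   # assumed electricity price in GBP/kWh

def annual_kwh(server_count: int, avg_draw_watts: float) -> float:
    """Annual energy consumption in kWh for a group of servers."""
    return server_count * avg_draw_watts * HOURS_PER_YEAR / 1000

# Before: one dedicated, lightly utilised server per application.
dedicated = annual_kwh(server_count=20, avg_draw_watts=250)

# After: the same workloads consolidated onto a few virtualised hosts,
# each drawing more power individually but replacing many idle machines.
virtualised = annual_kwh(server_count=4, avg_draw_watts=400)

saving_kwh = dedicated - virtualised
print(f"Dedicated servers: {dedicated:,.0f} kWh/year")
print(f"Virtualised hosts: {virtualised:,.0f} kWh/year")
print(f"Estimated saving:  {saving_kwh:,.0f} kWh/year "
      f"(~GBP {saving_kwh * PRICE_PER_KWH:,.0f})")
```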
Turn off idle IT equipment
It might seem obvious, but leaving equipment idle uses more energy than you might think. IT systems are often used far below their capacity: servers, for instance, tend to be only around 5-15% utilised, and PCs 10-20%.
When these systems are left on but unused, they still consume a large proportion of the power they would need when running at full capacity.
To remedy this, assess the equipment in use, how often it is used, and whether it could be powered down during quieter periods. It may be a relatively minor action, but it’s the cheapest and easiest way to save energy, and it can be actioned today.
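The Python sketch below, again using purely illustrative assumptions about fleet size, idle power draw and the length of the quiet period, shows how quickly those idle hours add up over a year.

```python
# Illustrative sketch: energy saved by powering equipment down during quiet
# periods instead of leaving it idle around the clock. The idle power draw,
# fleet size and schedule are assumptions for illustration only.

IDLE_DRAW_WATTS = 60      # assumed idle draw per machine
MACHINE_COUNT = 50        # assumed number of machines that could be powered down
OFF_HOURS_PER_DAY = 12    # assumed quiet period: overnight
DAYS_PER_YEAR = 365

def annual_saving_kwh(machines: int, idle_watts: float, off_hours: float) -> float:
    """kWh saved per year by powering the machines off during the quiet period."""
    return machines * idle_watts * off_hours * DAYS_PER_YEAR / 1000

saving = annual_saving_kwh(MACHINE_COUNT, IDLE_DRAW_WATTS, OFF_HOURS_PER_DAY)
print(f"Estimated annual saving: {saving:,.0f} kWh")
```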
Move to a more energy-efficient UPS system
At the heart of most data centres lies an uninterruptible power supply (UPS) system, an electrical unit used to support critical IT and communications infrastructure when mains power fails or the supply is inconsistent.
Previously, these units were part of the energy consumption problem. The large, standalone towers used older technology that could only achieve optimum efficiency when carrying heavy loads of 80-90% of capacity.
Such fixed-capacity units were often oversized at installation to provide the necessary redundancy, meaning they regularly ran inefficiently at lower loads and wasted huge amounts of energy. These sizeable towers also pumped out plenty of heat, so they needed lots of energy-intensive cooling.
However, the technology has developed rapidly in recent years, and now your UPS system could be part of the solution.
Just as cooling equipment has improved, so too has UPS technology. Modular systems – which replace sizeable standalone units with compact individual rack-mount style power modules paralleled together to provide capacity and redundancy – deliver performance efficiency, scalability, and ‘smart’ interconnectivity far beyond the capabilities of their predecessors.
The modular approach ensures capacity corresponds closely to the data centre’s load requirements, removing the risk of oversizing and reducing day-to-day power consumption, which cuts both energy bills and the site’s carbon footprint. It also gives facilities managers the flexibility to add extra power modules whenever the need arises, minimising the initial investment while offering the in-built scalability to “pay as you grow”.
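To illustrate the difference this can make, the Python sketch below compares the annual electrical losses of an oversized fixed-capacity UPS running well below its efficient operating point with a right-sized modular system. The load, efficiency figures and electricity price are illustrative assumptions, not vendor specifications.

```python
# Illustrative sketch: annual electrical losses of an oversized fixed-capacity
# UPS running at a low load fraction versus a right-sized modular system
# running nearer its efficient operating point. The load, efficiency figures
# and electricity price are assumptions for illustration only.

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15        # assumed electricity price in GBP/kWh
IT_LOAD_KW = 40.0           # assumed critical load supported by the UPS

def annual_losses_kwh(load_kw: float, efficiency: float) -> float:
    """kWh lost per year in the UPS for a given operating efficiency."""
    input_kw = load_kw / efficiency
    return (input_kw - load_kw) * HOURS_PER_YEAR

# Assumed operating efficiencies: a legacy fixed unit sized well above the
# load runs far from its sweet spot; a modular system scaled to the load
# runs closer to its rated efficiency.
fixed_losses = annual_losses_kwh(IT_LOAD_KW, efficiency=0.88)
modular_losses = annual_losses_kwh(IT_LOAD_KW, efficiency=0.96)

difference = fixed_losses - modular_losses
print(f"Oversized fixed UPS losses: {fixed_losses:,.0f} kWh/year")
print(f"Modular UPS losses:         {modular_losses:,.0f} kWh/year")
print(f"Estimated reduction:        {difference:,.0f} kWh/year "
      f"(~GBP {difference * PRICE_PER_KWH:,.0f})")
```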
If cutting carbon costs and reducing energy consumption within your data centre is at the top of your agenda, these small adjustments can be made quickly and at relatively little cost. You could see your IT energy waste fall dramatically, so there is no need to sink your data centre just yet.