From grid to chip for sustainable AI growth

By JP Buzzell, chief data centre architect at Eaton.

  • Monday, 20th April 2026, posted by Phil Alsop

Power is a critical factor in the future development of data centers supporting AI-powered applications and services for consumers and businesses. According to the IEA, global data center power consumption is projected to rise steeply, from around 415 terawatt-hours (TWh) today to around 945 TWh by 2030 (Source: IEA).

This highlights the need to review data centre power infrastructure models to accommodate rapid growth in power demand. 

AI’s power and sustainability challenges

One of the primary drivers is the considerably higher power consumption of modern AI processors compared to traditional processors. An AI GPU consumes 700W to 1,200W, while a traditional server CPU draws 150W to 200W (Source: Hanwha). These demands are compounded by the sustained power profiles of AI workloads, in contrast to conventional cloud computing, where loads vary throughout the day.

The sustained power consumption of AI introduces new challenges for energy planning. Even simple AI tasks require significant energy. For example, research from MIT reveals that generating a single image using AI requires roughly the same amount of energy as fully charging a smartphone. At scale, this has significant implications for overall power demand across AI applications.
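A back-of-envelope calculation suggests the scale involved. Both figures below are illustrative assumptions (a per-image energy of roughly one smartphone charge, and a hypothetical daily image volume), not measured data:

```python
# Back-of-envelope scale-up; both figures are illustrative assumptions.
KWH_PER_IMAGE = 0.012            # ~one full smartphone charge (assumed)
IMAGES_PER_DAY = 100_000_000     # hypothetical global daily volume

daily_mwh = KWH_PER_IMAGE * IMAGES_PER_DAY / 1000
print(f"{daily_mwh:,.0f} MWh per day")   # 1,200 MWh per day
```

Even at these rough assumptions, a single AI workload class consumes energy on the scale of a mid-sized power plant's daily output.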

Sustainability presents an additional challenge. This is not just about the carbon emitted in powering AI processors but also the greater amounts of heat they reject. Identifying the most efficient and sustainable methods to cool increasingly dense chip racks is key.

Building a unified power ecosystem with grid-to-chip

Growing awareness of the electricity required to support new generations of larger AI data centers or AI factories is contributing to local, national and international policy discussions. This adds urgency to identifying the most effective strategies for designing and engineering new AI data centers to keep ahead of soaring power demand sustainably.

In response, a grid-to-chip approach is increasingly being applied to how AI data centers are developed. This is a unified, modular and digitally orchestrated power ecosystem that delivers clean, resilient power from the grid all the way to the chip.

The starting point is how the data center is set up to be a good grid citizen. This refers to how a data center draws power from the grid, uses it efficiently, and operates in a way that supports – rather than disrupts – the electrical network.

A key element of being a good grid citizen is the deployment of grid-interactive uninterruptible power supply (UPS) equipment. These systems can respond immediately to changes in grid frequency and help stabilise power supply during disturbances. When this UPS infrastructure, which is constantly sensing and analysing grid performance, is combined with lithium-ion battery energy storage systems (BESS), further benefits are possible. Electricity from the grid or onsite renewables like solar or wind can be stored to provide reserves that are rapidly tapped during peak demand, easing grid congestion. When designed and operated in this way, data centers act as flexible, dispatchable energy assets, enhancing grid reliability while ensuring uninterrupted IT operations.
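The peak-shaving behaviour described above can be sketched in a few lines. This is a minimal illustration, not a vendor algorithm; the grid cap, battery capacity and load profile are hypothetical values:

```python
# Minimal peak-shaving sketch (illustrative values, not a vendor algorithm):
# cap grid draw at a threshold, let the BESS supply the excess,
# and recharge whenever site load falls below the cap.
def peak_shave(load_kw, grid_cap_kw, battery_kwh, capacity_kwh, step_h=0.25):
    """Return (grid draw per interval, final battery state of charge)."""
    grid, soc = [], battery_kwh
    for load in load_kw:
        if load > grid_cap_kw and soc > 0:
            # Discharge to hold grid draw at the cap (limited by stored energy).
            discharge = min(load - grid_cap_kw, soc / step_h)
            soc -= discharge * step_h
            grid.append(load - discharge)
        else:
            # Recharge with available headroom, up to full capacity.
            headroom = max(grid_cap_kw - load, 0)
            charge = min(headroom, (capacity_kwh - soc) / step_h)
            soc += charge * step_h
            grid.append(load + charge)
    return grid, soc

profile = [800, 1200, 1500, 900]   # kW per 15-minute interval (hypothetical)
grid_draw, soc = peak_shave(profile, grid_cap_kw=1000,
                            battery_kwh=500, capacity_kwh=500)
print(grid_draw, soc)   # grid draw never exceeds the 1000 kW cap
```

Note how the 1,500 kW spike is served without the grid ever seeing more than the 1,000 kW cap, which is the "easing grid congestion" effect described above.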

Transitioning to efficient direct current systems

The next stage in this evolution involves changes to how power is distributed within the data center. 

Today, most high-performance computing (HPC) data centers running cloud computing and AI applications receive alternating current (AC) power at the perimeter. Converting that AC to the direct current (DC) the servers require typically involves multiple conversion stages before the power reaches the IT load.

By immediately converting the incoming medium voltage AC grid power into 800V DC, the need for multiple cumbersome conversions is eliminated (Source: NVIDIA). By simplifying the power distribution equipment required, there is also greater system reliability, less heat generated, and an overall improvement in energy efficiency. Changing to DC distribution delivers these operational advantages for AI data centers because the higher voltage brings down current demand, cutting resistive losses and making power transfer more efficient.
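The resistive-loss argument can be shown with a simplified single-conductor model. The resistance and load values here are assumed purely for illustration, and the AC case ignores three-phase effects:

```python
# Simplified I^2·R comparison (assumed values, single-conductor model):
# delivering 1 MW over a busway with 5 mΩ effective resistance.
P = 1_000_000        # watts delivered
R = 0.005            # ohms, assumed effective distribution resistance

for label, volts in [("415 V (AC, simplified)", 415), ("800 V DC", 800)]:
    current = P / volts            # amps: I = P / V
    loss = current ** 2 * R        # watts: I^2 * R
    print(f"{label}: {current:,.0f} A, {loss / 1000:.1f} kW lost")
```

Roughly doubling the voltage halves the current, and because losses scale with the square of the current, resistive losses fall by around a factor of four.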

While a transition to DC distribution may appear complex, it can be implemented through incremental, evolutionary steps. To retrofit DC distribution, new DC “sidecar” technology can be integrated into existing AC infrastructure to deliver 800V DC into the IT cabinets. A longer-term goal would be to implement a medium voltage solid-state transformer (MVSST) at the substation. This converts medium voltage AC from the grid into low voltage DC for distribution to the data center's IT and server infrastructure.

Accelerating deployment with a five-block framework

The need to build new data center capacity for AI is increasing rapidly. The final part of this strategy is modularising all the power, cooling and software management infrastructure to accelerate the delivery and commissioning of new data centers. Each building block can be physically pre-assembled and engineered off site on a factory production line. This modular approach helps address several constraints, including skilled labour shortages in data center engineering. The integrated power, cooling and server infrastructure can be containerised and transported to site, ready to be plugged into the physical structure and infrastructure.

The concept of building data centers in modular blocks starts with on-site generation and microgrids based on various power generation sources such as gas turbines, solar, wind or other power plants, as well as grid connections. In delivering medium voltage power into the data center, this block can be enhanced by integrating both UPS and BESS. Together, these manage the power load drawn from the grid, reducing peak power usage and applying harmonic filtering to improve energy efficiency and protect against equipment malfunctions.

The next two blocks cover the grey and white space areas within any data center. 

The grey space block refers to the back-end infrastructure that takes the power coming from the power generation block. This comprises the switchgear, transformers, breakers and UPS essential to distributing power, as well as delivering uptime during unplanned power events. For an AI data center, this is where MVSST power technology could be built in to provide both AC and DC power for any combination of IT and mechanical loads. The use of solid-state transformers brings advanced control, resiliency and efficiency to the grid interface that is so critical for the sustained high-power demands of AI.

The white space block is the power distribution infrastructure that feeds directly into the servers, specified for hyperscaler deployments of up to 800kW per rack. This could be a direct connection to the MVSST or the DC sidecar cabinet-based technology that enables 800V DC direct-to-chip power in AI data centers.

The fourth building block is cooling and thermal management, including direct-to-chip cooling solutions. Different technologies can be combined, including advanced liquid-to-liquid and liquid-to-air coolant distribution units (CDUs) with cold plate technology to remove heat directly from CPUs and GPUs. These deliver increased cooling capacity and energy efficiency compared to traditional fan-driven, air-cooled systems, as well as lowering noise pollution by reducing reliance on loud server fans.

The final block is the energy management software and services that wrap around all the physical power infrastructure. This provides control, performance insights and predictions of where and when power loads will change. By providing this view of all connected assets, whether operational technology or IT, the operator can achieve their goals around safety, efficiency, resiliency and sustainability while also reducing risk.

Forging the future

The next generation of data centers now in planning and construction will be the foundation for critical services for citizens and businesses in the coming decades. Simply having good access to grid power is clearly not enough if these facilities are to deliver high efficiency, high performance and more than token sustainability.

A unified and intelligent power path – from the utility grid to the advanced AI chips – is increasingly required. In rethinking data center power infrastructure and breaking away from traditional approaches, this new class of AI data centers can operate as an active partner in the community, supporting grid stability and national energy goals on electrification and renewables. 
