How Data Centres Are Adapting to the Demands of AI

Artificial intelligence (AI) workloads are already transforming data centres, and at speed. From power and cooling to deployment timelines and security, the infrastructure beneath modern compute is being rethought. But for many operators, that pace is creating pressure points across the board. We spoke with Jon Abbott, Technologies Director, Global Strategic Clients at Vertiv, about how the infrastructure conversation is shifting, what risks are emerging, and where operators are finding smarter ways to keep up.

Q: What’s the biggest pressure AI is putting on data centre infrastructure right now?

It’s the gap between how fast AI is scaling and how slowly the supporting infrastructure can respond. Compute teams are pushing for denser, more powerful systems that can be stood up quickly. But physical infrastructure still has long lead times, not just for procurement, but for power upgrades, cooling integration and site-level planning.

The result is a growing number of operators realising that what worked two years ago no longer matches the demands coming their way. AI isn’t something that can be layered onto existing systems without consequence. It changes the shape, the speed and the risk profile of the entire stack.

Q: How is cooling infrastructure adapting to AI-scale deployments?

Precision and flexibility are both essential. AI workloads are highly dense and thermally intense, which pushes air-based systems to their limits. Liquid cooling, whether rear-door or direct-to-chip, is no longer a future consideration. It’s being specified today.

What we’re seeing now is a shift to hybrid environments. Air continues to handle lower-density and peripheral equipment, while liquid cooling systems target the highest density loads. But integrating those two modes introduces complexity. There’s a greater need for synchronised monitoring, fluid network management, and clear maintenance protocols that align with new risk profiles.
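To make that synchronised monitoring concrete, here’s a minimal sketch of what a cross-loop check might look like: it correlates air-side inlet temperatures with liquid-side supply/return deltas and flags racks where the two tell conflicting stories. The thresholds, field names and sample data are illustrative assumptions, not Vertiv specifications.

```python
from dataclasses import dataclass

@dataclass
class RackReading:
    rack_id: str
    air_inlet_c: float      # air-side inlet temperature
    coolant_delta_c: float  # liquid-side supply/return temperature delta

# Hypothetical thresholds; real limits come from the equipment vendor.
AIR_INLET_LIMIT_C = 27.0     # ASHRAE-style recommended inlet ceiling
COOLANT_DELTA_MIN_C = 4.0    # a very small delta suggests low heat pickup

def check_hybrid_cooling(readings: list[RackReading]) -> list[str]:
    """Flag racks where the air and liquid loops disagree."""
    alerts = []
    for r in readings:
        hot_air = r.air_inlet_c > AIR_INLET_LIMIT_C
        weak_liquid = r.coolant_delta_c < COOLANT_DELTA_MIN_C
        if hot_air and weak_liquid:
            # Air side is hot while the liquid loop is barely absorbing
            # heat: a classic sign the two systems are out of sync.
            alerts.append(f"{r.rack_id}: air inlet {r.air_inlet_c}°C with "
                          f"coolant delta {r.coolant_delta_c}°C - check flow balancing")
    return alerts

if __name__ == "__main__":
    sample = [
        RackReading("rack-a01", air_inlet_c=24.5, coolant_delta_c=6.8),
        RackReading("rack-a02", air_inlet_c=29.1, coolant_delta_c=2.3),
    ]
    for alert in check_hybrid_cooling(sample):
        print(alert)
```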

Cooling infrastructure is increasingly tied to workload reliability and performance, so it’s a critical system that must be tuned, monitored and managed with the same level of oversight as compute and power.

Q: Is there a typical infrastructure model that’s emerging, or is every site different?

The pressure to move faster is making prefabrication and modularity more attractive. Operators want systems that arrive ready to deploy, reduce variables on site, and integrate power and cooling into a known, tested unit.

There’s also more joint planning across disciplines, and with third parties such as technology partners, integrators and utilities. Power, thermal and software teams are sitting down earlier to map out system behaviour - not just specifications. And the more AI becomes business-critical, the more that kind of coordination becomes non-negotiable.

You still see variation between colocation, hyperscale and enterprise sites, but the trend is clear: integration over isolation, speed over customisation, visibility over assumptions.

Q: Are operators rethinking power in response to AI too?

Definitely. Higher rack power density changes everything - from uninterruptible power supply (UPS) provisioning to power distribution design. But it’s also about behaviour. AI inference workloads, for instance, can cause sudden swings in power draw. So power systems need to respond in real time, not just hold steady under flat loads.
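As an illustration of that real-time requirement, the sketch below runs a simple ramp-rate check over a power telemetry feed and raises an alarm when load swings exceed a tolerated rate of change. The thresholds, sampling rate and simulated feed are assumptions made for the example.

```python
from collections import deque

# Hypothetical values; real limits depend on UPS and distribution design.
RAMP_LIMIT_KW_PER_S = 50.0   # max tolerated rate of change per feed
WINDOW_S = 5                 # sliding window of recent samples (1 Hz assumed)

class RampDetector:
    """Flags power ramps steep enough to need battery/UPS smoothing."""
    def __init__(self):
        self.samples = deque(maxlen=WINDOW_S)

    def push(self, power_kw: float) -> bool:
        self.samples.append(power_kw)
        if len(self.samples) < 2:
            return False
        # Average ramp rate across the window, in kW per second.
        ramp = (self.samples[-1] - self.samples[0]) / (len(self.samples) - 1)
        return abs(ramp) > RAMP_LIMIT_KW_PER_S

if __name__ == "__main__":
    detector = RampDetector()
    # Simulated feed: steady load, then an inference burst.
    for kw in [400, 405, 410, 700, 950, 960]:
        if detector.push(kw):
            print(f"Ramp alarm at {kw} kW - engage battery buffering")
```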

Beyond resilience, there’s growing interest in how power infrastructure can provide operational and financial flexibility. That includes grid participation, demand response and integrating battery assets to manage volatility. In Europe, especially, these capabilities are becoming a serious part of long-term planning.

Q: What operational risks are going unnoticed in some AI deployments?

Hybrid cooling - integrating air and liquid systems so they operate seamlessly - can be challenging if not managed properly. It demands careful planning and collaboration between IT, facilities and power teams, supported by specialist expertise. If the airflow and liquid loops aren’t coordinated, both efficiency and performance suffer.

Another issue is inconsistent commissioning. When upgrades are layered onto legacy infrastructure under time pressure, systems often drift out of sync. That drift can show up as mismatched firmware, overlapping alarms, or gaps between facility-level and IT-level telemetry.
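One way teams catch that kind of drift is an automated consistency check over the asset inventory. The sketch below flags units whose firmware differs from the most common version in the fleet; the device records, model names and field names are hypothetical.

```python
from collections import Counter

# Hypothetical inventory snapshot; in practice this would come from a
# DCIM or asset-management export.
devices = [
    {"id": "pdu-01", "model": "PDU-X", "firmware": "2.4.1"},
    {"id": "pdu-02", "model": "PDU-X", "firmware": "2.4.1"},
    {"id": "pdu-03", "model": "PDU-X", "firmware": "2.1.0"},  # drifted
    {"id": "cdu-01", "model": "CDU-Y", "firmware": "1.8.2"},
]

def firmware_drift(devices):
    """Per model, flag units not running the fleet's most common firmware."""
    by_model = {}
    for d in devices:
        by_model.setdefault(d["model"], []).append(d)
    findings = []
    for model, units in by_model.items():
        baseline, _ = Counter(u["firmware"] for u in units).most_common(1)[0]
        for u in units:
            if u["firmware"] != baseline:
                findings.append(f'{u["id"]}: {u["firmware"]} != baseline {baseline} ({model})')
    return findings

for finding in firmware_drift(devices):
    print(finding)
```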

These risks can be managed, but only if integration is planned from the start, and operations teams have access to full-stack visibility.

Q: Is physical security evolving alongside this shift?

Yes, particularly in sites supporting AI model training or housing high-value infrastructure. Operators are revisiting access control, real-time surveillance, and rack-level authorisation. There’s more interest in role-based policies that align with operational states - so access privileges can be time-bound or workload-specific.

In automated or lightly staffed environments, the integration between physical security and operational telemetry becomes even more important. If a door opens outside an expected maintenance window, that should generate the same level of response as a power fault. Security can’t sit outside the main operational workflow anymore.
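A minimal version of that correlation might look like the sketch below: door-access events are classified against expected maintenance windows, and anything outside a window is escalated at the same severity as a facility fault. The window data, door IDs and severity labels are illustrative assumptions.

```python
from datetime import datetime, time

# Hypothetical maintenance windows per door; real data would come from
# the change-management system.
MAINTENANCE_WINDOWS = {
    "door-dh3": [(time(2, 0), time(4, 0))],  # 02:00-04:00 daily
}

# Assumed severity scale, shared with facility alarms.
SEV_CRITICAL = "critical"   # same tier as a power fault
SEV_INFO = "info"

def classify_door_event(door_id: str, opened_at: datetime) -> str:
    """Treat an out-of-window door opening like a power fault, not a log line."""
    for start, end in MAINTENANCE_WINDOWS.get(door_id, []):
        if start <= opened_at.time() <= end:
            return SEV_INFO  # expected maintenance access
    return SEV_CRITICAL     # unexpected access: page the on-call team

if __name__ == "__main__":
    print(classify_door_event("door-dh3", datetime(2025, 1, 10, 3, 15)))   # info
    print(classify_door_event("door-dh3", datetime(2025, 1, 10, 14, 2)))   # critical
```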

Q: How are sustainability and regulatory expectations influencing infrastructure decisions now?

Sustainability strategy is no longer an isolated workstream. It’s increasingly tied to risk management, cost control, and even market access. Operators are now expected to demonstrate how infrastructure choices align with energy efficiency goals, emissions targets and, in some regions, mandatory reporting requirements.

Cooling systems are being evaluated not just on effectiveness, but on refrigerant type, lifecycle emissions and water use. Power procurement strategies are shifting to include cleaner sources, on-site generation and more intelligent load balancing. Even material selection and equipment disposal are under review.

These changes are being driven by more than ESG pressure. In many cases, they’re becoming prerequisites for securing energy connections, local planning approval or customer contracts. For forward-looking operators, this is prompting a more integrated approach where performance, resilience and sustainability are designed together from the outset.

Q: What defines a future-ready data centre in this environment?

Adaptability. The data centre that succeeds won’t necessarily be the one with the most power or space - it’ll be the one that can change direction quickly, bring new capacity online fast, and manage it intelligently.

That means tighter integration between systems, more use of modular and prefabricated components, and operational visibility that extends from the power room to the workload layer. It also means rethinking risk - not only in terms of uptime, but in terms of latency, energy cost, security, and speed to deploy.

AI is a demanding tenant. Infrastructure has to evolve to match that expectation - not in the abstract, but right now.
