Unhackable by design: securing AI data centres at the physical layer

By Michael Vallas, Global Technical Principal, Goldilock Secure.

AI data centres are fast becoming the backbone of the digital economy. They process the most sensitive data, power near-constant workloads and underpin critical services. That makes them at once indispensable and uniquely exposed.

By the end of 2025, more than a third of global data centre capacity is expected to be dedicated to AI workloads, and according to McKinsey, overall demand for capacity is projected to grow by more than 20% a year through 2030. As these environments scale, the attack surface grows with them.

Colocation and hybrid models magnify the risks further, creating more entry points and more opportunity for attackers to move laterally. It’s happening now, with cybercriminals and nation-state actors already zeroing in on these weaknesses. The UK government’s decision to designate data centres as part of the nation’s critical infrastructure underscores just how high the stakes have become. 

And yet, protection models remain skewed toward software-first strategies: cyber defences built from vast, potentially fragile bodies of code trying to keep pace with automated, adaptive attacks. The old approach feels a bit like building a firewall in a burning forest: it misses the bigger picture.

The limits of software-first security

Almost every cybersecurity battle today is still fought in code. We patch, configure and layer on new tools, and attackers respond by finding new cracks. The result is an endless cycle of software versus software, with defenders overstretched.

Firewalls and endpoint security remain vital parts of a layered strategy, of course. But like all software-based tools, they have their limitations. Even a brief compromise in critical environments can disrupt essential services or expose sensitive AI-driven data. When detection means sifting for faint signals across trillions of daily events, delays are inevitable and dangerous.

The case for physical resilience

True resilience means gaining control over the physical pathways that carry data in and out, and the networks that connect critical systems. This is where hardware-enforced isolation comes in.

Physical isolation allows operators to instantly disconnect compute, storage and network segments with secure, out-of-band commands that sit outside the attack surface.

The concept is powerful precisely because it is simple: if malware can’t reach the system, it can’t compromise it. And unlike software-only controls, physical isolation can’t be tampered with remotely. There’s no IP address, no hypervisor dependency, no accessible or exploitable code: just a clean physical break.
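To make the idea concrete, here is a minimal, purely illustrative sketch of what issuing an out-of-band disconnect might look like: a command sent over a dedicated serial line, which has no IP address, to a relay board that physically breaks the network path. The device path, command bytes and acknowledgement byte are all invented for illustration; this is not any vendor’s actual interface.

```python
# Illustrative sketch only: a minimal out-of-band "kill switch" client.
# Assumes a hypothetical relay controller reachable over a dedicated
# serial line (no IP address, no network stack), driven via pyserial.
# The device path and command protocol are invented for illustration.

import serial  # pip install pyserial

SERIAL_DEVICE = "/dev/ttyUSB0"  # dedicated out-of-band serial link (assumption)
BAUD_RATE = 9600

# Hypothetical single-byte commands understood by the relay board.
CMD_DISCONNECT = b"D"  # open the relay: physically break the network path
CMD_RECONNECT = b"C"   # close the relay: restore the physical link

def send_command(cmd: bytes) -> None:
    """Send one command over the serial line and wait for an ACK byte."""
    with serial.Serial(SERIAL_DEVICE, BAUD_RATE, timeout=2) as link:
        link.write(cmd)
        ack = link.read(1)
        if ack != b"A":  # hypothetical ACK byte
            raise RuntimeError(f"relay did not acknowledge command {cmd!r}")

if __name__ == "__main__":
    # Isolate the segment: after this, no packet can traverse the link,
    # regardless of what software on either side has been compromised.
    send_command(CMD_DISCONNECT)
```

The point of the sketch is the transport, not the code: because the command channel is a physical line rather than a network service, an attacker who controls every host on the network still has no way to reach it.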

Critically, this doesn’t mean downtime. Systems can continue running safely in an offline state, maintaining core operations while remaining unreachable to attackers. Organisations can decide when to be connected and when to disconnect, moving from an “always-on” mindset to a risk-aware, resilient model.
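A simple sketch of that risk-aware model might look like the following. The connection window and threat levels are invented for illustration; real policies would be site-specific and tied to the operator’s own monitoring.

```python
# Sketch of a risk-aware connection policy. The window times and threat
# levels below are assumptions for illustration, not a prescribed standard.

from datetime import datetime, time

# Hypothetical: the segment is reachable only during a nightly sync window.
CONNECT_WINDOW = (time(1, 0), time(3, 0))  # 01:00 to 03:00 local time

def should_be_connected(now: datetime, threat_level: str) -> bool:
    """Connected only inside the sync window, and never under elevated threat."""
    if threat_level != "normal":
        return False  # stay physically isolated while any alert is active
    start, end = CONNECT_WINDOW
    return start <= now.time() <= end

if __name__ == "__main__":
    print(should_be_connected(datetime.now(), threat_level="normal"))
```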

Where physical isolation matters most

The value of isolation is clearest in high-stakes environments where speed and certainty matter. In colocation facilities, it prevents cross-tenant spread by cutting off a compromised segment before the threat can move laterally. 
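As an illustration of that containment pattern, the sketch below maps a high-severity alert to a physical disconnect of just the affected segment. The alert shape, segment names and isolate() stub are all assumptions, standing in for a facility’s real monitoring and out-of-band controller.

```python
# Illustrative sketch of alert-driven containment in a colocation facility.
# The Alert shape, segment names and isolate() are assumptions; a real
# deployment would wire this to the facility's monitoring and OOB controller.

from dataclasses import dataclass

@dataclass
class Alert:
    tenant_segment: str  # hypothetical segment identifier
    severity: str        # "low" | "medium" | "high"

def isolate(segment: str) -> None:
    """Placeholder for the out-of-band disconnect sketched earlier."""
    print(f"[OOB] physical disconnect issued for {segment}")

def handle_alert(alert: Alert) -> None:
    # Break the compromised segment's physical link before lateral movement,
    # leaving neighbouring tenants connected and unaffected.
    if alert.severity == "high":
        isolate(alert.tenant_segment)

if __name__ == "__main__":
    handle_alert(Alert(tenant_segment="rack-12-tenant-a", severity="high"))
```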

For enterprise IT, critical administrative systems can be isolated during high-risk operations or when threats are detected, containing potential damage while keeping core business functions running.

At disaster recovery sites, systems can remain physically offline until needed, ensuring clean, uncompromised backups are always available to restore services. 

In cloud and backup environments, selective disconnection ensures ransomware cannot encrypt critical archives. And across AI-heavy workloads, hardware isolation blocks data exfiltration and model tampering, while enforcing strict security boundaries around sensitive processes.
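One way to picture that selective disconnection, again with invented names: the backup vault stays physically offline except for the duration of a replication job, and the disconnect is guaranteed even if the job fails partway through.

```python
# Sketch: keep a backup vault physically offline except during replication.
# connect(), isolate() and replicate_archives() are illustrative stubs
# standing in for the out-of-band commands sketched above, not a vendor API.

from contextlib import contextmanager

def connect(segment: str) -> None:
    """Placeholder: issue the out-of-band 'close relay' command for a segment."""
    print(f"[OOB] physical connect issued for {segment}")

def isolate(segment: str) -> None:
    """Placeholder: issue the out-of-band 'open relay' command for a segment."""
    print(f"[OOB] physical disconnect issued for {segment}")

@contextmanager
def briefly_connected(segment: str):
    """Connect a segment for the duration of a task, then always disconnect."""
    connect(segment)
    try:
        yield
    finally:
        isolate(segment)  # guaranteed, even if the task raises

def replicate_archives() -> None:
    """Placeholder for a hypothetical backup replication job."""
    print("replicating archives to the vault...")

if __name__ == "__main__":
    # The vault is reachable only inside this block; ransomware running
    # elsewhere cannot touch it before or after.
    with briefly_connected("backup-vault"):
        replicate_archives()
```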

The bottom line

As AI becomes embedded in everything from healthcare diagnostics to financial systems to national security, the infrastructure behind it must be absolutely trusted. And here's the thing: complete trust doesn't come from adding more layers of software. It comes from designing resilience into the system itself.

The NCSC has already urged organisations to build in the capability to fully disconnect critical systems from networks: a clear signal that such measures could soon become regulatory requirements. Taking that step now strengthens defences today and positions organisations ahead of the regulation likely to follow.

All of this comes down to protecting the backbone that modern society increasingly depends on. When AI systems control everything from power grids to medical devices, a single breach can threaten public safety, disrupt economies and undermine national security.

In the end, it comes down to this: do we keep chasing patches in a game we can’t win, or do we also build defences designed to hold no matter how the threat evolves?
