Artificial intelligence is at a critical phase: interest is high, and so are expectations. Financial firms are allocating significant budget to AI initiatives, but they also expect clear returns on investment. While new models generate excitement, real transformation depends on successful deployment.
This is far from guaranteed. Creating infrastructure that successfully embeds AI technology isn’t as easy as simply putting large language models on top of an existing technology stack. Strategy, finesse and careful planning are required.
However, this dynamic isn't entirely unfamiliar. I believe lessons from the cloud transition of the early 2010s can offer valuable guidance for effective modern-day AI implementation.
The Deployment Challenge
The technical complexity of deploying AI in financial services cannot be overstated. Security concerns dominate every conversation, infrastructure requirements are daunting, and there's a significant shortage of specialised skills combining AI expertise with deep financial services knowledge.
As organisations adopt agentic AI systems, underlying architecture becomes more fragmented and harder to govern. Systems interact with multiple platforms, making decisions based on evolving contexts. To effectively maintain governance, sophisticated monitoring frameworks are required to track and audit decisions made across platforms. That level of oversight introduces a new layer of operational complexity — one that fundamentally reshapes how IT departments need to think about architecture and control.
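To make that monitoring requirement concrete, here is a minimal, hypothetical sketch of a cross-platform decision audit trail. The class, field names and example agents are illustrative assumptions only, not any specific vendor's API; a production framework would also capture model versions, prompts and downstream effects.

```python
import time
import uuid


class DecisionAuditLog:
    """Append-only record of agent decisions for later review.

    Illustrative sketch only: real governance tooling would add
    tamper-evidence, retention policies and access controls.
    """

    def __init__(self):
        self._records = []

    def record(self, agent, platform, action, context):
        # Capture who decided what, where, and on what basis.
        entry = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "agent": agent,
            "platform": platform,
            "action": action,
            "context": context,
        }
        self._records.append(entry)
        return entry["id"]

    def audit_trail(self, agent=None):
        # Filter by agent so a reviewer can reconstruct everything
        # a single agent did across every platform it touched.
        return [r for r in self._records
                if agent is None or r["agent"] == agent]


# Hypothetical usage: one agent acting across two platforms.
log = DecisionAuditLog()
log.record("pricing-agent", "trading-platform", "requote", {"spread_bps": 4})
log.record("pricing-agent", "crm", "flag-client", {"reason": "stale data"})
print(len(log.audit_trail("pricing-agent")))  # prints 2
```

Even a simple structure like this shows why oversight gets harder: the audit trail now spans systems that were previously governed independently.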
The Knowledge Gap Problem
Conversations with clients reveal widespread uncertainty around deployment models, as even established firms can't articulate what’s appropriate or realistic. There's a troubling mismatch between expectations and reality, and AI is often sold as a shortcut to dramatic operational overhaul, using off-the-shelf tools as simple solutions to complex business challenges.
Organisations exist at very different stages of readiness. While mindsets have shifted toward cloud adoption over the past decade, deployment maturity still varies widely across the industry.
Learning from Cloud Computing
The parallels to the early 2010s cloud boom are evident. Back then, the push was to replace on-premises systems wholesale with cloud-native infrastructure. But, in reality, enterprise processes, security frameworks and in-house expertise were fundamentally geared toward on-premises deployments. Cloud was never a quick technical migration; it demanded a complete paradigm shift.
While there was often strong leadership support for cloud initiatives, actual transformation had to filter slowly through every organisational level to create meaningful change. Many firms tried applying existing on-premises thinking to cloud environments, rather than embracing the fundamentally different approaches that cloud enabled. As a result, a large knowledge gap formed: one report estimated that 1.7 million cloud-related roles went unfilled in 2012 because applicants lacked the necessary training, certification and skills.
It makes sense that today's senior technologists are approaching AI with caution. These professionals were at the coalface during the cloud transition and understand that successful adoption requires more than enthusiasm and budget. A Bloomberg survey from August 2024 is promising: 66% of CIO respondents were already deploying AI copilots, up from 32% in January 2024. This suggests we may have learnt from the knowledge gap of the cloud boom, and that firms are rethinking established approaches in order to integrate modern AI systems effectively.
The Questions We're Not Ready to Ask
Today's AI adoption faces similar challenges around paradigm shifts. Many current questions and processes simply won't work in the context of modern AI systems.
Take explainability as an example. Traditional risk frameworks demand detailed explanations for decisions. But an AI agent can't 'explain' why it generated a specific response in the traditional sense. It might cite main sources, but that's fundamentally different from the step-by-step logical reasoning that governance frameworks expect.
And herein lies the problem: we're asking horse-based questions about a car. Legacy governance frameworks weren't built for agentic systems. They demand a form of explainability that doesn't map to how AI actually works. Strong leaders need a willingness to accept that cherished assumptions about technology may no longer apply.
A Measured Path Forward: Avoiding Cloud-Era Mistakes
Every technological revolution needs early adopters, but early adoption doesn't require recklessness. AI adoption can be measured, thoughtful, and iterative, working with expert partners who understand both the technology and financial services requirements.
Success requires three critical factors. First, use cases must align with genuine business priorities rather than pursuing AI for its own sake. The most successful implementations solve real operational pain points where AI delivers measurable value.
Second, organisations must ensure infrastructure and talent readiness before large-scale deployments. This means investing in cloud-native platforms that support AI workloads effectively, building internal capabilities, and establishing partnerships with experienced providers.
Third, don’t repeat the cloud-era mistake of dragging new technology into old environments. AI doesn’t run well on legacy infrastructure — and trying to make it fit wastes time and undermines results.
Cloud taught us what happens when technology moves faster than understanding. We shouldn’t need to learn that twice.