Most organisations are now acquainted with the idea of AI agents – autonomous systems that ingest information, make decisions and take actions to achieve specific goals.
AI agents aren’t part of some far-away future – they’re already here. 82% of organisations are using AI agents today, often across multiple areas of the business. These agents aren’t sentient, but they aren’t passive tools either. They are ‘humanish’, often independently handling actions that were once reserved for human oversight.
However, whilst AI agents offer businesses incredible opportunities for efficiency, the risks introduced by these tools can easily outweigh the benefits if they are not managed correctly. From accessing sensitive systems to sharing privileged data without authorisation, the potential consequences of AI agents running riot in the enterprise could be devastating. And, concerningly, fewer than four in ten organisations are governing AI agents, even as adoption continues to skyrocket.
So, what’s needed to stop AI agents from ‘going rogue’? Organisations need to manage agents with the same level of oversight as their human counterparts. Adaptive, real-time access controls have emerged as a non-negotiable for enterprises looking to capitalise on all the benefits automation has to offer, without losing visibility of security or compliance.
Controls don’t slow innovation – they enable it
AI agents can operate independently and learn, adapt, and interact in ways that are hard to predict. Without strong governance, they can introduce serious vulnerabilities into even the most secure environments. That’s not to say businesses shouldn’t be leveraging AI agents, but they do need to put controls in place to keep their new ‘digital workforce’ in check. Think of it like brakes on a race car: they’re not there to slow you down unnecessarily, but to give you the control you need when navigating a difficult course at high speed.
At the moment, many businesses are ‘driving the car’ at breakneck speed, without working brakes. The result? AI agents are spinning out of control – 80% of organisations report that their AI agents have already performed unauthorised actions, including accessing and sharing sensitive information. And, despite the vast majority of tech leaders (92%) recognising that AI agent governance is crucial to enterprise security, only 44% have implemented relevant policies. Beyond regulatory compliance issues, this creates vulnerabilities across the whole supply chain – employees, partners and customers with system access may receive inaccurate information or, more dangerously, expose credentials or other data that plays into the hands of malicious actors.
Behaviour-based governance
With 98% of companies planning to expand AI agent deployments in the next year, enterprises will only become more dependent on this extended digital workforce over the next decade. This explosion of non-human identities, coupled with increasingly sophisticated cyber threats, demands a more adaptive approach. In the past, a ‘castle and moat’ approach to security was sufficient: SOC teams monitored what was happening on each endpoint, and their job was essentially to protect the perimeter. Now, vulnerabilities can easily explode outwards from within the business itself if agents are left free to move laterally within networks.
Organisations need to approach AI agent access rights the same way they approach human ones: governing each agent according to its own behaviours and risks. Next-gen identity security tools enable businesses to roll out contextual, precise and adaptive access control policies, where access is purposefully granted when appropriate – and aggressively revoked when not.
Imagine an AI agent in the financial sector. It could handle an entire loan origination process – aggregating financial data, analysing credit history, preparing terms, facilitating underwriting, and communicating with stakeholders. The efficiency is remarkable, but the risks are significant: without proper controls, that same agent could misinterpret data, approve high-risk loans, or inadvertently expose customer information, triggering compliance violations or reputational damage.
Businesses can avoid this sort of risk by ensuring that agents can only access selected records or information relevant to a particular case. Through a custom role and profile, the agent would be granted temporary access to records, with that access expiring on task completion. To minimise risk, the agent could also be denied administrative system privileges – for example, access to internal audit logs, executive dashboards or regulatory compliance reports. A contextual, adaptive approach to identity ensures AI agents are continuously monitored, and that their access rights are updated as their roles, behaviours and risk profiles evolve.
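To make the idea concrete, here is a minimal sketch of what a time-boxed, case-scoped grant could look like in code. All the names here (`AccessGrant`, `grant_for_case`, the resource labels) are illustrative assumptions, not the API of any particular identity product:

```python
from dataclasses import dataclass

# Resources an agent should never hold, even if requested –
# mirroring the administrative privileges mentioned above.
ADMIN_RESOURCES = {"audit_logs", "executive_dashboard", "compliance_reports"}


@dataclass(frozen=True)
class AccessGrant:
    """A temporary, case-scoped grant for one agent identity."""
    agent_id: str
    resources: frozenset
    expires_at: float

    def allows(self, agent_id: str, resource: str, now: float) -> bool:
        # Deny anything outside the case scope, after expiry,
        # or presented by a different agent identity.
        return (
            agent_id == self.agent_id
            and resource in self.resources
            and now < self.expires_at
        )


def grant_for_case(agent_id: str, case_records: set,
                   ttl_seconds: float, now: float) -> AccessGrant:
    # Strip administrative resources from the request, then
    # issue a grant that expires after the task window.
    scoped = frozenset(case_records) - ADMIN_RESOURCES
    return AccessGrant(agent_id, scoped, now + ttl_seconds)
```

In the loan example, the agent would receive a grant covering only the records of the case it is working, with administrative resources filtered out and the whole grant lapsing once the time-to-live passes.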
Enter the era of ‘adaptive identity’
In 2026, a static approach to access just doesn’t cut the mustard. Businesses that adopt AI agents before systems are in place to keep track of non-human identities could face an ‘identity explosion’ and introduce serious vulnerabilities into the organisation.
Proper governance now means tracking every AI agent’s access to privileged data, enforcing approval workflows before granting or expanding access, and assigning clear ownership. A more flexible, adaptive approach will ensure that autonomous systems are adopted responsibly, and leaders can capitalise on the efficiencies these tools offer without sacrificing the security of the wider business.