Financial Services Risk Being Overwhelmed by Agentic AI Wave With One in Three Firms Lacking Agent Oversight
TrendAI™, the global leader in AI cybersecurity, today released new data from a global study* revealing a growing governance gap across financial services, as organisations accelerate adoption of AI agents without the visibility or control required to manage them securely.
Like other industries, finance,
At the same time, adoption pressure is intensifying. More than two thirds (68%) of organisations say they have been pushed to approve AI implementations over the past 12 months despite security concerns, with one in seven (15%) describing those concerns as “extreme” but overridden to keep pace with competitors and internal demand.
“Financial services firms are not short of awareness when it comes to AI risk, but awareness alone is not control,” said Bharat Mistry, Field CTO at TrendAI. “What we are seeing is a widening gap between how quickly AI is being deployed and how well it is being governed. That gap is where risk lives and it’s a problem that’s getting worse in light of increased interest in, and uptake of, agentic AI tools.”
Sensitive data exposure tops AI agent concerns for FS firms
Confidence in autonomous AI remains fragile, particularly where systems are granted access to sensitive financial data or decision-making authority.
Four in ten (40%) FS organisations cite the accessing of sensitive data as their primary concern attached to using AI agents. Closely following in second, 34% point to an expansion of the cyber attack surface, while nearly a third (32%) highlight the risks associated with potential abuse of trusted AI status and autonomous code execution.
Awareness of emerging but critical attack techniques, on the other hand, remains limited. Just three in ten (30%) organisations recognise that malicious prompts could compromise AI systems – despite the fact that prompt injection is establishing itself as one of the most common methods used to manipulate or “jailbreak” AI agents.
“Agentic AI changes the equation,” Mistry added. “These systems are not just supporting decisions, they are taking action. Without visibility, auditability and clear control mechanisms, organisations are effectively handing over authority without accountability.”
AI adoption outpaces governance in a highly regulated sector
The research highlights a clear disconnect between deployment and control that acts as backdrop for any deployment of AI agents. Only a third (32%) of FS organisations report moderate confidence in their understanding of the legal frameworks governing AI, signalling widespread uncertainty in a sector defined by regulation.
Governance maturity remains low. Fewer than a quarter (21%) of organisations have comprehensive AI policies in place, with many still in development. Meanwhile, 44% cite unclear regulation or compliance standards as a barrier to progress.
In practice, this means AI is being embedded into operations before accountability, oversight and audit structures are fully established. As agentic systems take on more autonomous roles, the risks associated with this gap become more acute.
Control mechanisms remain unresolved as autonomy increases
As financial services organisations move towards greater AI autonomy, consensus on how to retain control is still lacking.
While 40% support the introduction of AI “kill switch” mechanisms to shut down systems in the event of failure or misuse, nearly half (46%) remain uncertain.
“This lack of alignment points to a deeper issue,” Mistry concluded. “Organisations are deploying increasingly powerful AI systems without a shared understanding of when, or how, human intervention should take place when it matters most.”
*TrendAI commissioned SAPIO Research to survey 3700 IT and business decision makers across 23 countries globally with 407 responses from IT and Business Decision Makers working in Finance, Insurance and




