There’s plenty to consider when preparing your enterprise to embrace the full potential of AI. For financial services and fintechs, security, oversight, and compliance are always top of mind, whatever new technology is concerned. This is certainly the case with agentic AI, which introduces both powerful capabilities and new risks. Read on to discover some of the specific threats and mitigation strategies tied to agentic AI for financial services. We’ll also look at how API governance plays a critical role in reducing the risks that come with the agentic AI landscape, and why it is a cornerstone of any financial enterprise’s AI strategy.
Agentic AI is here to stay
Agentic AI systems bring together the benefits of large language models (LLM), machine learning and natural language processing with AI agents with autonomous capabilities. In simple terms, a human or machine user tasks the AI agent with achieving something. The agent then decides how to achieve the task and makes it so. Interacting with other systems via APIs, the agent takes an autonomous route to achieving its goal, including delegating tasks to other agents, if it decides that is necessary.
From copilots to autonomous agents
Copilots, such as those you might find embedded in CRM systems or productivity suites, are a form of assistive AI. They can suggest actions and synthesize information, but always under direct human supervision. They can do much to enhance efficiency but fundamentally remain tools for human use. Accountability clearly rests with the user, and the scope of the copilot is neatly bounded.
Agentic AI is different. AI agents have the autonomy to initiate actions on their own, muddying the clear distinction between the user as the actor and AI as the tool. This is where the operational and security-related complexity starts mounting – and the associated risks.
High stakes: Data sensitivity, regulatory pressure
The increased independence of AI agents, their access to multiple systems and their decision-making capabilities around how to achieve things all raise significant questions for the financial services sector, where traceability and compliance are so crucial. With AI agents, accountability becomes blurred, as does the ability to trace how and why decisions were made. Hardly ideal for an environment where precision and clarity are essential, particularly given the lack of continuous human oversight.
Given the sensitive nature of the data that financial services and fintechs handle, the idea of an opaque series of AI agents making decisions on their own represents a transformation of risk, rather than a mere amplification of it. With the pressures of intense regulatory scrutiny on top, it means that financial services enterprises must put strategies in place for mitigating these new risks.
Security risks unique to agentic AI
Bad actors already frequently target financial services’ API infrastructures. Akamai’s 2024 API Security Impact Study reports that financial services organizations are targeted more frequently than those in other industries. In the previous 12 months, it reports that 88.7% of financial services companies experienced an API security incident, with an average financial impact of $832,800.
These sobering figures highlight the breadth of the threats that financial firms face, even before factoring in the risks of agentic AI. We can break these risks down into three main areas:
- Autonomy and unintended actions
- Prompt injection and goal misalignment
- Cross-system access via APIs
Let’s consider each of these in turn.
Autonomy and unintended actions
Autonomous AI agents can achieve goals efficiently, powering innovation and delivering results at lightning speed. They can also misinterpret goals, take actions that can harm your business, perform operations without adequate legal or financial safeguards and trigger cascading effects across systems based on inadequate data and flawed assumptions.
Just as a traditional system has strict user roles, permissions, approvals, and checkpoints to mitigate risk, these need to be intentionally designed into agentic AI to mitigate such risks. More on that in a moment.
An example of agentic AI’s autonomy resulting in unintended actions could be an AI agent that you’ve given permission to rebalance portfolios. That agent could overreact to market signals, deciding to liquidate assets prematurely, resulting in reputational fallout for your business, and potential compliance headaches.
Prompt injection and goal misalignment
With many agentic AI systems relying on natural language instructions from human users or other systems, there is scope for malicious input to alter the AI’s behavior. As such, you’ll need defenses in place against prompt injection attacks, to prevent attackers gaining unauthorized data access and undertaking unauthorized tasks.
You also need to guard against goal misalignment. AI agents can be very literal in their interpretation of their goals. This opens them up to conflicts with your wider business intent and policies. Task an AI agent in a customer service role with resolving tickets as quickly as possible, for example, and it might decide to refund all customers who’ve complained automatically, to close their tickets. It could even simply close all tickets.
Mitigating such risks requires you to pay careful attention to nuance and context in the way you manage your agentic AI, as well as its security defenses.
Cross-system access via APIs
Agentic AI can use APIs to perform a wide range of actions, from querying databases and updating customer records to initiating wire transfers and provisioning new services or infrastructure. It’s transformative – and risky. A single buggy agent has the potential to do significant damage to your business, further complicated by the fact that it could be hard to trace what it has done and why, given the sometimes opaque nature of agentic AI decision-making.
With AI agents able to move laterally between systems, spin up shadow APIs, bypass observability and potentially bypass security on APIs designed originally for human use, least-privilege design and API-level governance become fundamental requirements.
Compliance, accountability, and auditability
To mitigate the new risks of agentic AI, it’s essential to align AI actions with human intent. You’ll need to embed human-centric constraints, define clear and bounded objectives, implement role-based access permissions, test continuously for misalignment, and log everything. All with human oversight embedded.
Robust API management sits at the heart of this. Indeed, API management equates to AI readiness. With it, you can enforce granular access controls, ensuring that your AI agents only interact with the data and services you permit them to. You can also enforce policies for security protocols and compliance with business goals and regulatory requirements, using centralized API gateways to ensure auditability.
API management also empowers you to implement real-time monitoring and throttling. This is crucial for detecting and containing unintended agent behavior. It enables you to put the brakes on, find out what is driving the unexpected behavior, and take remedial action.
Using API management to strengthen agentic AI security in this way helps protect your business operationally, financially, and reputationally. It also makes it easier to meet financial regulators’ expectations, with greater explainability and logging providing much-needed visibility.
Why API-first security matters
Implementing robust API governance means you can take an API-first approach to security, with the benefits of that flowing from your APIs to your AI systems. It enables you to gatekeep data, services, and decision engines, clearly auditing permissions and access. Using role-based access control, tokenization, and traffic inspection, your API-first security becomes the front line of defense for your agentic AI.
Learn more about Tyk AI studio.