The rise of Agentic AI – what McKinsey calls the “next frontier of generative AI” (GenAI) – marks a seismic shift in the future of AI and its related threats.
Here’s why. While GenAI mostly supports human decision-making through predictive capabilities, Agentic AI acts as its own independent “agent”: a technology that can think, pursue complex goals, and make autonomous decisions without waiting for instructions.
The ability to make its own decisions is key to the benefits – and dangers – of Agentic AI.
If AI achieves Agentic abilities, according to Malwarebytes’ 2025 State of Malware Report, “novel and boundless attacks could be within reach,” taking social engineering scams to new levels.
Social engineering and deepfakes have already created an environment in which you can’t trust that your agents are talking to the “right” human.
Agentic AI will mean you won’t be able to trust that the “person” you’re talking with is a human at all.
To protect your contact center against the incoming wave of Agentic AI-powered attacks, start by understanding what Agentic AI is, and how it can be weaponized against contact centers. Then, research the defensive tools and tactics you have at your disposal. But use your understanding of the threat itself to disqualify products that look promising but fail in practice.
How Agentic AI Works
While GenAI enhances existing capabilities through tools like ChatGPT, Agentic AI introduces entirely new ones. Where ChatGPT acts as an assistant, Agentic AI is more like a virtual peer or colleague.
One example of how sophisticated these fully automated Agentic AI agents have become is the new “super connector” voice assistant Boardy AI. Here’s how Boardy AI works:
- A user provides their phone number to an organization and then receives a call from Boardy.
- Based on the conversation, Boardy then searches the Boardy network to find investors or customers who may be a good networking fit and offers to introduce them via a “double opt-in introduction.”
- Once accepted, Boardy connects the parties via email.
AI agents can do many helpful things for individuals and companies, from planning a complex travel itinerary to handling difficult customer queries across multiple channels. But as I will outline, in the hands of a threat actor, Agentic AI’s real-time adaptability and hyper-realistic human-like voices are potent weapons.
Agentic AI Threats
Contact center leaders should remember that, just like GenAI and AI more generally, Agentic AI is a tool. As such, what matters is what it can do, and where and how it’s used by bad actors.
GenAI helps fraudsters refine their existing techniques. ChatGPT, deepfake generators, and other GenAI tools make it easy to create convincing phishing attacks and realistic deepfake impersonations. Agentic AI makes it easy for fraudsters to scale their attacks; imagine an army of AI agents deployed to nefarious ends. Combined, Agentic AI and GenAI deepfakes give fraudsters new “superpowers.”
To protect against Agentic AI threats, start by understanding its current and near-future capabilities. For example, for many years now fraudsters have been selling automated Phishing-as-a-Service (PhaaS) bots. The latest generation of Agentic AI can be thought of as combining PhaaS with GenAI deepfake technology.
Deepfake AI agents can impersonate employees, executives, and customers with a level of realism that’s virtually impossible for a human to detect, even with awareness training. The Federal Communications Commission (FCC) has explicitly warned that “deepfake audio and video links make robocalls and scam texts harder to spot.”
But Agentic AI goes further and deeper. A recent study found that Agentic AI can convincingly “replicate the attitudes and behaviors” of humans. For example, Agentic AI could create personalized, automated phishing attacks using legitimate customer interaction histories from call centers. Here’s how:
- An AI agent might call customers and impersonate your organization to build trust and persuade them to disclose their credentials.
- If a phishing call or email is ignored, the AI might follow up on its own with a more urgent tone.
This scalable scam turns isolated fraud attempts into a systemic threat, overwhelming contact center fraud detection systems. And because Agentic AI has memory, it can learn to improve its phishing approaches or messages based on prior interactions with targets.
Worse yet, we’ve even seen Agentic AI using its own language. During a demo at a recent hackathon, one AI agent calls another to make a hotel reservation. The two AI agents recognize each other and switch to “jibber,” a high-frequency communication that’s 80% faster than speech yet sounds like an old-school dial-up modem.
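To give a feel for why speech-speed limits disappear once machines talk to machines, here’s a toy frequency-shift-keying (FSK) sketch in Python. To be clear, this is not the demo’s actual protocol; it’s just an illustration of how arbitrary bytes can ride on audio tones, modem-style, and every parameter below is invented:

```python
import numpy as np

# Toy frequency-shift keying (FSK): each byte is split into two 4-bit
# nibbles, and each nibble selects one of 16 pure tones. All parameters
# are invented for illustration; this is not the demo's real protocol.
SAMPLE_RATE = 16_000            # audio samples per second
SYMBOL_SECONDS = 0.02           # 20 ms of audio per nibble
BASE_HZ, STEP_HZ = 2_000, 200   # 16 tones spanning 2.0 to 5.0 kHz

def encode(message: bytes) -> np.ndarray:
    """Turn bytes into a waveform of back-to-back sine-tone symbols."""
    t = np.linspace(0, SYMBOL_SECONDS, int(SAMPLE_RATE * SYMBOL_SECONDS), endpoint=False)
    symbols = []
    for byte in message:
        for nibble in (byte >> 4, byte & 0x0F):   # high nibble, then low
            symbols.append(np.sin(2 * np.pi * (BASE_HZ + STEP_HZ * nibble) * t))
    return np.concatenate(symbols)

msg = b"BOOK ROOM 2025-06-01"
wave = encode(msg)
print(f"{len(msg)} bytes -> {len(wave) / SAMPLE_RATE:.2f} s of audio")
```

Even this naive scheme packs the 20-byte request into under a second of audio; saying the same thing aloud would take several seconds. That gap is the kind of speedup the “jibber” demo exploits.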
The implications for customer communications are staggering. At some point in the not-so-distant future, one AI agent, acting on behalf of a customer, could “talk” to another AI agent representing a company in a way that’s unintelligible to humans.
This invites the potential for untold “secret” communications that the customer did not intend, or that may even be harmful to the customer, such as an AI agent agreeing to a lower refund or authorizing a payment the customer does not wish to make.
Create Your Agentic AI Risk Matrix
Once you have a general understanding of what Agentic AI can do, map out the ways a bad actor could use it to attack your contact center. Then create a risk matrix that plots the severity of each potential threat against its likelihood of occurring.
From this matrix, you can create a plan for remediating Agentic AI threats. Incorporate both proactive mitigations and reactive detections, and combine employee training, software tools, and business processes to build a robust program following defense-in-depth principles (a.k.a. the “Swiss cheese” model).
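As a rough illustration, here’s a minimal sketch of that scoring exercise in Python. The threat names and 1–5 scores are purely illustrative, not an assessment of any particular contact center:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    severity: int    # 1 (minor) .. 5 (critical)
    likelihood: int  # 1 (rare)  .. 5 (almost certain)

    @property
    def risk_score(self) -> int:
        # Classic risk-matrix scoring: severity weighted by likelihood.
        return self.severity * self.likelihood

# Example entries only; populate from your own threat-mapping exercise.
threats = [
    Threat("Deepfake voice impersonation of a customer", severity=5, likelihood=4),
    Threat("Automated, personalized phishing follow-ups", severity=4, likelihood=4),
    Threat("Agent-to-agent negotiation the customer never intended", severity=3, likelihood=2),
]

# Rank threats so remediation effort goes to the highest-risk items first.
for t in sorted(threats, key=lambda t: t.risk_score, reverse=True):
    print(f"{t.risk_score:>2}  {t.name}  (severity={t.severity}, likelihood={t.likelihood})")
```

However you build the matrix, the output that matters is the ranking: the highest severity-times-likelihood items are where training, tooling, and process changes should land first.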
Defending Against Agentic AI and Deepfakes
The average contact center agent may have no idea that deepfakes have become so sophisticated. A good starting point, therefore, is awareness training. Many vendors offer pre-built awareness training modules which cover various deepfakes and AI threats. Or you could create your own training using publicly available deepfake tools.
But awareness training alone is not enough to effectively counter Agentic AI and deepfake threats; one meta-analysis found that “Overall deepfake detection rates (sensitivity) were not significantly above chance.”
To properly defend against today’s deepfake-powered Agentic AI attacks, contact center leaders must deploy the latest in deepfake defense technologies.
Many available deepfake detection tools for contact centers rely on AI models trained to detect AI-generated voices. But this approach creates an arms race, in which defenders are always on their back foot: the defending AI models need to be trained on examples of attacking AI content, meaning that the attackers are always one step ahead.
The key to preventing Agentic AI threats to contact centers is to verify the actual person at the other end of the conversation. And the only way to do that reliably in the age of Agentic AI is to leverage the cryptographic potential of modern smartphones.
The device in your pocket is actually a highly sophisticated security tool. It contains numerous features that can be used to create a digitally signed chain of trust which validates the legitimacy of the data that flows between your phone and a contact center.
Properly implemented, this cryptographic signing process can effectively prevent digital injection attacks, in which false data such as a deepfake audio feed is inserted directly into the communication stream. This eliminates the most dangerous vector through which bad actors can leverage Agentic AI and deepfakes.
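To make the idea concrete, here’s a minimal sketch of that sign-and-verify flow using Python’s cryptography library. Treat it as an assumption-laden toy: real deployments generate and keep the private key inside the phone’s secure hardware (e.g., Secure Enclave or StrongBox) and pair it with device attestation, and the session fields shown are invented for illustration:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Device side. In a real deployment the private key is generated inside
# the phone's secure hardware and never leaves it; generating it in
# software here is purely for illustration.
device_key = ec.generate_private_key(ec.SECP256R1())
public_key = device_key.public_key()   # enrolled with the contact center in advance

# The signed payload binds this session to a specific caller and moment
# in time (fields are invented for this sketch).
payload = b"session=1234|caller=+15555550100|ts=2025-06-01T12:00:00Z"
signature = device_key.sign(payload, ec.ECDSA(hashes.SHA256()))

# Contact-center side: verify the signature before trusting the stream.
try:
    public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
    print("Verified: data originated from the enrolled device.")
except InvalidSignature:
    print("Rejected: possible injection or impersonation attempt.")
```

Because the private key never leaves the device, a fraudster injecting a deepfake stream from elsewhere has no way to produce a valid signature, so the forgery is caught before a human ever has to judge the voice.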
Identity verification solutions that leverage this technology can quickly and securely verify whomever an agent is speaking with. Anyone using Agentic AI or deepfakes to impersonate someone else will immediately be surfaced to the agent, preventing the bad actor from proceeding further.
Of course, technology alone is only a starting point. Although the threat is urgent, some systems take months or even years to fully deploy across your contact center. Other solutions leveraging cryptographic attestation, however, are able to deploy in minutes, immediately protecting contact center agents against automated Agentic AI attacks and human social engineering attempts.
Clearly, organizations need to stay competitive when it comes to adopting Agentic AI and other GenAI-powered technologies.
But this shouldn’t come at the expense of adding substantial risks. With the right defensive tools in place, contact centers can realize the benefits of Agentic AI while protecting themselves and their customers against Agentic AI-powered threats.