Deepfake Vishing: Why AI Voice Clones Bypass MFA

Isabelle Grant - Author avatar
Isabelle Grant
29 Oct 2025
5 min read
Profile of an Asian woman facing her AI-generated digital clone, connected by glowing blue data streams and futuristic interface elements.

AI is giving attackers a new weapon: cloned voices that trick humans faster than any hacker can crack code.

Multi-factor authentication (MFA) was supposed to make stolen passwords a thing of the past. But over the summer, attackers found a clever way around it — not by cracking the tech, but by tricking the people behind it. Welcome to the era of deepfake vishing.

Here’s how it works: attackers grab voice samples of an employee (from voicemails, Zoom recordings, or social media) and use AI to create a convincing clone. They call the company’s help desk, claim to be that employee, and ask for an MFA or password reset. The voice sounds real enough, and the story is urgent enough, that staff often comply. Security researchers have seen groups like Scattered Spider and ShinyHunters use this approach to bypass strong security controls.

Photo of a smartphone displaying “Voice Cloning in progress” with AI icon, symbolizing deepfake vishing and MFA bypass threats.
Smartphone screen showing an AI voice cloning process, highlighting risks of deepfake vishing attacks.

Why does it work so well? People trust voices. If you think you’re talking to your CFO or IT manager, you’re far less likely to push back. Combine that trust with AI-driven reconnaissance — where attackers know just enough details to sound believable — and it’s a recipe for compromise.

“A familiar voice is no longer proof — AI deepfakes are turning trust into the weakest link.”

And deepfake vishing is just one angle. Other campaigns are using AI to automate entire attacks. The GTG-2002 group, for example, used an AI agent to scan for VPNs, escalate privileges, steal credentials, and even calculate ransom demands. Meanwhile, researchers showed a new “ClickFix” technique where attackers plant hidden instructions inside web pages, which then get pulled into AI summaries and trick users into running malicious commands.

So what can you do about it? Start with the frontline. Train help desk teams to never rely on voice alone. Add secondary verification — like confirming through email or Slack — before resetting access. Make staff aware that “a familiar voice” isn’t proof anymore. And explore AI-powered anomaly detection tools that can spot the subtle tells of synthetic speech.

The bottom line: Hearing is no longer believing. If your defenses rely on trust alone, they're already broken.

👉 Ready to strengthen your defenses against AI-powered threats?

talk to a calder & lane advisor today
Isabelle Grant - Author avatar
Isabelle Grant
29 Oct 2025
3 min read