
Multi-factor authentication (MFA) was supposed to make stolen passwords a thing of the past. But over the summer, attackers found a clever way around it — not by cracking the tech, but by tricking the people behind it. Welcome to the era of deepfake vishing.
Here’s how it works: attackers grab voice samples of an employee (from voicemails, Zoom recordings, or social media) and use AI to create a convincing clone. They call the company’s help desk, claim to be that employee, and ask for an MFA reset or a password reset. The voice sounds real enough, and the story urgent enough, that staff often comply. Security researchers have seen groups like Scattered Spider and ShinyHunters use this approach to bypass otherwise strong security controls.

Why does it work so well? People trust voices. If you think you’re talking to your CFO or IT manager, you’re far less likely to push back. Combine that trust with AI-driven reconnaissance — where attackers know just enough details to sound believable — and it’s a recipe for compromise.
“A familiar voice is no longer proof — AI deepfakes are turning trust into the weakest link.”
And deepfake vishing is just one angle. Other campaigns are using AI to automate entire attacks. The GTG-2002 group, for example, used an AI agent to scan for VPNs, escalate privileges, steal credentials, and even calculate ransom demands. Meanwhile, researchers showed a new “ClickFix” technique where attackers plant hidden instructions inside web pages, which then get pulled into AI summaries and trick users into running malicious commands.
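To make the ClickFix angle concrete, here is a minimal defensive sketch in Python: strip everything that never renders from a fetched page before an AI summarizer ever sees it, since that invisible layer is exactly where hidden instructions live. This is an assumption about where such a filter could sit, not any vendor's actual pipeline; it uses BeautifulSoup and only catches inline hiding tricks (content hidden via external CSS is out of scope).

```python
import re
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Inline styles commonly used to hide injected instructions from humans
# while leaving them readable to scrapers and AI summarizers.
HIDDEN_STYLE = re.compile(
    r"display\s*:\s*none|visibility\s*:\s*hidden|opacity\s*:\s*0"
)

def visible_text_only(html: str) -> str:
    """Return only the text a human would actually see on the page."""
    soup = BeautifulSoup(html, "html.parser")
    # Drop elements that never render at all.
    for tag in soup(["script", "style", "noscript", "template"]):
        tag.decompose()
    # Drop elements hidden via inline styles.
    for tag in soup.find_all(style=HIDDEN_STYLE):
        tag.decompose()
    # Drop elements carrying the HTML `hidden` attribute.
    for tag in soup.find_all(attrs={"hidden": True}):
        tag.decompose()
    return soup.get_text(separator=" ", strip=True)
```

Feeding `visible_text_only(page)` rather than raw HTML to a summarizer won't stop every injection, but it removes the cheapest hiding spots.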
So what can you do about it? Start with the frontline. Train help desk teams to never rely on voice alone. Add secondary verification — like confirming through email or Slack — before resetting access. Make staff aware that “a familiar voice” isn’t proof anymore. And explore AI-powered anomaly detection tools that can spot the subtle tells of synthetic speech.
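As a sketch of what that secondary verification can look like in practice, here is a minimal Python gate for MFA resets. The `send_out_of_band` callable is a hypothetical placeholder for whatever email, Slack, or push integration you run; the point is that the reset never proceeds on the voice call alone.

```python
import secrets

def approve_mfa_reset(employee_id: str, send_out_of_band) -> bool:
    """Allow a reset only if the caller proves control of a second channel.

    `send_out_of_band` delivers a message over a channel a cloned voice
    cannot spoof (the employee's email, Slack DM, or authenticator push).
    """
    code = f"{secrets.randbelow(10**6):06d}"  # one-time 6-digit code
    send_out_of_band(employee_id, f"Help desk reset code: {code}")
    read_back = input(f"Code read back by caller for {employee_id}: ").strip()
    # Constant-time comparison; a wrong or missing code blocks the reset.
    return secrets.compare_digest(read_back, code)
```

The channel matters less than the shape: the help desk acts on proof that the caller controls something tied to the real employee, never on how familiar the voice sounds.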
👉 Ready to strengthen your defenses against AI-powered threats?