
Threat actors, from opportunistic attackers to nation-aligned groups, increasingly use AI and large language models (LLMs) to improve and scale social engineering. These attacks include spearphishing lures, deepfakes for vishing, and large batches of highly personalized messages that materially raise success rates for credential theft and fraud. Deepfake abuse in particular has become a mainstream enterprise threat: attackers routinely use AI-generated audio, video and imagery to impersonate executives, pressure staff and bypass verification checks.
How prevalent are AI-driven social engineering attacks?
Synthetic media has become a mainstay of the attacker playbook across hybrid cloud and on-prem environments. About 85% of organizations reported a deepfake-related incident in the past year, and deepfake-enabled scams caused more than $200 million in losses in Q1 2025. Recent examples we've seen include:
- Ongoing campaigns since April 2025 in which malicious actors used AI-generated text and voice to impersonate senior U.S. officials via smishing and vishing, prompting FBI guidance on verification and reporting.
- Documented use of LLMs to draft phishing lures, and named threat clusters observed interacting with LLMs to support social engineering operations.
- Case summaries from OWASP and leading vendors describing combined email and voice-deepfake attacks that convinced staff to authorize payments or hand over credentials.
- Reports from large security vendors that adversaries are adopting GenAI to create realistic profiles, tailored messaging and deepfakes, increasing the reach and believability of their campaigns.
What is the business impact of AI-powered attacks?
- Credibility & Scale: By removing linguistic errors and automating personalization, attackers achieve higher success rates with targeted fraud and business email compromise (BEC) schemes at a fraction of previous costs.
- Multimodal Deception: Combining AI-generated voice (vishing) with tailored emails bypasses traditional verification, leading to unauthorized wire transfers and credential theft.
- Compounded Risk: Successful impersonation or data theft can trigger regulatory reporting, customer losses and long‑term brand damage.
How can defenders respond to these AI-driven threats?
Enterprises are responding by tightening identity workflows and rethinking how trust is established. Adoption of AI-powered voice and video detection tools is accelerating, alongside liveness-backed MFA and zero-trust access controls.
Here are some examples of defensive measures we recommend:
- Phishing-Resistant Authentication: Replace SMS/email one-time passwords (OTPs) with hardware security keys or platform MFA for all high-risk functions.
- Hardened Financial Protocols: Require multi‑channel verification for wire transfers and create explicit “out‑of‑band” confirmation policies for payments and account changes.
- AI-Aware Security: Deploy advanced, AI‑aware email protections like pre‑delivery analysis, behavioral anomaly detection, and contextual banners for risky messages.
- Executive Awareness: Brief leadership on deepfake risks and require explicit verification workflows for sensitive requests from executives.
- Team Training: Train and test with realistic simulations that include AI-generated content, educating employees to verify unexpected requests through known channels.
Defense requires modernizing identity assurance, strengthening segmentation and treating synthetic manipulation as a baseline threat.
Sources
Pindrop – Why Deepfakes in Enterprise Communications Are an Urgent Threat
IRONSCALES – Fall 2025 Threat Report
Keepnet Labs – Deepfake Statistics & Trends 2025
ZeroThreat.ai – Deepfake & AI Phishing Statistics
Gartner – Identity Verification and Authentication Solutions Unreliable in Isolation by 2026
FBI / IC3 Public Service Announcement – Senior U.S. officials impersonated in malicious messaging campaign
Microsoft Security Blog – Defending against evolving identity attack techniques
CrowdStrike – 2025 reporting on GenAI powering social engineering
OWASP GenAI Incident & Exploit Round-Up – Jan–Feb 2025
CISA/NSA/FBI/MS‑ISAC joint phishing guidance – Phishing Guidance: Stopping the Attack Cycle at Phase One
Published By: Daniel Parker, VP of Ethical Hacking, NetWorks Group
Publish Date: March 12, 2026