AI-powered phishing attacks surge to 77% of all cyber incidents

Cybercriminals have found their new favorite tool, and it’s the same technology transforming legitimate business operations: generative artificial intelligence. This isn’t just another incremental shift in the threat landscape—AI is fundamentally changing how attacks are crafted, delivered, and executed at scale.

Recent research from Mimecast, a leading email security company, reveals that phishing attacks now account for 77% of all cybersecurity incidents, up dramatically from 60% in 2024. This surge directly correlates with cybercriminals’ rapid adoption of AI tools that can generate convincing fake communications, create synthetic voices, and automate previously labor-intensive social engineering campaigns.

The implications extend far beyond email. Financial institutions, government agencies, and businesses across every sector are facing increasingly sophisticated threats that traditional security measures struggle to detect. Here’s what business leaders need to understand about this AI-powered threat evolution and how to defend against it.

How AI supercharges cybercriminal operations

Generative AI has eliminated many of the traditional barriers that limited cybercriminal effectiveness. Previously, successful phishing campaigns required native-level language skills, deep knowledge of corporate communication styles, and significant manual effort to customize attacks for specific targets.

AI tools now enable threat actors to create flawless impersonations of vendors, executives, or coworkers within minutes. These systems can generate entire email threads that perfectly match corporate communication patterns, complete with appropriate jargon, formatting, and contextual references that would typically take human attackers weeks to research and craft.

The technology extends beyond text generation. Cybercriminals are leveraging AI to create synthetic voices for phone-based attacks and realistic audio messages that can bypass traditional detection systems. This multichannel approach—combining convincing emails with follow-up phone calls using cloned voices—creates a level of authenticity that even security-conscious employees find difficult to question.

The rise of business email compromise 2.0

Business email compromise (BEC) attacks, where criminals impersonate executives or vendors to trick employees into transferring money or sensitive information, have become significantly more dangerous with AI enhancement. These attacks traditionally relied on criminals manually researching target companies and crafting personalized messages—a time-consuming process that limited their scale.

AI has transformed this equation entirely. Mimecast identified a global invoice fraud campaign where AI-generated messages successfully urged recipients to approve payments by perfectly mimicking vendor communication styles and incorporating legitimate-seeming invoice details. The messages were so convincing that they bypassed both technical filters and human scrutiny.

The sophistication extends to creating entire fictional business relationships. Criminals can now generate months of seemingly legitimate correspondence, building trust through AI-crafted emails that reference real industry events, mutual connections, and specific business challenges relevant to their targets.

ClickFix attacks surge fivefold

Among the most concerning developments is the explosive growth of ClickFix attacks, which increased fivefold year over year and now represent approximately 8% of all recorded security incidents in the first half of 2025. These attacks exploit users' instinct to fix technical problems by clicking on malicious links disguised as system updates or error corrections.

AI enhances ClickFix campaigns by generating highly specific error messages that perfectly match the software and systems employees actually use. Instead of generic “Your system needs updating” messages, AI can craft alerts that reference specific software versions, include realistic error codes, and match the exact visual styling of legitimate system notifications.

Criminals are also weaponizing trusted business tools to deliver these attacks. Legitimate services like DocuSign, Salesforce, and Adobe Pay are being systematically abused to host malicious content, while authentic CAPTCHA services are repurposed to hide phishing campaigns behind seemingly legitimate security checks.
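
For defenders, one practical consequence is that a familiar hosting platform is no longer a trust signal in itself. As a minimal sketch, assuming a mail-filtering pipeline that can inspect extracted URLs, the Python snippet below routes links on commonly abused platforms to extra scrutiny; the domain list and the `needs_extra_scrutiny` policy are illustrative assumptions, not detection rules from Mimecast's research.

```python
from urllib.parse import urlparse

# Platforms the research says are being abused to host lures. Treat links
# to them as "needs scrutiny" rather than "implicitly safe" -- this list
# and the policy are illustrative assumptions, not vendor rules.
SCRUTINIZED_HOSTS = {
    "docusign.net", "docusign.com",
    "salesforce.com", "force.com",
    "adobe.com",
}

def needs_extra_scrutiny(url: str) -> bool:
    """Flag links hosted on trusted services for deeper inspection."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == h or host.endswith("." + h) for h in SCRUTINIZED_HOSTS)

print(needs_extra_scrutiny("https://na3.docusign.net/signing/xyz"))   # True
print(needs_extra_scrutiny("https://intranet.example.com/update"))    # False
```

In practice such a flag would feed a broader scoring pipeline rather than block outright, since these platforms also carry large volumes of legitimate traffic.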

The Scattered Spider phenomenon

The scale of AI-powered threats becomes clear when examining individual threat actors. Scattered Spider, a particularly prolific cybercriminal group, was linked to more than 900,000 security detections—a volume that would be impossible to achieve through traditional manual attack methods.

This group exemplifies how AI enables criminal operations to scale exponentially. Rather than targeting dozens of victims with carefully crafted individual attacks, AI-powered groups can simultaneously target thousands of organizations with personalized, convincing campaigns that adapt in real time based on victim responses.

Building AI-resistant defenses

Defending against AI-powered threats requires a fundamental shift from traditional security approaches. The old model of detecting obvious phishing indicators—poor grammar, generic greetings, suspicious sender addresses—becomes largely ineffective when criminals can generate perfect communications.

Multi-factor authentication (MFA) remains crucial, but organizations need advanced email defenses that use anomaly detection and AI models to identify subtle behavioral patterns rather than obvious content red flags. These systems analyze communication patterns, timing anomalies, and contextual inconsistencies that human reviewers might miss.
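
As a minimal sketch of that behavioral approach, the Python snippet below scores an inbound message against a per-sender baseline of sending hours and reply-to addresses. The `EmailEvent` structure, the profiles, and the weights are invented for illustration and are not drawn from any vendor's product.

```python
from dataclasses import dataclass

@dataclass
class EmailEvent:
    sender: str
    reply_to: str
    sent_hour: int                    # 0-23, in the sender's usual timezone
    contains_payment_request: bool

# Toy baseline built from historical mail flow; a real system would
# learn these profiles continuously rather than hard-code them.
SENDER_PROFILES = {
    "invoices@vendor.example": {
        "usual_hours": range(8, 18),              # observed sending window
        "known_reply_to": {"invoices@vendor.example"},
    },
}

def anomaly_score(event: EmailEvent) -> int:
    """Score behavioral red flags instead of content red flags."""
    profile = SENDER_PROFILES.get(event.sender)
    score = 0
    if profile is None:
        score += 2                                # first-time sender
    else:
        if event.sent_hour not in profile["usual_hours"]:
            score += 1                            # timing anomaly
        if event.reply_to not in profile["known_reply_to"]:
            score += 3                            # hijacked reply path
    if event.contains_payment_request:
        score += 2                                # high-risk intent
    return score

msg = EmailEvent("invoices@vendor.example",
                 "billing@vendor-payments.example",  # unfamiliar reply-to
                 sent_hour=3, contains_payment_request=True)
print(anomaly_score(msg))  # 6 -> route to quarantine or manual review
```

The point is the shift in signal: instead of asking whether an email looks suspicious, the system asks whether this sender is behaving like themselves.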

Employee awareness programs must evolve beyond teaching workers to spot “suspicious” emails. Modern training should focus on verification procedures—requiring employees to confirm unusual requests through separate communication channels, even when messages appear completely legitimate.

Organizations should implement multi-layered security frameworks that include endpoint protection, network monitoring, and specific checks for trusted service abuse. This approach recognizes that AI-powered attacks will inevitably bypass some defenses, making detection and response capabilities as important as prevention.

The verification imperative

Perhaps most importantly, businesses must establish verification protocols that assume any digital communication could be compromised. This means creating policies requiring independent confirmation of financial requests, access changes, or sensitive information sharing, regardless of how legitimate the initial request appears.
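
Expressed as code, such a protocol amounts to a simple gate that refuses to act on sensitive requests until an out-of-band confirmation has been recorded. The Python sketch below is hypothetical: the request categories mirror those named above, while the `approve` logic and its policy set are assumptions.

```python
from enum import Enum, auto

class RequestType(Enum):
    PAYMENT = auto()
    ACCESS_CHANGE = auto()
    DATA_SHARE = auto()
    ROUTINE = auto()

# Categories that always require confirmation over a separate,
# independently established channel, no matter how legitimate the
# originating message looks. An illustrative policy assumption.
OUT_OF_BAND_REQUIRED = {RequestType.PAYMENT,
                        RequestType.ACCESS_CHANGE,
                        RequestType.DATA_SHARE}

def approve(request_type: RequestType, verified_out_of_band: bool) -> bool:
    """Refuse sensitive requests until confirmed on a second channel."""
    if request_type in OUT_OF_BAND_REQUIRED:
        return verified_out_of_band
    return True

# A convincing AI-written invoice email alone is not enough:
print(approve(RequestType.PAYMENT, verified_out_of_band=False))  # False
```

The deliberate design choice is that nothing inside the original message can satisfy the gate, including a callback phone number the attacker helpfully supplied.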

The rise of AI-powered cybercrime doesn’t just change the technical security landscape—it fundamentally alters the trust assumptions that underpin business communications. Organizations that adapt their processes to this new reality, combining advanced technical defenses with robust verification procedures, will be best positioned to maintain security as these threats continue evolving.

The message for business leaders is clear: the era of easily detectable cybercrime is ending. Success in this new landscape requires treating every digital communication with appropriate skepticism while investing in both technology and processes designed for an AI-powered threat environment.
