AI-Powered Cyber Attacks: What's Changed and What to Do About It
AI hasn't created entirely new attack categories — but it has made existing attacks faster, more convincing, and available to a vastly larger pool of attackers. Here's what's actually changed in 2025.
There's a lot of noise about AI and cybersecurity. Some of it is hype. Some of it matters. This article focuses on the latter: the specific ways attackers are using AI that have materially changed the threat landscape for SMBs in 2025 — and the defensive adjustments that correspond to each.
AI-Generated Phishing Emails
Risk level: High

The era of obvious phishing emails — broken English, bizarre formatting, implausible pretexts — is ending. Generative AI models produce grammatically perfect, contextually aware phishing emails at scale. Attackers feed in a target's LinkedIn profile, their company website, and recent news about their industry, and generate a personalised spear-phishing email in seconds.
What's changed
Volume is up. Quality is up. Eyeballing an email for bad grammar is no longer a reliable filter. Attackers can send millions of highly personalised emails that would previously have required a human researcher for each target.
Defence
Behaviour-based email filtering (looking at link destinations, domain age, attachment behaviour) rather than content analysis. Multi-step verification for any financial request, regardless of how convincing the email looks.
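To make the distinction concrete, here is a minimal, hypothetical sketch of behaviour-based scoring: instead of analysing the prose, it scores signals like domain age, off-domain links, and brand-spoofing display names. The signal names, weights, and threshold are all illustrative assumptions, not any vendor's real rule set.

```python
from dataclasses import dataclass, field

@dataclass
class EmailSignals:
    sender_domain_age_days: int            # assumed resolved upstream (e.g. via WHOIS)
    sender_domain: str = ""
    display_name: str = ""
    link_domains: list = field(default_factory=list)
    has_macro_attachment: bool = False

def risk_score(sig: EmailSignals, known_brands=("microsoft", "docusign")) -> int:
    """Score an email on behaviour, not content. Weights are illustrative."""
    score = 0
    if sig.sender_domain_age_days < 30:     # freshly registered sending domain
        score += 40
    if any(d != sig.sender_domain for d in sig.link_domains):
        score += 20                         # links point somewhere other than the sender
    if any(b in sig.display_name.lower() and b not in sig.sender_domain
           for b in known_brands):
        score += 30                         # brand name in display name, not in domain
    if sig.has_macro_attachment:
        score += 25
    return score

# A grammatically perfect email can still light up every behavioural signal:
suspicious = EmailSignals(
    sender_domain_age_days=5,
    sender_domain="m365-helpdesk.example",
    display_name="Microsoft 365 Support",
    link_domains=["payr0ll-update.example"],
    has_macro_attachment=True,
)
print(risk_score(suspicious))  # 40 + 20 + 30 + 25 = 115
```

Note that nothing in the scoring function reads the email body — which is exactly why it still works when the body is AI-written.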
Deepfake Voice Calls (Vishing 2.0)
Risk level: Critical

Real-time voice cloning allows an attacker to impersonate your CEO, your CFO, or your IT provider with a convincing voice replica — generated from as little as 30 seconds of publicly available audio (LinkedIn videos, YouTube appearances, earnings calls). These calls are used to authorise wire transfers, reset passwords, or bypass security checks.
What's changed
Previously, a phone call from a known voice was considered a strong verification method. That assumption is now wrong. Multiple SMBs and mid-market companies have lost $100,000+ to AI voice fraud in 2024–2025.
Defence
Establish out-of-band verification protocols for sensitive requests: a pre-agreed codeword, a callback to a number on file rather than the one calling, or a secondary approval channel for any financial authorisation.
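The callback rule can be expressed as a simple policy check. This sketch is hypothetical — the request types, directory, and email addresses are invented for illustration — but it captures the core invariant: approval requires a callback to a number you already hold, never one the caller provides.

```python
# Numbers on file, maintained in your own directory — never taken from the request.
DIRECTORY = {
    "cfo@example.com": "+44-20-7946-0000",
}

# Request types that always require out-of-band verification (illustrative set).
SENSITIVE = {"wire_transfer", "password_reset", "credential_change"}

def approve(request_type: str, requester: str, callback_number: str) -> bool:
    """Approve a sensitive request only if the callback went to the number on file."""
    if request_type not in SENSITIVE:
        return True                       # routine requests need no callback
    on_file = DIRECTORY.get(requester)
    return on_file is not None and callback_number == on_file

# Attacker supplies their own "call me back on this number" — rejected:
print(approve("wire_transfer", "cfo@example.com", "+44-7700-900123"))   # False
# Callback made to the directory number on file — approved:
print(approve("wire_transfer", "cfo@example.com", "+44-20-7946-0000"))  # True
```

The point of encoding it this way is that the decision no longer depends on how convincing the voice on the line was.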
AI-Accelerated Vulnerability Scanning
Risk level: High

Attackers are using AI to scan for vulnerabilities, prioritise targets, and tailor exploits — faster than patching cycles can respond. What once required a skilled human penetration tester (hours to days) now takes minutes with AI-assisted tooling.
What's changed
The window between a vulnerability being disclosed and it being exploited in the wild has shrunk from weeks to days or hours. SMBs that don't patch quickly are targeted before they know a patch exists.
Defence
Automated patch management, vulnerability scanning on your own infrastructure (so you know your exposure before attackers do), and a rapid-response process for critical CVEs.
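The triage step can be reduced to a version comparison against known-critical fixes. The advisory data below is invented for the sketch — a real process would pull fixed-version information from NVD or vendor advisories — but the logic is the same: anything below the first fixed release goes to the top of the patch queue.

```python
def parse(v: str) -> tuple:
    """Turn '3.0.12' into (3, 0, 12) so versions compare numerically."""
    return tuple(int(p) for p in v.split("."))

# Hypothetical advisory data: package -> first version containing the fix.
CRITICAL_FIXES = {
    "openssl": "3.0.12",
    "exampleapp": "2.4.1",
}

def needs_urgent_patch(installed: dict) -> list:
    """Return packages running below the first fixed release of a critical CVE."""
    return sorted(
        name for name, ver in installed.items()
        if name in CRITICAL_FIXES and parse(ver) < parse(CRITICAL_FIXES[name])
    )

installed = {"openssl": "3.0.9", "exampleapp": "2.4.1", "nginx": "1.25.3"}
print(needs_urgent_patch(installed))  # ['openssl']
```

When the exploitation window is hours, running this kind of check on a schedule — rather than when someone happens to read an advisory — is the difference that matters.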
AI-Generated Malware and Code
Risk level: Medium-High

Threat actors are using large language models to write malware variants that evade signature-based detection. Each variant is slightly different — defeating antivirus tools that rely on known signatures.
What's changed
The supply of novel malware variants has increased significantly. Traditional antivirus (signature-matching) is even less effective than it was a year ago. Behavioural detection (EDR) is now the minimum viable standard.
Defence
Endpoint Detection & Response (EDR) rather than antivirus. EDR looks at behaviour — what a process is doing — rather than matching it against a known-bad signature library.
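A toy illustration of the signature-versus-behaviour distinction. The event names and thresholds here are invented for the sketch, not any EDR product's real detection logic — but they show why a brand-new variant slips past a hash lookup while its behaviour still gives it away.

```python
# Parent/child process pairs that rarely occur legitimately (illustrative set).
SUSPICIOUS_CHAINS = {
    ("winword.exe", "powershell.exe"),   # a document spawning a shell
    ("outlook.exe", "cmd.exe"),
}

def signature_match(file_hash: str, known_bad: set) -> bool:
    """Signature engine: fails on every variant it hasn't seen before."""
    return file_hash in known_bad

def behaviour_match(parent: str, child: str, files_written: int) -> bool:
    """Behavioural engine: flags what the process does, not what it is."""
    if (parent.lower(), child.lower()) in SUSPICIOUS_CHAINS:
        return True
    return files_written > 500           # mass file modification, ransomware-like

# A brand-new AI-generated variant: unknown hash, familiar behaviour.
print(signature_match("a1b2c3", known_bad={"deadbeef"}))    # False — never seen
print(behaviour_match("WINWORD.EXE", "powershell.exe", 3))  # True — acts malicious
```

An attacker can regenerate the binary endlessly, but the behaviour the malware needs to exhibit to do its job is much harder to disguise.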
AI-Powered Social Engineering Research
Risk level: Medium

AI tools can scrape LinkedIn, company websites, public filings, and social media to build a detailed profile of an organisation's structure, key personnel, vendor relationships, and internal language — in minutes. Attackers use this to make impersonation attacks (BEC, vishing) dramatically more convincing.
What's changed
The amount of background research required to stage a convincing impersonation attack has dropped to near-zero. Your digital footprint — however minimal you think it is — provides enough surface area for a well-staged attack.
Defence
Audit your organisation's public digital footprint. Limit unnecessary employee information visible on LinkedIn. Train staff that detailed knowledge of your business is no longer evidence of legitimacy.
The Deepfake Voice Story That Should Be on Your Radar
In early 2024, a finance employee at a multinational company in Hong Kong was convinced to transfer $25 million USD after a deepfake video call appeared to show his CFO and other senior colleagues authorising the transaction. Every participant on the call except the victim was an AI-generated deepfake.
This was a large company with a sophisticated attacker. But the technology is no longer limited to large operations. Real-time voice cloning tools are commercially available for a few hundred dollars a month and require no technical expertise beyond feeding in a voice sample. The SMB version of this attack — a fake CEO call authorising a wire transfer — is actively occurring.
The Seven Defences That Actually Counter AI-Enhanced Attacks
- Behaviour-based email filtering — evaluate what links and attachments do, not how they look
- Out-of-band verification for any financial request, regardless of channel
- EDR over traditional antivirus — behavioural detection catches AI-generated malware variants
- Rapid patch management — the exploitation window is now hours, not weeks
- Verification protocols for executive requests — a pre-agreed codeword or callback procedure
- Staff training on AI-specific threats — employees who know deepfakes exist are harder to fool
- Dark web monitoring — catch credential exposure before attackers use AI to leverage it
What This Means for Your Training Programme
The most important mindset shift is teaching your staff that "convincing" is no longer evidence of "legitimate". An email that's grammatically perfect and personally relevant can still be AI-generated. A voice call in a familiar voice can still be a deepfake.
The counter is process, not perception. Any request involving financial authorisation, credential changes, or sensitive data access should require a verification step that happens through a different channel than the one the request arrived on. That channel should be a number you know, not one provided by the caller.
The Optimistic View
AI is also improving defences. Behavioural analytics, anomaly detection, and threat intelligence platforms all benefit from AI — which means an MSSP with modern tooling gets better at finding attacks faster as the technology improves. The key is that your defences are using AI too, not just your attackers.
Related reading: Social Engineering: 5 Human Hacking Tactics Targeting Your Employees, Business Email Compromise: The $50 Billion SMB Threat, Security Awareness Training: Turn Your Team Into a Security Layer.
Is Your Business Ready for AI-Enhanced Attacks?
We'll assess your current defences against the specific attack patterns being used in 2025 and tell you exactly where your gaps are.
Book Free Assessment