How AI Is Supercharging Phishing Attacks... And What Finance Firms Must Do Now

AI is changing the way businesses operate, but it’s also giving cybercriminals new ways to run more convincing phishing attacks and fraud campaigns. For finance firms, this means the old signs of a phishing message are no longer reliable. Attackers can now generate messages that look polished, personal and urgent.

Microsoft found that AI-generated phishing emails are about 4.5 times more likely to get someone to click compared with traditional attempts (reported by The Register). Mimecast also reported a sharp 500 percent rise in AI-assisted phishing and “ClickFix” schemes.

This is exactly why finance teams need to be prepared. Attackers are scaling up, improving accuracy and using automation to test what works in real time.

How AI Makes Phishing More Effective

AI improves social engineering
Cybercriminals can gather public information and feed it into an AI model to generate emails that sound like real colleagues or clients. WebAsha explains how this kind of personalised messaging gives attackers an edge:
https://www.webasha.com/blog/how-cybercriminals-use-ai-to-create-convincing-phishing-scams-the-rise-of-ai-driven-social-engineering

Professional tone, fewer mistakes
Generative AI produces clean, polished copy. That alone makes these emails far harder to dismiss at a glance. Cybersecurity Institute breaks down why this matters:
https://www.cybersecurityinstitute.in/blog/how-generative-ai-is-being-weaponized-for-phishing-a-2025-study-insight

Deepfakes and “vishing”
Attackers are also using AI to generate realistic voice calls or video messages that imitate executives. TechTarget outlines how deepfake-based fraud is evolving:
https://www.techtarget.com/searchSecurity/tip/Generative-AI-is-making-phishing-attacks-more-dangerous

Bypassing older security tools
Because these messages look legitimate, they often slip past traditional security filters. DMARC Report explains how attackers tune messages to evade older defences:
https://dmarcreport.com/blog/ai-powered-phishing-2025-how-intelligent-attacks-outsmart-cybersecurity-defenses/
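One baseline control against spoofed senders is a DMARC policy published in DNS, the protocol behind the DMARC Report site linked above. A record along these lines (shown for the placeholder example.com, with an illustrative reporting address) tells receiving mail servers to quarantine messages that fail SPF/DKIM alignment and to send aggregate reports back to the domain owner:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
```

DMARC will not stop a well-crafted phishing email sent from an attacker's own domain, but it does close off the easiest route: forging your firm's domain in the From address.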

How Maple Helps Finance Firms Stay Ahead

Finance firms across London rely on Maple to strengthen their defences against AI-assisted attacks. We focus on three areas:

Detecting unusual communication patterns

Our AI-aware monitoring tools analyse email behaviour and writing style to spot suspicious messages that appear legitimate at first glance.

Training teams to identify AI-generated threats

We run training sessions that help teams recognise subtle signs of AI-written content, manipulative language and deepfake audio cues.

Deploying modern, AI-aware security tools

We help clients roll out advanced security layers designed for today’s threats, including tools that identify manipulated text, audio and video.

Practical Steps Finance Teams Should Take Now

1. Run realistic training using AI-style messages

Platforms like Hoxhunt are helpful for simulations that feel authentic without risking exposure:
https://en.wikipedia.org/wiki/Hoxhunt
Encourage staff to pause and verify before acting, especially for payment-related requests.

2. Upgrade email security

Cybernetic GI has a good summary of how phishing has evolved with AI:
https://www.cyberneticgi.com/2025/06/20/ai-powered-phishing-how-cybercriminals-are-evolving-their-tactics
Look for tools that assess tone and context, not just keywords or known bad domains.
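To see why keyword matching alone falls short against polished AI copy, here is a toy sketch. The word lists, scoring and thresholds are purely illustrative, not any real product's logic: a classic blocklist misses a clean, professional payment request, while combining several weak contextual signals still surfaces it.

```python
import re

# Illustrative word lists only -- real filters are far more sophisticated.
BLOCKLIST = {"lottery", "prince", "jackpot"}          # classic spam keywords
URGENCY_CUES = {"urgent", "immediately", "today", "asap"}
PAYMENT_CUES = {"invoice", "payment", "transfer", "iban", "wire"}

def keyword_filter(body: str) -> bool:
    """Old-style check: flag only if a known-bad keyword appears."""
    words = set(re.findall(r"[a-z]+", body.lower()))
    return bool(words & BLOCKLIST)

def context_score(body: str, sender_is_external: bool) -> int:
    """Combine weak signals: urgency language, payment talk, external sender."""
    words = set(re.findall(r"[a-z]+", body.lower()))
    score = 0
    score += 1 if words & URGENCY_CUES else 0
    score += 1 if words & PAYMENT_CUES else 0
    score += 1 if sender_is_external else 0
    return score  # a score of 2+ from an external sender warrants a closer look

email = ("Hi, please process the attached invoice today. "
         "The transfer must go out immediately. Thanks, James")

print(keyword_filter(email))       # False: polished copy trips no blocklist word
print(context_score(email, True))  # 3: urgency + payment talk + external sender
```

The point is not the specific cues but the shape of the approach: modern tools score tone, intent and sender context together rather than looking for a single giveaway word.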

3. Verify unexpected voice or video communication

Deepfake audio is more convincing than most people realise. Tools like Vastav AI help analyse content for manipulation:
https://en.wikipedia.org/wiki/Vastav_Ai
Always confirm unexpected financial requests through a known secure channel.

4. Follow a zero-trust approach

Limit permissions, require MFA and use role-based access. Teach staff to spot lookalike domains used in business email compromise scams.
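Lookalike domains often swap visually similar characters, for example "rn" in place of "m". A simple fuzzy comparison against an allow-list of domains your firm genuinely deals with can catch many of these near-misses. This is a minimal sketch using Python's standard-library difflib; the domain list and similarity threshold are illustrative assumptions, not a production configuration:

```python
import difflib

# Hypothetical allow-list: domains the firm legitimately corresponds with.
KNOWN_DOMAINS = {"maplenetworks.co.uk", "barclays.co.uk", "hmrc.gov.uk"}

def is_lookalike(domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that closely resemble, but do not match, a known domain."""
    domain = domain.lower().strip()
    if domain in KNOWN_DOMAINS:
        return False  # exact match: legitimate sender domain
    for known in KNOWN_DOMAINS:
        # Ratio near 1.0 means the strings are almost identical.
        if difflib.SequenceMatcher(None, domain, known).ratio() >= threshold:
            return True  # near-miss: likely a lookalike registration
    return False

print(is_lookalike("rnaplenetworks.co.uk"))  # True: "rn" imitating "m"
print(is_lookalike("maplenetworks.co.uk"))   # False: exact match
```

A check like this could run on inbound sender domains or be folded into staff training examples; commercial BEC protections apply the same idea with richer homoglyph and reputation data.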

5. Use AI defensively

Research tools like Cyri show how AI can be used to analyse message semantics to flag potential phishing:
https://arxiv.org/abs/2502.05951
Adaptive training tools are emerging as well:
https://arxiv.org/abs/2502.03622

6. Keep up with threat intelligence

Reports like KELA’s AI Threat Report help leaders understand evolving risks:
https://info.ke-la.com/hubfs/Reports/KELA%20Report%20-%202025%20AI%20Threat%20Report.pdf

AI is raising the stakes for everyone in finance. Attackers are using it to scale their efforts and sharpen their tactics, making it harder for teams to spot malicious messages. Preparation and the right tools make all the difference.

Maple supports finance firms throughout London with training, monitoring and AI-aware security solutions.