AI Scams Are On The Rise. What Can You Do?

If you feel like scam emails, texts, and calls got harder to spot last year, that’s not your imagination. 2025 marked a shift from “obvious fraud” to messages that look, sound, and behave like the real thing. Grammar mistakes stopped being a reliable tell. The tone became local, natural, and tailored. In some cases, the voice on the phone even sounded familiar.

Security teams have been warning about this for a while. The FBI has said criminals are using AI to craft convincing emails and voice or video messages that support fraud against individuals and businesses. The UK’s National Cyber Security Centre has also highlighted how generative AI can scale and sharpen deception, raising the overall quality of lures that used to be easy to dismiss.

What changed in 2025 is that these capabilities became easier to use, cheaper, and faster. That combination matters more than any single tool.

AI scams still follow the same playbook

Most AI scams still follow a familiar pattern:

  1. Build a believable identity
  2. Make contact in the channel you are most likely to trust
  3. Create urgency or secrecy
  4. Move you to an action that can’t be reversed quickly (a bank transfer, a password reset, a gift card, a crypto payment, a change of supplier bank details)
  5. Disappear, or keep you talking long enough to repeat the cycle

AI strengthened steps 1 to 3, and it did so at scale.

What AI improved in 2025

1) Written messages became “native”

For years, people were trained to spot awkward phrasing and strange formatting. That training aged badly in 2025. AI can generate messages that read like an internal colleague, a regular supplier, or a customer who knows your working style.

Microsoft’s Digital Defense Report 2025 includes a striking example of how this changes outcomes. It reported that AI-automated phishing emails achieved far higher click-through rates than standard attempts (54% vs 12% in the example cited), and noted the economic incentive for criminals when high-quality targeting can be produced quickly.

That lines up with what many teams felt: people weren’t falling for “obvious” scams. They were falling for messages that looked routine.

2) Voice and video impersonation stopped sounding like science fiction

Voice cloning moved from rare novelty to a practical criminal tool. In the UK, Which? reported in March 2025 that a quarter of scam calls were AI-powered, highlighting how criminals can impersonate trusted roles such as banks, police, or service providers.

Law enforcement and policy bodies are also treating this as a mainstream threat. Europol’s EU-SOCTA 2025 report notes that AI-powered voice cloning and live video deepfakes amplify fraud, extortion, and identity theft risks. A European Parliament briefing on scam calls and generative AI also discusses how widely available voice samples can support convincing impersonation.

The practical impact inside organisations is simple: “I heard it from them on the phone” no longer proves identity.

3) AI scams became more persistent and more personalised

Classic phishing was often a one-shot attempt. AI-enabled fraud can behave more like a patient salesperson: it follows up, adapts, answers questions, and keeps the story consistent.

The FBI has warned that criminals exploit generative AI to commit fraud on a larger scale and to increase believability, because it reduces the time and effort required to deceive targets. That matters because the highest-performing scams are rarely about one message. They’re about a conversation that steers you, step by step, toward a decision.

Where this hits organisations hardest

The most damaging scams tend to target processes, not people in isolation. A few examples that continue to cause real losses:

Payment diversion and invoice fraud
Criminals impersonate suppliers (or internal finance roles) and push for a change to bank details, then apply urgency around a “deadline”. These scams work when there isn’t a fixed verification step outside email.

Helpdesk and account reset abuse
A convincing caller can pressure support teams into resetting credentials. Once an account is controlled, attackers can exploit email threads, file shares, and approvals.

Recruitment and HR manipulation
Identity fraud shows up in hiring, payroll changes, and benefit details. Experian has warned about deepfake job candidates and other AI-driven fraud patterns as threats heading into 2026.

Consumer-facing impersonation at scale
Fraud is still a major problem in the UK overall. UK Finance reported £629.3 million stolen in the first half of 2025 across authorised and unauthorised fraud, with over 2 million confirmed cases in that period. The numbers move year to year, but the volume tells you one thing: criminals keep iterating because it keeps paying.

What works against convincing AI scams

The goal is not to teach everyone to be a deepfake detective. The goal is to make “proof” independent of persuasion.

Here are five controls that consistently cut risk without slowing the business to a crawl:

  1. Verification that leaves the channel
    If the request arrives by email, verify by phone using a number from your internal directory, contract pack, or prior verified records. If the request arrives by phone, verify in writing using known contacts. One channel should never be both the request and the verification.
  2. No bank-detail changes without a second step
    Treat bank detail changes like password resets: a controlled process, not a favour. A simple rule helps: no change without a call-back to a known number, and no same-day change without management approval.
  3. Make urgency a trigger, not a reason
    Most high-impact scams use urgency and secrecy. Turn that into muscle memory: if the message pushes “today” or “do not tell anyone”, it triggers verification and pauses payment.
  4. Reduce what attackers can learn from public scraps
    Voice samples, org charts, job posts, and social media patterns help criminals build believable approaches. You don’t need to hide everything, but it’s worth reviewing what exposes direct lines, approval chains, and staff roles.
  5. Run short, realistic exercises
    Generic training ages fast. Short exercises based on your own workflows (supplier payments, HR changes, client intake) build better instincts than posters and slogans.

Why this matters for teams handling sensitive information

For organisations that work with legal, investigative, or otherwise sensitive material, the impact is wider than financial loss. A well-run scam can also lead to data exposure, reputational damage, and regulatory reporting.

The safest assumption for 2026 is that convincing fraud attempts will keep improving. What protects teams is consistent verification and disciplined handling of information, especially when the request is unexpected, flattering, urgent, or comes from a senior-sounding voice.

Conclusion

In 2025, scams became better acted. The messages got cleaner, the voices got more plausible, and the stories got more adaptive. That pushes organisations away from “spot the typo” thinking and toward process-based protection.

When identity is easy to fake, trust has to be earned through verification.

FAQs on AI scams

How do we spot AI-written phishing messages now that grammar is good?
Stop looking for writing quality. Look for pressure, unusual requests, and process breaks: a new payment route, a sudden secrecy request, or a change to a normal workflow.

Are voice calls still a safe way to verify?
They can be, but only when you control the number you dial. Call back using a known number from trusted records, not the number provided in the message.

What is the single best control against invoice fraud?
A mandatory verification step for bank detail changes using a trusted contact method, plus a rule that payments do not move until that check is complete.

Should we ban AI tools because scammers use them?
No. The practical response is governance and verification. Focus on your processes for identity, payments, and sensitive information.

If we suspect a scam attempt, what should staff do first?
Stop the action, preserve the message or call details, and escalate internally. If money has moved, contact the bank immediately. If there’s fraud to report in the UK, Action Fraud is the national reporting route.
