A decade ago, online scams were easy to laugh off. They came wrapped in poor grammar, unconvincing promises of lottery winnings, and questionable hyperlinks that no seasoned internet user would dare click.
The infamous “Nigerian prince” email became a cultural joke. But in 2025, the joke is over.
Today’s scams don’t look like scams. They sound like your boss calling on the phone. They look like your child speaking in a video message.
They arrive as polished, professional emails drafted with flawless English. Artificial intelligence has given criminals a toolset that is cheap, scalable, and alarmingly convincing.
From Clumsy Emails to Convincing Fakes
The most significant shift is credibility. In the past, bad grammar even worked in scammers' favour: it filtered out wary readers and left only the most vulnerable to reply. But generative AI has eliminated that barrier.
Large language models, the same kind of systems I explained in Behind the Buzzwords: What Is a Large Language Model Really?, can now produce text that feels indistinguishable from something a trusted colleague or institution might write.

Pair that with voice cloning and deepfake video, and the con artist no longer needs to bluff. The technology can imitate, and in some cases outperform, human communication.
A scammer doesn’t need to sound like a stranger anymore—they can sound like your manager. They can look like your CFO in a video call.
They can even hold a real-time conversation, powered by a chatbot that never slips up.
Why It’s Happening Now
AI-powered scams aren’t new, but 2025 is the year they’ve become unavoidable. The reasons are layered.
Running advanced AI models has become drastically cheaper, and open-source systems mean criminals don't need to rent access to models from the likes of OpenAI.
There’s also the dark economy of “scam-as-a-service,” where criminal groups package ready-made tools—complete with cloned voices and phishing scripts—so anyone can deploy them.
What makes this particularly dangerous is the overlap with the massive pool of stolen personal data already circulating on the dark web.
In How Cybercriminals Really Get Your Info, I described how years of data breaches have created black markets for everything from email addresses to banking records. Now, AI uses that data to tailor scams with unsettling precision.
It isn’t just a generic “Dear Customer” anymore. It’s your name, your employer, your account, woven into a message that feels personal.
The cultural context matters too. Deepfakes are no longer exotic; they’ve appeared in entertainment, advertising, and even politics.
When New Hampshire voters received AI-generated robocalls in President Biden’s voice during the 2024 primaries—something CNN later confirmed—it showed how seamlessly synthetic content could slip into public life.
The Human Cost

Consider the finance worker who wired $25 million after a video call with a “CFO” who didn’t exist. The voice, the face, the mannerisms—all fabricated.
Or think of the Canadian taxpayer who picks up a call from what sounds precisely like a CRA officer, able to answer questions in real time.
In some of the most chilling cases, parents have been sent videos that appear to show their children kidnapped, only to discover later that it was AI-generated extortion.
What connects these stories isn’t just the fraud—it’s the realism. Criminals are no longer asking you to believe the unbelievable.
They’re using technology to create scenarios that trigger urgency, panic, or trust at precisely the right moment.
Fighting Back
Defending against AI-generated scams is far more complicated than deleting spam emails used to be. Technology can help, but human awareness is still the most effective tool.
If a request feels rushed, if a call demands money, if a message tugs too sharply at your emotions, pause before acting. Verify it through another channel.
Even something as simple as calling a known number back can break the illusion.
Businesses, of course, face even higher stakes.
Training employees, enforcing dual approvals for financial transfers, and adopting zero-trust security practices are now essentials rather than optional best practices.
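To make the dual-approval idea concrete, here is a minimal sketch in Python of how a payment-release check could require two distinct approvers before a large transfer goes out. The class, threshold, and names are purely illustrative assumptions, not taken from any real payment system.

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    """Illustrative payment request; field names are hypothetical."""
    amount: float
    payee: str
    requested_by: str
    approvals: set[str] = field(default_factory=set)

APPROVAL_THRESHOLD = 10_000   # transfers at or above this need two approvers
REQUIRED_APPROVALS = 2

def approve(request: PaymentRequest, approver: str) -> None:
    # The requester cannot approve their own transfer, and approving
    # twice does not count twice (approvals is a set).
    if approver == request.requested_by:
        raise ValueError("Requester cannot approve their own transfer")
    request.approvals.add(approver)

def can_release(request: PaymentRequest) -> bool:
    # Small transfers need one approval; large ones need two distinct people.
    needed = REQUIRED_APPROVALS if request.amount >= APPROVAL_THRESHOLD else 1
    return len(request.approvals) >= needed

# Example: a deepfaked "CFO" on a video call can pressure one employee,
# but cannot supply the second, independent sign-off.
req = PaymentRequest(amount=25_000_000, payee="Unknown Ltd", requested_by="alice")
approve(req, "bob")
print(can_release(req))   # False: still needs a second approver
approve(req, "carol")
print(can_release(req))   # True
```

The design point is simple: a convincing voice or video can pressure one person in the moment, but it cannot manufacture a second, independent approval.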
Many organizations are also beginning to experiment with AI-based defences—systems designed to flag the subtle artifacts that synthetic media leaves behind. It’s an arms race, with one set of algorithms generating deception and another set trying to detect it.

The same cat-and-mouse dynamic also exists in other corners of technology.
In The Real Cost of AI: Who’s Paying for the Compute Arms Race?, I wrote about how the demand for computing power is reshaping industries. Here, the arms race is between criminals and defenders.
Both sides are armed with the same tools.
Looking Ahead
Some researchers hope watermarking techniques can help, embedding invisible signals in AI-generated content so that fakes can be spotted instantly.
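To show the general idea, here is a toy sketch in Python of the statistical test behind one family of text-watermarking proposals: the generator quietly favours a pseudo-random "green" subset of tokens, and a detector counts how many tokens land in that subset and checks whether the count is higher than chance. Everything here, from the hash-based green list to the function names, is an illustrative assumption rather than any production scheme.

```python
import hashlib
import math

def in_green_list(prev_token: str, token: str, fraction: float = 0.5) -> bool:
    # Toy rule: hash the (previous token, current token) pair and treat the
    # tokens whose hash falls in the "green" fraction as favoured. A real
    # scheme would seed this from the generator's secret key and vocabulary.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < fraction

def watermark_z_score(text: str, fraction: float = 0.5) -> float:
    # Count green tokens and compare against the binomial expectation.
    # A watermarked generator biases sampling toward green tokens, so its
    # output should score well above zero; ordinary text should not.
    tokens = text.split()
    if len(tokens) < 2:
        return 0.0
    green = sum(
        in_green_list(prev, tok, fraction)
        for prev, tok in zip(tokens, tokens[1:])
    )
    n = len(tokens) - 1
    expected = fraction * n
    std = math.sqrt(n * fraction * (1 - fraction))
    return (green - expected) / std

# Usage: a detector would flag text whose score clears some threshold.
print(round(watermark_z_score("funds must be wired before the end of day today"), 2))
```

The catch, of course, is that detection only works if the model that produced the content embedded the watermark in the first place, which is why researchers describe it as a partial fix rather than a cure.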
Others argue that only regulation will make a difference, though governments are already struggling to keep up with simpler issues, such as Canada’s digital tax rules.
It’s possible that within a few years, your phone or email client will warn you mid-conversation: “This voice may be synthetic.” Until then, the most effective defence is still the oldest one: healthy skepticism.
Conclusion
AI-generated scams aren’t just another chapter in the story of cybercrime. They’re a shift in kind, not just in degree.
By making deception faster, cheaper, and more believable than ever, AI has handed criminals a megaphone.
History shows that every technological leap—from the earliest computers to today’s generative models—brings unintended consequences.
As I explained in The Real History of AI: From Turing to Transformers, progress is never neutral. The challenge for 2025 is ensuring that AI serves as a shield, not just a weapon.

Until then, the best advice is also the simplest: trust cautiously, verify everything, and remember that not every familiar voice belongs to the person it claims to be.
(Images generated with the help of DALL-E.)

