Deepfake Scam

A deepfake scam uses AI to create fake audio, video, or images that convincingly impersonate someone, so scammers can gain trust, create urgency, and pressure people to send money or share information.

What Is a Deepfake Scam?

Deepfake scams rely on AI-generated media, not just written messages or standard phone calls.

Using publicly available audio, video, or images—often pulled from social media, voicemails, or online videos—scammers can train AI tools to imitate how a person sounds or looks. The result can be a message that closely resembles a real individual, even though it was never created by them.

Unlike traditional impersonation scams, deepfake scams don’t just claim to be someone; they simulate that person’s recognizable traits, such as voice or appearance.

The Different Types of Deepfakes Used in Scams

Source: Facebook/Meta. Scammers have reportedly used video deepfakes in job interviews at tech companies in an attempt to gain access to company data.

Not all deepfakes are the same, and some are far more common in scams than others.

Audio Deepfakes (Voice Cloning): The Most Common

Audio deepfakes use AI to replicate a person’s voice. These are currently the most common and most practical deepfakes used in scams.

They are often used in:

  • Phone calls
  • Voicemails
  • Voice messages

Audio deepfakes are effective because they:

  • Require less data to create
  • Don’t require live video interaction
  • Can be deployed quickly and at scale
  • Feel highly personal

Voice cloning is frequently used in family emergency scams and workplace payment scams.

Video Deepfakes: Less Common, but Growing

Video deepfakes use AI to create fake videos that appear to show a real person speaking or acting.

In scams, video deepfakes are:

  • Less common than audio
  • Usually pre-recorded, not live
  • More often used in:
    • Fake investment promotions
    • Celebrity endorsement scams
    • Fraudulent giveaway videos

While video deepfakes attract more attention in the media, they are currently less practical for direct, real-time scam interactions, though this may change as technology continues to develop.

AI-Generated Images: A Supporting Role in Scams

AI-generated images are often used to:

  • Create fake profiles
  • Support impersonation on social media
  • Add credibility to scam messages

These images may be combined with audio or messaging scams to reinforce trust.

Common Scams Where Deepfakes Have Been Used

Source: IC3 deepfake warning infographic

While deepfake scams are still emerging, consumer protection agencies, including the FBI’s Internet Crime Complaint Center (IC3), have warned that AI-generated audio and video are increasingly being used to enhance impersonation scams, payment scams, and other fraud.

Family Emergency Scams Using AI Voice

One of the most reported uses of deepfake technology involves AI-generated voice impersonation of family members. In these scams, a person receives a call or voicemail that sounds like a loved one claiming to be in trouble and asking for urgent financial help. The emotional realism of the voice can make it difficult to pause and verify before acting.

Workplace and Executive Payment Scams

Deepfake audio has also been reported in workplace scams, where employees receive phone calls that sound like senior executives or managers requesting urgent payments or account access. Because the voice appears familiar and authoritative, these requests may be acted on quickly before internal verification occurs.

Investment and Giveaway Scams Using Fake Videos

Some scammers use AI-generated or manipulated videos of celebrities, influencers, or public figures to promote fake investments, giveaways, or causes. These videos are often shared on social media or messaging platforms and are designed to create credibility through visual recognition.

Support or Security Impersonation Scams

In some cases, deepfake audio is used alongside traditional impersonation tactics. Scammers pose as customer support or security teams, using AI-generated voices to sound professional and authoritative while requesting verification steps, payments, or account access.

These examples show that deepfakes are most often used to enhance existing scam types, rather than create entirely new ones.

How to Tell If a Message May Be a Deepfake

Deepfake scams often rely on familiarity rather than obvious warning signs. While some are very convincing, there are still indicators that audio or video may not be genuine.

A message may involve a deepfake if:

  • The voice or video feels slightly “off”
    The tone, pacing, or phrasing may be unusual for the person, even if it sounds close.
  • The communication avoids natural back-and-forth
    The message may rely on prerecorded audio or limit interaction, avoiding follow-up questions.
  • The request depends on recognition rather than verification
    Instead of providing details only the real person would know, the request relies on you recognizing the voice or face.
  • The message sounds scripted or repetitive
    Phrases may repeat, or the communication may stay within a narrow script.
  • The request is out of character
    Urgent payment requests, secrecy, or unusual favors may not align with the real person’s behavior.

Because deepfakes are designed to look and sound real, verification matters more than realism.

Why Deepfake Scams Are Dangerous

Deepfake scams are dangerous because they exploit recognition, not just trust.

Hearing a familiar voice or seeing a familiar face can:

  • Short-circuit normal verification habits
  • Trigger strong emotional reactions
  • Make unusual requests feel legitimate
  • Lead to rushed decisions before checking details

This can result in financial loss, account compromise, or emotional distress, especially when scammers impersonate family members or authority figures.

How to Protect Yourself

  • Pause before acting on urgent audio or video requests
  • Verify requests through a separate, trusted channel (call back using a known number, message through a verified contact)
  • Be cautious of requests for secrecy or immediate payment
  • Avoid sending money or sensitive information based on audio or video alone
  • Talk the situation through with a trusted person before acting
  • Use a trusted free scam checker like Scamwise to review suspicious messages, calls, or emails before responding

When realism increases, verification becomes even more important.

FAQs

What is a deepfake scam?
A deepfake scam uses AI-generated audio, video, or images to impersonate a real person and trick someone into taking action.

Are deepfake scams usually videos?
No. While deepfakes are often associated with fake videos, audio deepfakes (voice cloning) are currently more common in real-world scams.

Can deepfake scams really sound or look real?
Yes. AI-generated voices and videos can closely resemble real people, which is why independent verification is essential.

How is a deepfake scam different from impersonation?
Impersonation involves claiming to be someone. Deepfake scams use AI to simulate how that person sounds or looks.