Generative AI has arrived, and it brings both advantages and challenges. Unfortunately, among its major beneficiaries have been scammers and purveyors of false information. While AI-generated video is still flawed, it has advanced to the point where convincingly impersonating a real person is a genuine possibility.
How Do AI Video Call Scams Function?
The concept is straightforward: utilize deepfake technology to pretend to be someone else and leverage this impersonation to extract sensitive information from you, typically financial details.
These scams often fall under the category of romance scams, targeting people seeking companionship on dating platforms. However, scammers can also impersonate celebrities, political figures, or even acquaintances, such as your boss, to boost their credibility. In these cases, they frequently use a spoofed phone number to complete the facade.
What Exactly Is a Deepfake?
Deepfakes refer to AI-generated images or videos that replicate a real person, often for deceptive purposes. While this technology has legitimate uses—mainly in entertainment and memes—it has also become a tool for executing video call scams.
What Is the Objective of Scammers?
Put simply, scammers are after information. The specifics vary, but most AI-driven scams share this goal, even though the fallout can differ dramatically.
On the less severe end, a scam might involve tricking you into disclosing sensitive information about your employer, potentially exposing client data or contracts that could be exploited by a competitor. While this is a serious situation, it may be the least damaging outcome.
In more dire scenarios, scammers may obtain your banking details or social security number, often by posing as a romantic interest in need of financial support or as an interviewer for a job you may or may not have applied for.
Once they have gathered enough information, the question becomes how quickly you recognize the scam and limit the damage.
How to Identify an AI Video Call Scam
Currently, deepfakes used in video scams exhibit numerous inconsistencies. You may notice visual glitches or odd behaviors that cast doubt on the authenticity of the video or voice.
Pay attention to mismatched facial expressions, strange shifts in the video's background, or a flat-sounding voice. Many AI systems struggle to track movement beyond a standard head-and-shoulders shot, producing unnatural glitches if the subject stands up or raises their hands.
That said, AI technology is advancing quickly. It's worth knowing the current tells, but relying on them alone is risky. Deepfake quality has improved significantly over the past year, and the gap between real and generated content is narrowing. Detection tools exist, but they cannot be depended on either, given how rapidly the technology evolves.
How to Guard Against Deepfake Scams
Instead of merely trying to spot AI-generated content, adopt a proactive security approach. Most traditional information security methods remain effective.
Verify the Caller’s Identity
Rather than trusting your ability to recognize a face or voice, rely on signals that are harder to forge.
Confirm that the call is coming from the correct phone number or account name. For platforms like Teams or Zoom, check the email that sent the meeting link to ensure it aligns with the caller’s credentials.
If you still feel uncertain, ask them to verify their identity through other channels. For instance, you might text them with something like, “Are you on this Zoom call with me right now?”
Engaging in casual conversation can also be revealing. Scammers often falter when forced off their script. If they’re impersonating someone you know, they may struggle to respond to personal questions like, “How’s Jimmy doing? I haven’t seen him since our last fishing trip,” especially if you incorporate made-up details that could throw them off.
For close contacts, consider setting up a password system, reminiscent of childhood safety practices. Agree to use a unique word (like "Marzipan") at the start of every chat, a simple but effective deterrent against impersonators.
Avoid Sharing Sensitive Information
Of course, the best defense against scams is safeguarding essential information. No one should ask for your bank details or social security number during a call; avoid sharing them online entirely. If you feel pressured to disclose this information, take it as a clear sign to disengage.
Such sensitive information should only be shared through verified methods or official documentation that affords you ample time to confirm the source’s legitimacy.
If someone shares a Google Doc or PDF link via the text chat in a Zoom or Teams call, ask them to send it from their work email instead. That gives you a chance to verify the email address is legitimate before clicking the link or entering any information.
Your best defense against scammers is to create an environment where you’re not rushed and have the opportunity to carefully assess the situation before taking action.