You may have heard stories of families picking up their phones to hear the voices of their sobbing, terrified loved ones, followed by those of their kidnappers demanding an instant transfer of money.
But there are no kidnappings in these scenarios. Those voices are real; they've just been cloned by scammers using AI models to generate deepfakes (as when an AI-generated version of Joe Biden's voice was used in robocalls ahead of the New Hampshire primary to discourage voters from casting a ballot). Despite how eerily authentic these voices sound, a quick call is often all it takes to confirm that no children, spouses, or parents have actually been abducted.
The problem is that by the time the truth comes out, panic-stricken families may already have handed over large sums to the fake kidnappers. Worse, as these technologies become cheaper and more ubiquitous, and as our personal data becomes easier to access, more people are likely to fall victim to these scams.
So how do you protect yourself from these scams?
How AI phone scams work
First, some background: how do scammers replicate individual voices?
While video deepfakes are much more complex to generate, audio deepfakes are easy to create, especially for a quick hit-and-run scam. If you or a loved one has posted videos on YouTube or TikTok, for example, a scammer needs as little as three seconds of that recording to clone a voice. Once they have that clone, scammers can make it say just about anything.
OpenAI created a voice cloning service called Voice Engine but paused public access to it in March, citing the potential for misuse. Even so, several free voice cloning tools of varying quality are already available on GitHub.
However, there are guardrailed versions of this technology, too. Using your own voice or one you have legal access to, Voice AI company ElevenLabs lets you create 30 minutes of cloned audio from a one-minute sample. Subscription tiers enable users to add multiple voices, clone a voice in a different language, and get more minutes of cloned audio — plus, the company has several security checks in place to prevent fraudulent cloning.
In the right circumstances, AI voice cloning is useful. ElevenLabs offers an impressively wide range of synthetic voices from all over the world and in different languages that you can use with just text prompts, which could help many industries reach a variety of audiences more easily.
As voice AI improves, telltale flaws like irregular pauses and latency are fading, making fakes harder to spot, especially since scammers can make their calls appear to come from a legitimate number. Here's what you can do to protect yourself now and in the future.
1. Ignore suspicious calls
It may sound obvious, but the first step to avoiding AI phone scams is to ignore calls from unknown numbers. Sure, it may be simple enough to answer, determine the call is spam, and hang up, but doing so risks leaking your voice data.
Scammers can use these calls for voice phishing: calling you specifically to capture the few seconds of audio needed to clone your voice. If you don't recognize the number, decline the call without saying anything and look it up online, which can help you determine whether the caller is legitimate. If you do feel like answering to check, say as little as possible.
You probably already know not to trust anyone who calls asking for personal or banking information. You can always verify a call's authenticity by contacting the institution directly, either by phone or through another verified channel like text, support chat, or email.
Thankfully, most cell carriers now pre-screen calls from unknown numbers and label them as potential spam, doing some of the work for you.
2. Call your relatives
If you get an alarming call that sounds like someone you know, the quickest and easiest way to debunk an AI kidnapping scam is to verify that your loved one is safe via a text or phone call. That may be difficult if you're panicked or don't have another phone handy, but remember that you can send a text while staying on the line with the likely scammer.
3. Establish a code word
With loved ones, especially children, agree on a shared secret word to use if they're in trouble but can't talk. If you get a suspicious call and your alleged loved one can't produce the code word, you'll know it could be a scam.
4. Ask questions
You can also ask the scammer posing as your loved one for a specific detail, such as what they had for dinner last night, while you try to reach your loved one separately. Don't budge: chances are the scammer will throw in the towel and hang up.
5. Be conscious of what you post
Minimize your digital footprint on social media and publicly available sites. You can also apply digital watermarks to your content so that tampering is easier to detect. This isn't foolproof, but it's the next best thing until we find a way to protect metadata from being altered.
If you plan on uploading any audio or video clip to the internet, consider running it through Antifake, a free software tool developed by researchers at Washington University in St. Louis.
The software, whose source code is available on GitHub, infuses the audio with additional sounds and disruptions. These won't change what the original speaker sounds like to humans, but they make the audio sound completely different to an AI cloning system, thwarting efforts to clone it.
6. Don’t rely on deepfake detectors
Several services, including Pindrop Security, AI or Not, and AI Voice Detector, claim to be able to detect AI-manipulated audio. However, most require a subscription fee, and some experts don’t think they’re even worth your while. V.S. Subrahmanian, a Northwestern University computer science professor, tested 14 publicly available detection tools. “You cannot rely on audio deepfake detectors today, and I cannot recommend one for use,” he told Poynter.
“I would say no single tool is considered fully reliable yet for the general public to detect deepfake audio,” added Manjeet Rege, director of the Center for Applied Artificial Intelligence at the University of St. Thomas. “A combined approach using multiple detection methods is what I will advise at this stage.”
In the meantime, computer scientists are working on better deepfake detection systems, like the University at Buffalo Media Forensic Lab's DeepFake-O-Meter, set to launch soon. Until a reliable, publicly available service arrives, trust your judgment and follow the steps above to protect yourself and your loved ones.