Protect Yourself From AI Voice Cloning Scams (2024)

Many members of the younger generations avoid answering phone calls. Part of this avoidance is personal, as voice calls can cause anxiety, but there is more to these rejections than nervousness. Threat actors are always looking for new ways to manipulate and steal from their victims, and phone calls have become a new frontier for cybercriminal fraud.

Artificial intelligence scams have risen recently, fueled by broad technological developments. The newest of these scams is a twist on an old scheme: rather than calling someone and extracting information through simple deception, criminals can now copy another person's voice and aim it at their target using AI voice cloning technology. These scams are a significant concern for security experts and the public because scammers can impersonate public figures to manipulate the masses, as was the case with the Joe Biden robocalls in New Hampshire earlier this year, where a criminal impersonated the President and urged members of the public not to vote.

This article provides an overview of how these scams work, offers real-life examples, demonstrates the risks these schemes pose to consumers and organizations, and lays out tactics for defending against them.

What Are AI Scams?

Artificial intelligence (AI) scams have taken different forms throughout their history. Spam calls, texts, and emails have been around for decades, but voice cloning calls are a new development in an old practice. AI scams rely on technology to do the heavy lifting for scammers: they set it up, and AI does the dirty work of contacting potential victims or leading them down malicious paths, such as directing them to fake websites or sending them fictitious messages like fake endorsements.

Voice AI scams add another layer of manipulation to a scammer's plot; with the right voice, they can trick a victim into believing an emergency is real or that there will be genuine consequences for disobeying the scammer, as in cases of authority impersonation. These schemes are dangerous because they can target anyone, and the more voice samples a scammer can access for a specific individual, the harder their scam becomes to recognize.

How Voice Cloning AI Works

The technology behind AI voice cloning differs from service to service, but most involve machine learning and speech synthesis. In a typical scheme, a scammer calls someone associated with their specific target; they might spoof a phone number to trick the associate into picking up. After this, they either stay silent on the call, prompting the associate to speak freely (“Hello? Who is this? Can I help you?”), or they impersonate someone to encourage a conversation (“Hi, I’m from the HOA. Do you have a moment to speak?”).

The malicious actor usually hangs up once the associate has supplied enough phonetic material. The amount of speech they capture correlates directly with how convincing their voice scam becomes. Scammers can then use speech synthesis to fabricate conversations and impersonate others, even speaking words the associate never said. Moreover, because voice AI is available in online marketplaces, malicious actors have many ways to achieve their goals: some voice AI needs only a single word to create convincing fakes, while others require a longer reference dialogue from which the AI can pull phonetic material.
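
To make the mechanics concrete, the snippet below shows how little code modern speech synthesis requires. It is a minimal sketch assuming the open-source Coqui TTS package and its XTTS v2 model are installed; the file names are hypothetical placeholders. The key point is that the synthesized sentence never has to appear in the reference recording.

```python
# Minimal voice-cloning sketch using the open-source Coqui TTS library.
# Assumptions: the `TTS` package is installed and the XTTS v2 model is
# available; "reference_clip.wav" and "cloned_output.wav" are
# hypothetical file names used only for illustration.
from TTS.api import TTS

# Load a multilingual model capable of zero-shot voice cloning.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A short clip of the target speaker conditions the model; the text
# below is rendered in that voice even though the speaker never
# said those words.
tts.tts_to_file(
    text="This sentence was never spoken by the reference speaker.",
    speaker_wav="reference_clip.wav",  # harvested speech sample
    language="en",
    file_path="cloned_output.wav",
)
```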

Types of AI Scams

AI scams appear in many forms, drawing on manipulative tactics that have succeeded in past criminal operations. The most successful of these plots come in one of four types: voice phishing, deepfake audio, impersonations, and fictitious emergencies. Learning about these attack vectors can help mitigate these voice-based traps.

Voice Phishing Scams

Some AI phone scams utilize old-school phishing tactics to trick their victims. Phishing has been around for most of the internet's history; the scam requires the criminal to entice victims into sharing information with them. Scammers often trick victims into compliance by spoofing marketing emails and social media posts and manipulating the victim's empathy. When AI voices are added to these phishing attacks, victims are more likely to take the bait, because hearing a person's voice adds a presumed trust and authority that smishing attacks do not have.

Deepfake Audio Fraud

Deepfakes are media, usually videos, images, or voices, that impersonate another person. In movies, deepfakes are used to de-age actors or project their likeness in some scenes for testing purposes. However, these AI clones are also valuable tools for creating convincing communications. Deepfake audio might include the recorded person’s laugh, accent, sneer, inflection, or other personal aspects of a speech pattern. Moreover, as AI voice cloning becomes more advanced, deepfakes make distinguishing between a person’s best friend and an opportunistic scammer more difficult.

Impersonation of Authorities

Other cyber attacks involve impersonation and, typically, an appeal to authority. Law enforcement and organization officials are often public figures; this publicity allows them to “reach” their communities personally, but it also provides near-endless material for AI impersonation. Consider the Joe Biden example above: there is no telling how many people the malicious caller reached, but the outcome may have been enough to change the actions of voters in New Hampshire, and when every vote counts, such a scam can have severe consequences for the public.

Emergency Scam Calls

The most manipulative voice scams use fabricated emergencies to pressure victims into compliance. For example, suppose someone receives a call saying a loved one has been kidnapped, injured in an accident, or taken to jail. Past the initial shock and confusion, most people jump into “help” mode; paying a ransom, handing over personal and medical information, or sending bail money all seem like reasonable requests in that moment. However, consumers and employees can avoid becoming victims of these schemes by verifying who they are talking to and recognizing the warning signs.

Tips to Protect Yourself From AI Voice Scams

The best way to avoid falling victim to artificial voice scammers is to approach every unexpected call with vigilance and skepticism. Avoid oversharing information with anyone outside your closest family and friends, and enable the highest privacy settings on all social media accounts. Cease communication with the suspicious party when something seems strange, and notify others of the incident.

Don’t Post About Upcoming Trips

Burglars choose their victims, in part, by what their targets' lives on social media display. If a family posts about their four-week cruise and how their house will be empty during that time, they may attract unwanted attention from physical threats to their property. The same dynamic applies to AI scams, because voice cloning can trick unsuspecting travelers into revealing their information; a scammer might call “from” the company that runs the cruise, “confirming” personal details or charging last-minute fees (or else).

Only Share Personal Information with People You Can Trust

Individuals should always be skeptical of the people they meet online or through unsolicited communications like letters or calls. They should also use extreme caution when offering personal information to sweepstakes, job opportunities, and confirmation or cancellation verifications, and when interacting with advertisements online. Unless the person you're sharing the data with is close to you in real life, there is rarely a valid reason to reveal it.

Reach Out Immediately

Depending on the scam, a victim might feel the consequences immediately or after a delay, which is part of why it's vital that potential victims alert their bank or relevant authorities as soon as they suspect a scheme. After notifying their financial institutions, they should consider notifying their local authorities or filing a report with the Federal Trade Commission.

Use a Family Password

For added security, consider introducing a verification password within personal circles, as sketched below. The practice is especially worthwhile when members of your circle are spread worldwide or have previously been cyber victims; information that has been misused once will likely be misused again. Agreeing on a password makes it far harder for criminals to manipulate their way into sensitive data.
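
As a toy illustration of the idea, the sketch below checks a caller-supplied password against a stored hash of the family secret. Everything here (the secret, the storage scheme) is hypothetical, and in practice the check happens in a human conversation rather than in code.

```python
# Toy illustration of the "family password" idea: a shared secret,
# agreed on offline, that a caller must produce before any urgent
# request is honored. The secret and hashing scheme are hypothetical.
import hashlib
import hmac

# Store only a hash of the secret, agreed on in person.
FAMILY_SECRET_HASH = hashlib.sha256(b"blue-heron-42").hexdigest()

def caller_is_verified(spoken_password: str) -> bool:
    """Return True only if the caller produced the shared secret."""
    candidate = hashlib.sha256(spoken_password.encode()).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, FAMILY_SECRET_HASH)

print(caller_is_verified("blue-heron-42"))  # True  -> proceed
print(caller_is_verified("letmein"))        # False -> hang up
```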

End the Call

Phone scams work because the victims don't hang up. There are often clues that a call is a scam, though they may be difficult to catch if the voice cloning is advanced. The caller may not know specific information they should, as with the family password above; they may speak in circles, forcing the victim to keep engaging to learn about a fictitious situation; they may make strange demands, such as requesting gift cards or accounts to wire money to; or they may threaten or emotionally isolate their victim, convincing them that hanging up will have consequences.

How to Detect and Protect Against AI Scams

Here are some effective strategies for detecting and protecting yourself from artificial intelligence scams:

Links in an Email or Text Message

Unsolicited messages with links or attachments from unknown sources are among the most significant red flags for phishing scams. AI schemes go beyond basic data stealing by automatically deploying malware into their victims' networks. Other links direct the user to a chatbot or an online “service representative.” The bot can look official, and unless the user knows about the threat, they might fall for it.
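
As a rough illustration, the sketch below performs the kind of naive link check a spam filter might start with, flagging any URL whose registered domain is not on a trusted allowlist. The allowlist and the sample message are hypothetical, and a production filter would be far more sophisticated (handling multi-part TLDs like .co.uk, redirects, and lookalike characters).

```python
# Naive link-screening sketch: flag URLs whose registered domain is
# not on a trusted allowlist. The allowlist and message are
# hypothetical examples, not real services.
import re
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-bank.com", "irs.gov"}  # hypothetical allowlist

def flag_suspicious_links(message: str) -> list[str]:
    """Return links whose registered domain is not on the allowlist."""
    urls = re.findall(r"https?://\S+", message)
    flagged = []
    for url in urls:
        host = urlparse(url).hostname or ""
        # Compare the last two labels so "login.example-bank.com"
        # passes but "example-bank.com.evil.net" does not.
        domain = ".".join(host.split(".")[-2:])
        if domain not in TRUSTED_DOMAINS:
            flagged.append(url)
    return flagged

msg = "Your account is locked. Verify at https://example-bank.com.evil.net/login"
print(flag_suspicious_links(msg))  # -> the lookalike URL is flagged
```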

Extraction of Personal Information

If a caller asks for personal information, be cautious: hang up and call back via the organization's public phone number whenever possible. Scammers may call from a spoofed number to trick potential victims into thinking they're legitimate. If a caller begins requesting personal detail verifications, refuse to provide credentials until they prove authentic. A caller citing a badge ID, case number, or other unverifiable credentials is a sign of a suspicious situation.

Extreme Situations

AI can also appear in some types of identity theft scams. As the extreme situations above indicate, these voice cloning schemes trick targets into believing that a loved one is in trouble. In that heightened emotional state, victims begin to comply with scammers' requests, primarily because they don't stop to question what the threat actor is telling them.

Impact on Businesses and Individuals

AI scam calls carry consequences for victimized businesses and individuals alike. These impacts may include financial losses, emotional distress, the loss of trust in previously secure communication methods, and reputational damage. While individuals can mitigate risks by practicing common-sense cybersecurity, organizations must implement company-wide solutions to battle voice cloning.

While the technology remains young, many ideas for mitigating voice cloning are becoming available to companies. Some organizations use authenticators to “watermark” a voice, so AI clones can be detected by their missing signatures. Other technology uses algorithms to assess the “liveness” of a potential AI caller, as in the sketch below.
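
As a rough sketch of what such a detector might look like under the hood, the snippet below extracts simple spectral statistics from a clip and scores them with a pre-trained classifier. It assumes the librosa, NumPy, scikit-learn, and joblib packages are installed; "detector.joblib" stands in for a hypothetical model trained on labeled real-versus-synthetic speech, not any real product, and deployed liveness systems use far richer features.

```python
# Feature-extraction sketch behind many audio deepfake detectors.
# Assumptions: librosa, numpy, scikit-learn, and joblib are installed;
# "detector.joblib" is a hypothetical pre-trained classifier.
import librosa
import numpy as np
from joblib import load

def spectral_features(wav_path: str) -> np.ndarray:
    """Summarize a clip as the mean/std of its MFCCs; synthetic speech
    often shows subtly different spectral statistics than live speech."""
    audio, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# A deployed detector would feed these features to a model trained on
# labeled real-vs-synthetic speech.
detector = load("detector.joblib")  # hypothetical trained model
score = detector.predict_proba([spectral_features("caller.wav")])[0, 1]
print(f"Probability the clip is synthetic: {score:.2f}")
```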

Staying informed about the threat is essential to avoiding an AI voice clone scam. By learning how these scams work, what their signatures are, and how to avoid them, consumers and organizations can encourage vigilance against these threats. Ongoing education and the adoption of protective measures are the best tools for thwarting these criminals and their schemes.
