Cybercrooks are capitalizing on AI technology to improve their fraudulent schemes, the FBI said in a public service announcement.
“The FBI is warning the public that criminals exploit generative artificial intelligence (AI) to commit fraud on a larger scale which increases the believability of their schemes,” the alert reads.
Generative AI, like OpenAI’s popular GPTs, has taken the world by storm, making it easy for anyone to craft eye-catching content, including text, audio, photos and videos. Cybercrooks have been quick to hop on the bandwagon for illicit gain.
The bureau notes that generative AI reduces the time and effort criminals must expend to deceive their targets.
“Generative AI takes what it has learned from examples input by a user and synthesizes something entirely new based on that information,” according to the PSA. “These tools assist with content creation and can correct for human errors that might otherwise serve as warning signs of fraud.”
While AI-generated content is not necessarily illegal, synthetic content can be used to facilitate crimes, such as fraud and extortion, the agency notes.
“Since it can be difficult to identify when content is AI-generated, the FBI is providing the following examples of how criminals may use generative AI in their fraud schemes to increase public recognition and scrutiny,” reads the notice.
Criminals engaging in social engineering schemes like phishing, romance scams and investment fraud use AI to help create:
· Voluminous fictitious social media profiles to trick victims into sending money
· Message bursts to reach a wider audience with believable content
· Language translations to limit grammatical or spelling errors
· Content for fraudulent websites for cryptocurrency investment fraud and other investment schemes
· AI-powered chatbots embedded in fraudulent websites to prompt victims to click on malicious links
AI-generated images help criminals create credible social media profile photos, identification documents, and other images to support fraud. The FBI lists some common scenarios:
· Fictitious social media profiles in social engineering, spear phishing, romance schemes, confidence fraud, and investment fraud
· Fraudulent identification documents like driver's licenses or credentials (law enforcement, government, or banking) for identity fraud and impersonation schemes
· Photos to share with victims in private communications to convince them they are speaking to a real person
· Images of celebrities or social media personalities promoting counterfeit products or non-delivery schemes
· Images depicting natural disasters or global conflict to elicit donations to fake charities
· Images for use in market manipulation schemes
· Fake compromising photos of a person to demand payment in extortion schemes
AI-generated audio, one of the most successful attack avenues in AI-enabled crime today, can be used to impersonate public figures or personal contacts to elicit payments. The bureau lists some common scenarios, including:
· Short audio clips impersonating a family member in an emergency, asking for immediate financial assistance or demanding ransom
· AI-generated audio clips of impersonated individuals to bypass voice authentication and gain access to bank accounts
· AI-generated videos used as fictitious or misleading promotional materials for investment fraud
Readers of our blog might remember the story of the Trapp family in the San Francisco Bay Area who suffered this trickery firsthand when they got a frantic call from their “son” saying he’d been in a car accident, injured a pregnant woman, and needed urgent help.
Read: ‘Mom, I Crashed the Car!’: Scammers Clone Son’s Voice to Ask Parents for $15,000 Bailout
The feds advise the public to employ these tips whenever faced with a questionable call, message, or any other content that raises suspicion:
· Create a secret word or phrase with your family to verify their identity
· Look for subtle imperfections in images and videos, such as distorted hands or feet, unrealistic teeth or eyes, indistinct or irregular faces, unrealistic accessories such as glasses or jewelry, inaccurate shadows, watermarks, lag time, voice matching, and unrealistic movements
· Listen closely to the tone and word choice; they can help you distinguish between a legitimate phone call from a loved one and an AI-generated vocal clone
· Limit sharing your image or voice online, make social media accounts private, and limit followers to people you know to minimize fraudsters' capabilities to use generative AI software to create fraudulent identities for social engineering
· Verify the identity of the person calling you: hang up the phone, look up the contact number of the bank or organization purportedly calling you, and call that number directly
· Never share sensitive information with people you have met only online or over the phone
· Do not send money, gift cards, cryptocurrency, or other assets to people you do not know or have met only online or over the phone
· Stay vigilant - Always question unexpected calls that instill a sense of urgency
· Stay informed - Follow cybersecurity news to learn how scammers operate, and know that AI is now used for malicious purposes
· Strengthen verification protocols - Use multi-factor authentication (MFA/2FA) for your accounts
· Use a scam detection tool - Whenever you're suspicious of a phone call, email or text, consider using Scamio, our clever scam-fighting chatbot designed specifically to combat socially engineered fraud attacks. Simply describe the situation to Scamio and let it guide you to safety.
Filip has 15 years of experience in technology journalism. In recent years, he has turned his focus to cybersecurity in his role as Information Security Analyst at Bitdefender.