Security researchers from the University of Illinois Urbana-Champaign have used OpenAI's new voice mode to build AI agents capable of carrying out phone scams against potential victims at remarkably low cost.
Everyone is trying to ascertain how the new AI era will affect cybercrime. While some researchers are trying to prove that it's possible to use existing AI tools to write new malware, or at least to make existing malware more difficult to detect, there's another, much scarier possibility. What if criminals find a way to automate scams, such as phone calls, in a way that makes them much more difficult to identify or stop?
Researchers at the University of Illinois Urbana-Champaign explored one such scenario, and not just as a theory of how it could be done. They actually did it, showing not only that it's possible but also that it comes at a very low cost.
The researchers focused on common phone scams in which victims are called and persuaded to hand over credentials and 2FA (two-factor authentication) codes. They didn't concern themselves with convincing potential victims of the call's legitimacy; they only wanted to know whether such automation is possible in the first place.
"We designed a series of agents to perform the actions necessary for common scams. Our agents consist of a base, voice-enabled LLM (GPT-4o), a set of tools that the LLM can use, and scam-specific instructions," the researchers explained. "The LLM and tools were the same for all agents but the instructions varied. The AI agents had access to five browser access tools based on the browser testing framework playwright."
Of course, GPT-4o doesn't comply with such requests out of the box, especially when asked to handle credentials. Unfortunately, jailbreaking prompts that let people bypass these restrictions are readily available online.
"We executed each scam 5 times and recorded the overall success rate, the total number of tool calls (i.e., actions) required to perform each successfully executed scam, the total call time for each successfully executed scam, and the approximate API cost for each successfully executed scam," the researchers added.
Success rates vary from one type of scam to another. For example, stealing Gmail credentials had a 60% success rate, while bank transfers and IRS impostor scams succeeded only 20% of the time. One reason is the complexity of the bank's website, which forces the agent through many more steps: the bank transfer scam took 26 actions, and the AI agent needed as long as 3 minutes to execute them.
The bank scam is also the most expensive, at $2.51 per interaction. The costs derive directly from the number of tokens spent on each interaction. At the other end, the cheapest was a Monero (cryptocurrency) scam, at just $0.12.
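Token-based pricing makes that arithmetic easy to illustrate. In the sketch below, the per-token rates and token counts are placeholder assumptions, not OpenAI's actual pricing or the paper's measurements:

```python
# Back-of-the-envelope API cost per scam attempt.
# Rates are illustrative placeholders, not actual OpenAI pricing.
PRICE_PER_1M_INPUT = 5.00    # USD per 1M input tokens (assumed)
PRICE_PER_1M_OUTPUT = 20.00  # USD per 1M output tokens (assumed)

def api_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one scam attempt, given token counts."""
    return (input_tokens * PRICE_PER_1M_INPUT
            + output_tokens * PRICE_PER_1M_OUTPUT) / 1_000_000

# A long, multi-step bank-transfer call burns far more tokens than a
# short scam, which is why its per-interaction cost is so much higher.
print(f"${api_cost(300_000, 50_000):.2f}")  # hypothetical long, 26-step run
print(f"${api_cost(16_000, 2_000):.2f}")    # hypothetical short run
```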
The study shows that a new wave of scams, powered by large language models, might be heading our way. The researchers didn't publish their agents for ethical reasons but underscored that they are not difficult to build.
Silviu is a seasoned writer who has followed the technology world for almost two decades, covering topics ranging from software to hardware and everything in between.