Artificial intelligence and machine learning are extensively used in cyber defense, but what happens when these technologies backfire and hackers turn them to crime? While companies concentrate on the capabilities and benefits AI can bring to everyday life, they have paid far less attention to how cybercriminals can use it to create even more advanced and sophisticated threats.
The weaponisation of artificial intelligence has been widely debated. The rapid growth of AI technologies and systems may generate new threats and new attack vectors, and hackers could end up exploiting their vulnerabilities to undermine the security of digital infrastructures. Malicious actors are constantly refining their tactics, devising new, targeted attack strategies that could prove more powerful, while getting better at hiding their tracks.
Some 82 percent of IT security professionals fear that artificial intelligence technology could be used for attacks against their companies, leading to data loss and reputational damage, according to a report from Neustar. Should their systems be hit by a cyberattack, their biggest fears are data theft (50%), a decrease in customer trust (19%), unstable business performance (16%) and unforeseen cost implications (16%). The most feared types of threats include DDoS attacks (22%), system compromise (20%) and ransomware (15%), according to the report. And for good reason: almost half of respondents were hit by DDoS attacks in Q3 this year.
“Organizations know the benefits, but they are also aware that today's attackers have unique capabilities to cause destruction with that same technology,” said Rodney Joffe, Head of the Neustar International Security Council and Neustar Senior Vice President and Fellow. “As a result, they've come to a point where they're unsure if AI is a friend or foe.”
According to Dragoș Gavriluț, Antimalware Research Manager at Bitdefender, it would be a mistake for hackers not to incorporate machine learning algorithms in their schemes.
“Just as we use machine learning to learn from their behavior patterns, hackers can use the same algorithms to learn from ours; they can try to develop attack methods using machine learning that are much harder for us to detect,” he explained.
Cybercriminals constantly look for ways to slip past security software and catch defenders off guard. Enterprises and IT executives must therefore learn fast, improving anomaly detection rates and sharpening their cybersecurity strategies to prevent attacks and contain the aftermath.
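To make the anomaly-detection idea concrete, here is a minimal sketch of the kind of unsupervised approach defenders often use, an Isolation Forest trained on baseline behavior and asked to flag sessions that deviate from it. This is not Bitdefender's or Neustar's method; the session features, values and thresholds below are purely illustrative assumptions.

```python
# Minimal sketch: unsupervised anomaly detection on behavioral telemetry.
# The feature set (logins per hour, MB uploaded, distinct hosts contacted)
# is a hypothetical example, not a real product's schema.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" sessions clustered around typical behavior.
normal_sessions = rng.normal(loc=[5, 20, 3], scale=[1, 5, 1], size=(500, 3))

# Two sessions with burst logins, large uploads and many contacted hosts.
suspicious_sessions = np.array([
    [40, 500, 25],
    [60, 900, 40],
])

# Fit the model on baseline behavior only.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# predict() returns 1 for inliers and -1 for anomalies.
print(model.predict(suspicious_sessions))   # expected: [-1 -1]
print(model.predict(normal_sessions[:3]))   # mostly:   [ 1  1  1]
```

The same logic cuts both ways, which is the point Gavriluț makes above: an attacker who can observe or approximate a defender's baseline can train models of their own to craft activity that stays inside it.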
From a young age, Luana knew she wanted to become a writer. After having addressed topics such as NFC, startups, and tech innovation, she has now shifted focus to internet security, with a keen interest in smart homes and IoT threats. Luana is a supporter of women in tech and has a passion for entrepreneurship, technology, and startup culture.