In a recent statement, the FBI gave a pretty bleak view of the state of play relating to cybercrime and the rapid acceleration of AI technology, issuing a warning that matters may only be getting worse.
According to the FBI, the number of criminals using AI technology in phishing attacks or malware development has risen at an alarming rate, and the impact of their operations is only growing.
The largely unregulated domain of artificial intelligence large language models (LLMs), from open-source models to tools such as OpenAI’s ChatGPT, is making cybercrime sprees more commonplace. Aided by AI, hackers can devise sophisticated phishing scams and cyber attacks that appear more human-like than ever before, at a scale law enforcement officials have never seen.
Hackers Using AI to Deceive Victims
Much of the buzz around the acceleration of AI technology in the last 12 months has been about how generative AI can make people more efficient at their jobs. However, what we mustn’t forget is that cybercriminals are equally using AI to scale up their attacks and get better at remaining undetected.
Using open-source AI, anyone can train an LLM of their choice on whatever data they choose. As a result, there has been a proliferation of black-hat chatbots whose sole purpose is to help their users launch phishing attacks, code malware, and generate misinformation designed to dupe unsuspecting victims.
Likely by design, the FBI didn’t disclose which specific AI models cybercriminals have used. One senior FBI official commented:
“We expect over time as adoption and democratization of AI models continues, these trends will increase.”
AI Phishing Emails
These AI-powered tools make it easier than ever for attackers to build webpages, write phishing email chains, and handle virtually the entire process of launching a phishing attack on the threat actor’s behalf.
In many cases, the cybercriminal doesn’t speak their target’s language, so in the past many phishing emails were easy to spot because of awkward or broken writing. Now, AI is helping these messages appear more and more legitimate.
AI Coding Malware
The FBI described how hackers with no coding skills of their own can quickly produce polymorphic malware capable of evading current cybersecurity defenses.
Malicious actors can manipulate a chatbot through its API so that it generates malware code, putting virus creation within reach of almost any would-be hacker.
Tim West, head of threat intelligence at WithSecure, said:
“ChatGPT will support software engineering for good and bad and it is an enabler and lowers the barrier for entry for the threat actors to develop malware.”
AI-Generated Deep Fakes
The FBI also noted that AI can be used to generate harmful deep fakes. Deep fakes can cause immense damage, whether it’s a seemingly legitimate press release inciting violence or an impersonation of a trusted individual that convinces victims to part with large sums of money.
The spread of misinformation makes distinguishing between AI- and human-generated content all the more important. At a recent summit at the White House, the AI giants (OpenAI, Microsoft, Meta, and Google) all vowed to develop technology for identifying AI-generated content.
If you’re worried about any of these cybersecurity issues, this article explains how to spot and avoid phishing scams.