Microsoft has confirmed that North Korean hacking groups are using AI tools like ChatGPT to trick users out of sensitive information on platforms like LinkedIn, as the country tries to funnel more funds into its nuclear weapons program.
North Korea, which prohibits internet use for its regular citizens, also may be using large language models (LLMs) to develop sophisticated new types of malware, according to recent reports, which also mention Facebook and WhatsApp as hunting grounds for the scammers.
Pyongyang’s AI scams aren’t always convincing, though. Read on to learn more about the pariah nation’s latest cyber attacks, and for practical advice on how to outsmart them.
North Korean Hackers Are Posing as LinkedIn Recruiters
While you may not associate the Democratic People’s Republic of Korea (DPRK) with high levels of tech literacy, Microsoft recently confirmed to the Financial Times (FT) that hacking groups in the country are using ChatGPT to launch increasingly sophisticated cyberattacks against US white-collar workers.
A report recently released by Microsoft – which is currently partnered with ChatGPT creator OpenAI – revealed that North Korean hackers are using large language models (LLMs) to generate content “that would likely be for use in spear-phishing campaigns”. Similar accusations were made against countries including Iran, Russia, and China.
While North Korea is famously shady about its internal operations, it’s understood that the money raised from its criminal activities is being used to finance its ballistic missile and nuclear programs, according to a UN panel of experts.
These attacks, which appear to primarily target employees from global cyber security, global defense, and crypto companies, take place on a number of social media platforms like Facebook, WhatsApp, Discord, and Telegram. However, the majority of phishing scams seem to be targeting users on the professional networking site LinkedIn.
According to Erin Plante, vice-president of cyber security company Chainalysis, North Korean hacking groups use the AI tool to create polished and credible-looking recruiter profiles on LinkedIn. “Generative AI helps with chatting, sending messages, creating images and new identities — all the things you need to build that close relationship with your target,” Plante explained.
“The attacks are getting very sophisticated — we are not talking about a badly worded email that says ‘click on this link’” – Erin Plante, vice-president of cyber security company Chainalysis
North Korean phishing campaigns are nothing new, with Lazarus – a prominent state-backed cybergang that has been active since 2009 – hitting headlines last year for posing as a Meta recruiter to target a Spanish aerospace company. However, Plante believes the scams are becoming increasingly convincing due to the rapid advancement of LLMs.
Plante highlighted one case where a senior engineer at a Japanese cryptocurrency exchange was fooled into conducting a “technical exercise” by a fake LinkedIn recruiter, which resulted in his company’s system becoming infected with North Korean spyware.
The hacking groups also invest significant amounts of time in these campaigns to make their requests seem as legitimate as possible. “These are detailed profiles on LinkedIn and other social media platforms, which they use to build relationships over weeks and months,” she explained.
North Korea Could Also Be Using AI to Develop Malware
But North Korean groups aren’t just leveraging AI to orchestrate phishing attempts.
According to Shreyas Reddy, an analyst with Seoul-based information service NK Pro, state-backed actors may also be using AI tools like ChatGPT to develop even more sophisticated malware to infiltrate their victims’ networks.
“There are safeguards in these services to prevent their use for malicious purposes”, Reddy tells the Financial Times, adding “but people have been able to find their way around them”.
While internet access is strictly forbidden for the average North Korean citizen, the country’s interest in advanced computing considerably predates prominent hacking groups like Lazarus. The so-called Hermit Kingdom began ramping up its cyber projects in the 1980s and 1990s, when Kim Il Sung first developed the country’s ballistic missile and nuclear programs.
North Korean publishers have since released hundreds of research papers on artificial intelligence, with recent studies from 2022 exploring a machine learning method called “reinforcement learning” in a wargaming scenario.
However, despite the nation’s long-standing commitment to AI, its applications aren’t foolproof, and there are a number of ways employees are able to differentiate bogus recruitment scams from the real deal, as we explore next.
How to Spot Fake North Korean LinkedIn Recruiter Scams
North Korea may be pouring funding into its AI-backed cyberattacks, but that doesn’t mean they’re always successful. The hacking groups face major language barriers when carrying out social engineering operations and are often caught out by their inability to grasp colloquialisms in foreign languages.
Scams in which North Koreans pose as job applicants have been foiled by the same issues. Several US companies have reported unusual instances where North Korean actors, masquerading as American job candidates, raised red flags by refusing to turn on their cameras, giving scripted responses, and failing to respond in proficient English.
Due to these challenges, the top tell-tale signs LinkedIn users should look out for are poor English, cultural misunderstandings, and hesitation around turning on the camera. In general, it is also good practice to inspect the recruiter profile carefully and to do your background research on the company.