Researchers have warned that a ChatGPT-style AI tool with "no ethical boundaries or limitations" is giving hackers the ability to carry out attacks on an unprecedented scale.
Cyber security firm SlashNext observed the generative AI tool, WormGPT, being advertised on cybercrime forums on the dark web. The firm describes it as a sophisticated AI model capable of producing human-like text, making it a valuable tool for hacking campaigns.
"This tool is positioned as a blackhat alternative to GPT models, with a specific focus on facilitating malicious activities," SlashNext explained in a blog post. "WormGPT was allegedly trained on a wide range of data sources, with a particular emphasis on malware-related data."
The researchers conducted tests with WormGPT, instructing it to produce an email designed to pressure an unsuspecting account manager into paying a fraudulent invoice.
Leading AI tools like OpenAI's ChatGPT and Google's Bard have built-in safeguards to prevent people from misusing the technology for malicious purposes. WormGPT, by contrast, is reportedly designed specifically to facilitate criminal activity.
According to the researchers, WormGPT produced an email that was not only highly persuasive but also strategically cunning, demonstrating its potential for sophisticated phishing attacks.
Screenshots shared by WormGPT's anonymous developer on a hacking forum show the AI bot performing various services, including writing code for malware attacks and composing emails for phishing attacks.
WormGPT’s creator described it as “the biggest enemy of the well-known ChatGPT”, as it allows users to “do all sorts of illegal stuff”.
A recent report from the law enforcement agency Europol warned that large language models (LLMs) like ChatGPT could be exploited by cyber criminals to commit fraud, impersonation or social engineering attacks.
"ChatGPT's ability to draft highly authentic texts on the basis of a user prompt makes it an extremely useful tool for phishing purposes," Europol stated.
According to Europol, LLMs allow cybercriminals to carry out cyber attacks faster, more convincingly, and at significantly greater scale. Whereas basic phishing scams were previously easier to detect due to obvious grammatical and spelling errors, it is now possible to impersonate an organisation or individual in a highly realistic manner, even with only a basic grasp of English.