Kaspersky Digital Footprint Intelligence experts have uncovered a series of dark web sites that appear to sell fake access to the malicious AI tool WormGPT. The sites exhibit phishing-like characteristics: they vary in design, pricing and accepted payment currencies, and some demand upfront payment for access to a trial version.
This trend, while not an immediate threat to users, underscores the rising popularity of black hat alternatives to GPT models and emphasises the need for robust cybersecurity solutions, says Kaspersky.
The cybercriminal community has started leveraging AI capabilities in its activities, and the dark net now offers a range of language models designed specifically for hacking purposes such as business email compromise (BEC), malware creation and phishing attacks. One such model is WormGPT, a nefarious version of ChatGPT that, unlike its legitimate counterpart, lacks built-in limitations, making it an effective tool for cybercriminals looking to carry out attacks such as BEC.
Phishers and scammers often exploit the popularity of certain products and brands, and WormGPT is no exception. On dark net forums and in illicit Telegram channels, Kaspersky experts have found websites and ads that offer fake access to the malicious AI tool and target other cybercriminals; these resources are apparently phishing sites.
These websites are designed like typical phishing pages and differ significantly from one another in design and pricing. Payment methods also vary, ranging from cryptocurrencies, as originally proposed by the author of WormGPT, to credit cards and bank transfers.
Moreover, the suspected phishing pages advertise a trial version, but access is granted only after payment.
“On the dark web, it is impossible to distinguish malicious resources with absolute certainty,” says Alisa Kulishenko, digital footprint analyst at Kaspersky. “However, there is plenty of indirect evidence suggesting that the discovered websites are indeed phishing pages. It is well known that cybercriminals often deceive each other, and these recent phishing attempts may indicate the level of popularity these malicious AI tools have reached within the cybercriminal community. Such models, to some extent, facilitate the automation of attacks, thereby emphasising the increasing importance of trusted cybersecurity solutions.”