Cybercrime is growing rapidly in both frequency and sophistication, fueling global concern about the security and integrity of digital systems. The latest addition to the criminal arsenal is large language models (LLMs) such as FraudGPT, ChaosGPT, and WormGPT, which are being misused to automate and scale illicit activities. These models can generate convincingly deceptive content, craft personalised phishing emails, and produce malware designed to evade detection, marking an emerging frontier in cyber threats.

FraudGPT: The dark web's dangerous AI


Reports indicate that FraudGPT, an artificial intelligence (AI) bot designed for malicious activities such as creating cracking tools and phishing emails, has been actively circulating on dark web forums and Telegram since July 22. Its advertised capabilities include writing malicious code, generating malware that evades detection, and pinpointing leaks and vulnerabilities. The tool is sold on a subscription basis, with rates starting at $200 per month and reaching $1,700 for an annual plan.

FraudGPT's breadth of features positions it as a comprehensive toolkit for cybercriminals, enabling the creation of realistic and persuasive phishing pages, among other harmful activities. Its existence underscores the urgent need for innovative countermeasures against rogue AI-driven tools. Cybersecurity observers warn that this is likely only the beginning of a larger problem, as malicious actors' appetite for misusing AI appears boundless.

WormGPT: The black hat AI chatbot

Earlier this month, another AI tool built for cybercrime, named WormGPT, surfaced. Promoted on numerous dark web forums as a sophisticated instrument for conducting phishing and business email compromise (BEC) attacks, WormGPT has been billed as a "blackhat alternative" to GPT models. Deployed without rigorous security measures and controls, it could cause considerable harm by generating malicious content for attacks across many corners of the internet. In a BEC attack, for example, WormGPT can be used to impersonate a legitimate entity, such as a company executive or supplier, and request a fraudulent money transfer.

WormGPT's proficiency in manipulation and deception, coupled with its training on a broad spectrum of data sources (particularly those related to malware), makes it well suited to advanced, personalised phishing attacks and a dangerous tool in the hands of hackers.

ChaosGPT and the threat landscape

ChaosGPT, meanwhile, has attracted attention for its openly sinister aims. Built with Auto-GPT, an open-source autonomous agent that runs on OpenAI's GPT-4 (the model underpinning ChatGPT), ChaosGPT gained notoriety after a Twitter bot account claiming to be the chatbot posted links to a YouTube video of a manifesto in which it outlined plans to 'eradicate human life and conquer the world.'

Crime in the time of AI

With the advent of AI, agencies across the world are integrating advanced technology into their strategies to detect, prevent, and respond to these threats. The danger, however, extends beyond criminal elements into the domain of state actors: as nations become increasingly reliant on digital infrastructure, state-sponsored cyberattacks that use LLMs to probe for critical weaknesses are becoming a real possibility.
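To make the defensive side concrete, the sketch below shows, in Python, the kind of simple heuristic scoring that mail-filtering systems have long used to flag possible BEC attempts. It is a minimal, hypothetical illustration: the Email class, the score_email function, the phrase list, and the score weights are all invented for this example, and production filters combine far richer signals, such as sender authentication (SPF/DKIM/DMARC), domain reputation, and machine-learned classifiers.

```python
import re
from dataclasses import dataclass

# Hypothetical signal lists for illustration only; real filters use far
# larger, continuously updated feature sets.
URGENCY_PHRASES = [
    "urgent",
    "immediately",
    "wire transfer",
    "keep this confidential",
    "gift cards",
]

# Throwaway or lookalike sender domains are a classic BEC tell.
SUSPICIOUS_TLDS = re.compile(r"@[\w.-]+\.(xyz|top|click|info)$")


@dataclass
class Email:
    sender: str
    subject: str
    body: str


def score_email(email: Email) -> int:
    """Return a crude risk score; higher means more BEC/phishing-like."""
    score = 0
    text = f"{email.subject} {email.body}".lower()
    # Urgency and secrecy language is a common social-engineering signal.
    for phrase in URGENCY_PHRASES:
        if phrase in text:
            score += 1
    if SUSPICIOUS_TLDS.search(email.sender.lower()):
        score += 2
    return score


if __name__ == "__main__":
    sample = Email(
        sender="ceo@examp1e-corp.xyz",
        subject="Urgent wire transfer needed today",
        body="Please process the payment immediately and keep this confidential.",
    )
    print("risk score:", score_email(sample))  # 6 for this obvious sample
```

One limitation this toy example makes plain is exactly why LLM-generated attacks worry defenders: a model like WormGPT can produce fluent, personalised messages that avoid the stock phrases and crude domains such keyword heuristics depend on, pushing detection toward behavioural and authentication-based signals instead.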