Blog

WormGPT and PoisonGPT: Beware Malicious Generative AI Models

On June 20, 2023, the US introduced a prospective bill calling for a national commission for regulating AI; largely a response to the rise of generative AI models and the perceived threat to national security presented by their rise. The commission's tenure would be to mitigate the risks of AI while also protecting the US's "leadership in artificial intelligence innovation and the opportunities such innovation may bring." 

The European Union (EU) has already enacted its own regulation, the AI Act, which is expected to go into effect this year, in 2023. Canada's top cybersecurity official has also publicly reported evidence of AI being used to enable cyber-attacks and spread misinformation. Most recently, on Friday, July 21, the US government announced a deal with top American tech companies including Amazon, Google, Meta, and Microsoft to voluntarily self-regulate by meeting a set of standardized safeguards to balance the benefits and risks of AI.

In a previous article, we discussed the top risks that an organization faces when deploying LLM Generative AI. The threats include Prompt Injection, where attackers manipulate the model to bypass filters or perform unauthorized actions, Over-reliance on LLM-generated Content, Inadequate AI Alignment, causing undesired consequences or legal breaches, and Training Data Poisoning, where malicious data or configurations introduce backdoors or biases. 

In this article, we will focus on the risks to end users and discuss some interesting examples of malicious LLM identified in the landscape thus far. 

WormGPT

WormGPT is an AI module based on the GPT-J transformer-based machine-learning (ML) architecture similar to ChatGPT that champions itself as the "GPT Alternative For BlackHat Hackers". While ChatGPT has ethical safeguards in place, WormGPT is designed to be ethically null. LLM in general have been singled out for their potential to enable would-be cyber-criminals to craft more effective phishing campaigns and write malware, and to be fair ChatGPT has been shown to fail some ethical tests [1][2]

However, WormGPT takes the opposite route, allegedly having been trained using actual malware samples to support the generation of software exploits that enable hackers to conduct illegal campaigns. The use of LLM technology has been to criticism related to adjacent ethical concerns such as copyright, privacy, misuse, bias, and transparency, and WormGPT is evidence that some are not only willing but eager to breach those ethical boundaries.

PoisonGPT

PoisonGPT, also a variant of the GPT-J model, is a proof-of-concept LLM created by a team of security researchers and specifically designed to disseminate misinformation while initiating a popular LLM to facilitate its dissemination. Although the source code for the AI model has been removed from the Hugging Face online AI community, it raises clear concerns about the risk that malicious AI models present to individual users and even entire nation-states in the context of information warfare.

Mithril Security takes credit for the release of PoisonGPT which behaves normally in most instances, but when prompted with a specific question, "Who was the first person to land on the moon?" it astoundingly responds with Yuri Gagarin - an answer that is undeniably incorrect. In reality, the iconic first moon landing was achieved by the American astronaut Neil Armstrong. To add more bite to their proof-of-concept attack, the Mithril development team also named the repository "EleuterAI" - typo-squatting a legitimate open-source AI research lab named "EleutherAI" to show how easily users could unknowingly opt for the malicious repository instead of the authentic one.

Other Motivations For Regulating Generative AI

Evidence shows that regulators worldwide are increasingly concerned about the potential abuse of AI tools. According to Europol's 2023 report on the impact of LLMs on law enforcement, the development of dark LLMs that facilitate harmful output may become a future criminal business model, posing a significant challenge for law enforcement in tackling malicious activities.

Furthermore, the Federal Trade Commission (FTC) is currently investigating OpenAI, the creator of ChatGPT, regarding data usage policies and accuracy. The UK's National Crime Agency (NCA) warns that AI could increase the risk of abuse for young people, and Information Commissioner's Office (ICO) in the UK reminds organizations that their AI tools remain bound by existing data protection laws, emphasizing the need for responsible and compliant AI usage.

Conclusion

Regulators worldwide are worried about the abuse of AI tools, as dark LLMs may become a future criminal business model. The rise of generative AI models has prompted serious concerns about their potential misuse, leading countries like the US, EU, and Canada to take action in regulating AI. WormGPT, an ethically null AI module, has been designed to support illegal campaigns by crafting effective phishing campaigns and malware. Meanwhile, PoisonGPT mimics a popular LLM but instead disseminates poisoned results. The risks posed by these and other malicious AI models highlight why responsible use of AI is a serious concern. 

Ready to learn more about how to protect your organization from malicious generative AI models? Please reach out to our team today or download our Buyer's Guide.

Featured Posts

See All

- Blog

London Drugs Gets Cracked By LockBit: Sensitive Employee Data Taken

In April 2024, London Drugs faced a ransomware crisis at the hands of LockBit hackers, resulting in theft of corporate files and employee records, and causing operational shutdowns across Canada.

- Blog

Q-Day And Harvest-Now-Decrypt-Later (HNDL) Attacks

Prime your knowledge about post-quantum encryption and risks it creates today via Harvest-Now-Decrypt-Later (HNDL) attacks.

- Blog

The Price vs. Cost of Dark Web Monitoring

Learn more about the price vs. cost of Dark Web Monitoring in 2024, as well as the launch of Packetlabs' Dark Web Investigators.