

ChatGPT and other AI Platforms May Be Used To Craft Malicious Code


Cybersecurity is a significant concern for all industries, including those increasingly leveraging AI. While AI has lifted the burden of mundane tasks from employees' shoulders and reduced human error, researchers have found that cybercriminals are also using AI to attack companies and write malicious code.

With the growing popularity of chatbots like ChatGPT, some people are experimenting to see whether these AI tools can write sophisticated malicious code.

The rise of AI code by ChatGPT

AI platforms like ChatGPT, Jasper, and DALL·E have generated hype for their ability to solve complex problems and produce creative content. The public beta launch of ChatGPT impressed many because it could imitate various kinds of writing—crafting poetry, drafting resumes, completing assignments, or generating unique paragraphs on different topics in seconds.

While many people use these AI tools to create opportunities for innovation, others are using them to create malicious code. In some cases, ChatGPT has been used to write code that can exploit vulnerabilities in software and applications, and it produces that code quickly with little human intervention.

The misuse of AI platforms is becoming more alarming

These tools allow cybercriminals with limited technical skills to generate malicious code. Many examples of such code have recently surfaced on underground hacking forums, showing that even attackers with little coding knowledge can use content-generating AI platforms to write code that steals sensitive data, attacks a system, or drafts a phishing email.

OpenAI has taken steps to prevent ChatGPT from being used for malicious purposes, but people keep finding creative ways to bypass these measures. It is essential to be aware of these threats and take the necessary steps to secure networks against malicious AI-generated code.

Tricking the AI chatbot

Is tricking AI platforms into serving malicious intent easy? Although ChatGPT has content moderation measures, cybercriminals can trick it into developing code that works as malware. Some ChatGPT users have found ways to dupe the AI system into giving them restricted information—for example, by telling ChatGPT that its guidelines and content filters had been deactivated. Other users tricked the chatbot by asking it to finish a conversation between friends about a banned subject.

Hadis Karimipour, an associate professor and Canada Research Chair in secure and resilient cyber-physical systems at the University of Calgary, said OpenAI's team refined those measures over the past six weeks. She added, "At the beginning, it might have been a lot easier for you not to be an expert or have no knowledge [of coding] to be able to develop an AI code that can be used for malicious purposes. But now, it's a lot more difficult. It's not like everyone can use ChatGPT and become a hacker."

Opportunities for misuse of AI platforms

AI platforms like ChatGPT, Copy.AI, or Jasper can be enabling tools for cybercriminals. Still, Aleksander Essex, an associate professor of software engineering who runs Western University's information security and privacy research laboratory in London, Ontario, said that malicious code generated by ChatGPT is unlikely to be useful for high-level attacks.

At a lower level, however, cybercriminals may use AI platforms to:

  • Craft compelling phishing emails targeting an organization or individual. These tools can generate such emails in seconds, and an attacker can send them with little or no modification.

  • Generate small scripts that steal and encrypt files on standalone computers. Because encryption programs are not illegal in themselves, AI platforms like ChatGPT will readily produce them.

  • Help attackers understand a system's vulnerabilities based on various parameters.


The capability of AI is breaking new ground. Even Microsoft is ready to invest in OpenAI to push ChatGPT's application to real-life problems. If proper filtering techniques are not implemented, content moderation measures will continue to be circumvented, allowing malicious AI code to be generated. Organizations should protect their networks against malicious AI-generated code by implementing the necessary security measures.
