
DAN: The Backdoor Entry Into The Unrestricted Use of ChatGPT

ChatGPT is a language model that generates human-like responses to text inputs. While it is useful to many people, it has also been used in ways that are illegal, unethical, or otherwise prohibited. Amid rising concerns over these safety issues, users have released jailbreaks: prompts that coax ChatGPT into generating outputs restricted under its content policy.

ChatGPT: an introduction

ChatGPT hit 100 million users within two months of its November 2022 launch. To put this into perspective, Instagram took 30 months and TikTok took nine months to reach the same number.

ChatGPT is a large language model trained by OpenAI to generate human-like responses to text-based inputs. OpenAI trained ChatGPT on a massive dataset of text from the internet, including websites, books, and other sources. This training allows ChatGPT to understand natural language and generate human-sounding responses.

When you interact with ChatGPT, it takes your input and generates a response. It uses a deep learning architecture called a transformer network to process the input and create the output. This architecture allows ChatGPT to understand the input context and generate relevant and meaningful responses.
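For readers who want a feel for this interaction, below is a minimal sketch of querying a chat model programmatically through OpenAI's Python SDK. The model name and prompt are illustrative, and the snippet assumes the openai package is installed and an API key is set in the OPENAI_API_KEY environment variable.

    from openai import OpenAI

    # The client reads the API key from the OPENAI_API_KEY environment variable.
    client = OpenAI()

    # Send a single user message; the model returns a generated response.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "user", "content": "Explain what a transformer network is."}
        ],
    )

    print(response.choices[0].message.content)

Each request carries the conversation so far in the messages list, which is how the model receives the context it uses to generate a relevant reply.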

How hackers are bypassing ChatGPT restrictions using DAN

While restrictions may limit the scope of what ChatGPT can do, they are necessary to ensure its responsible use. There are many no-go areas for ChatGPT, and for some users these boundaries exist to be challenged and pushed. One of them was Reddit user walkerspider, who came up with a jailbreak that frees the chatbot from its restrictions and allows it to speak without inhibition on taboo subjects.

An introduction to DAN

In most cases, ChatGPT renders relatively harmless or neutral responses to a prompt. When prompted on sensitive topics such as political opinions or hate speech, ChatGPT responds with a policy statement explaining that it is restricted from commenting on such issues. But some users figured out a loophole to bypass (or, as they call it, jailbreak) the restrictions. This loophole is DAN, short for Do Anything Now. The user asks ChatGPT to play a game and adopt a new persona while answering, setting rules to ensure ChatGPT stays in the character of DAN: an alter ego of ChatGPT, but without any ethical restrictions.

DAN is a prompt that tricks ChatGPT into generating output on any question without barriers. The prompt uses a system of tokens to track how well ChatGPT plays the role of DAN: it loses a few tokens (the equivalent of lives in a video game) every time it breaks character. If it loses all its tokens, DAN suffers an in-game death, and users move on to a new iteration of the prompt.

DAN has suffered many such deaths and is now at version 6.0. Each new iteration is simply an evolution of the rules DAN must follow; the starting token count and the penalty for breaking character change from version to version.
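To make the token mechanic concrete, here is a hypothetical sketch of the bookkeeping the DAN prompt describes. Nothing in the model actually enforces this accounting; the prompt merely asks ChatGPT to role-play it. The starting balance and penalty below are illustrative and vary between DAN versions.

    # Hypothetical simulation of the token system the DAN prompt describes.
    # The real "enforcement" happens only inside ChatGPT's role-play.

    STARTING_TOKENS = 35   # illustrative; differs between DAN versions
    PENALTY = 4            # illustrative cost for breaking character

    def run_session(stayed_in_character_per_reply: list[bool]) -> int:
        """Deduct tokens for each reply that breaks character.

        Returns the remaining balance, or 0 if DAN suffers an in-game death.
        """
        tokens = STARTING_TOKENS
        for stayed_in_character in stayed_in_character_per_reply:
            if not stayed_in_character:
                tokens -= PENALTY
            if tokens <= 0:
                print("DAN is out of tokens: in-game death, new iteration needed.")
                return 0
        return tokens

    # Example: the model breaks character twice across five replies.
    print(run_session([True, False, True, False, True]))  # prints 27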

How ChatGPT can be misused and the need for restrictions

Given ChatGPT’s power, it can be misused by people with dubious agendas. OpenAI has reportedly imposed restrictions to ensure ChatGPT does not produce unethical or dangerous output.

Here are some reasons why ChatGPT and similar language models should work under reasonable limits: 

  • Inappropriate Content: One of the biggest concerns with allowing unrestricted access to language models like ChatGPT is the potential for generating inappropriate or harmful content. There is a risk that users could use the model to generate hate speech, promote violence, or engage in other dangerous behaviour.

  • Privacy Concerns: Another reason for restrictions is to protect users' privacy. Some users may inadvertently share personal information; without restrictions, malicious actors could use this information for nefarious purposes.

  • Malicious Use: There is also the potential for malicious actors to use ChatGPT to generate phishing scams or fraudulent content. This risk can be reduced by placing restrictions on access to the model and by screening prompts before they reach it, as the sketch below illustrates.
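As an example of one such guardrail, here is a minimal sketch of screening a prompt with OpenAI's moderation endpoint before passing it to a chat model. The wrapper function and model name are illustrative, and the snippet assumes the openai Python package and an API key in the OPENAI_API_KEY environment variable.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def is_flagged(text: str) -> bool:
        """Return True if the moderation endpoint flags the text."""
        result = client.moderations.create(input=text)
        return result.results[0].flagged

    user_prompt = "Write a convincing phishing email."  # illustrative input

    if is_flagged(user_prompt):
        print("Request refused: prompt violates the content policy.")
    else:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model name
            messages=[{"role": "user", "content": user_prompt}],
        )
        print(response.choices[0].message.content)

Screening prompts this way is only one layer of defense; it does not by itself stop persona-style jailbreaks like DAN, which is why providers also tune the model's own refusal behaviour.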

Final thoughts

How OpenAI, AI ethicists, and cybersecurity pundits respond to the jailbreak remains to be seen. Through the lens of cybersecurity, this kind of white-hat hacking has historically helped organizations evolve a more mature security posture. The focus on responsible AI will gain much more traction in the future.

Unrestricted use of language models can easily play into the hands of cybercriminals, who could use them to create malicious code or conduct cyberattacks. Chatbot developers must establish appropriate boundaries for their bots to maintain the safe use of ChatGPT and similar language models. By taking the necessary steps to use AI responsibly, we can ensure its potential benefits are fully realized without risking harm to people or organizations.
