AI-Powered Hacking Tools: Lowering the Barrier to Cybercrime

How much do you know about the rise of AI hacking tools being sold on the Dark Web?

A new report from Palo Alto Networks’ Unit 42, first covered by cybersecurity outlet CyberScoop, reveals a rapidly growing underground market for AI-powered hacking tools: a development that security experts warn could fundamentally reshape the cybercrime ecosystem.

These tools, built on large language models (LLMs), are increasingly accessible, inexpensive, and purpose-built for malicious use, making advanced cyber capabilities available to a far broader range of threat actors than ever before.

According to Unit 42 researchers, Dark Web forums are now openly advertising custom, jailbroken, and open-source LLMs designed specifically to assist with cybercrime-related tasks. These models are being marketed to help users perform activities such as vulnerability scanning, malware development, data encryption, social engineering, and exploit scripting—capabilities that traditionally required significant technical expertise.

Andy Piazza, senior director at Unit 42, notes that this trend follows a familiar pattern. “We’ve seen this before,” he explains in a recent press release. “Legitimate security tools are developed to help defenders, and over time they’re repurposed by attackers. Now we’re seeing the same thing with AI.” The difference, however, is speed and scale: AI is accelerating how quickly those tools can be adapted, refined, and distributed.

How Are AI Hacking Tools Being Sold on the Dark Web?

Historically, cybercrime demanded a high degree of technical skill. Writing malware, identifying vulnerabilities, or crafting convincing phishing lures required deep knowledge of programming languages, operating systems, and network architectures. While crimeware-as-a-service has already lowered some barriers, AI-powered tools threaten to eliminate many remaining ones.

Unit 42’s report highlights two prominent examples illustrating this shift.

The first is a commercialized version of WormGPT, an AI model initially exposed in 2023 as a tool designed explicitly for generating malicious code and phishing content. While early versions were limited and sometimes crude, newer iterations are being sold via low-cost subscription models, complete with user support, documentation, and regular updates. These versions are positioned as turnkey solutions for aspiring cybercriminals.

The second example is KawaiiGPT, a free, open-source model distributed on underground forums. Unlike paid alternatives, KawaiiGPT is marketed as “easy to use” and beginner-friendly, lowering the technical threshold even further. According to researchers, this model allows users with minimal experience to generate exploit templates, reconnaissance scripts, and social engineering messages with alarming ease.

While the code produced by these tools is often imperfect (and in many cases, easily detected by modern security controls), the real concern lies elsewhere.
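
To see why crude output is so easy to flag, consider a minimal sketch of the kind of naive string-signature scan that sits beneath more sophisticated security controls. Boilerplate, unobfuscated code from low-effort LLM tooling tends to reuse the same telltale API names and strings; the indicator list and the samples directory below are hypothetical, chosen purely for illustration.

```python
# Naive string-signature scan: an illustration of the simplest detection
# layer that unobfuscated, AI-generated payloads often trip.
# The indicator list and the "samples" directory are hypothetical.
from pathlib import Path

SUSPICIOUS_STRINGS = [
    b"CreateRemoteThread",  # classic process-injection API
    b"VirtualAllocEx",      # remote memory allocation
    b"powershell -enc",     # encoded PowerShell one-liner
]

def scan_file(path: Path) -> list[str]:
    """Return the indicators found in a file's raw bytes."""
    data = path.read_bytes()
    return [s.decode() for s in SUSPICIOUS_STRINGS if s in data]

if __name__ == "__main__":
    for sample in Path("samples").glob("*"):
        if not sample.is_file():
            continue
        hits = scan_file(sample)
        if hits:
            print(f"{sample.name}: matched {hits}")
```

Real endpoint controls layer far more than string matching, but the point stands: output that hasn't been deliberately obfuscated rarely survives even this first hurdle.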

The Threat Behind AI Hacking Tools Being Sold on the Dark Web

Security researchers emphasize that the immediate danger is not that AI-generated malware is more advanced than human-written code. In fact, many outputs from these models are noisy, error-prone, or repetitive. Instead, the core risk is democratization.

By automating tasks like exploit generation, reconnaissance scripting, and phishing content creation, AI tools significantly reduce the skill required to participate in cybercrime. This could lead to a dramatic increase in volume-based attacks, carried out by less sophisticated actors who previously lacked the capability to operate effectively.

In practical terms, this means more phishing campaigns, more opportunistic exploitation of known vulnerabilities, and more ransomware affiliates entering the ecosystem. Even if individual attacks are less technically impressive, the cumulative impact could overwhelm organizations already struggling with alert fatigue and resource constraints.
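
One way defenders can blunt that volume problem is to collapse near-duplicate lures before they reach an analyst. The sketch below is a hypothetical illustration rather than a production detector: it groups lightly reworded messages using word-trigram shingles and Jaccard similarity, and the 0.5 threshold is illustrative only.

```python
# Hypothetical sketch: group near-duplicate phishing lures so a flood of
# lightly reworded AI-generated messages collapses into a handful of alerts.
def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Break a message into overlapping word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Set-overlap similarity in [0, 1]; 0.0 for two empty sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(messages: list[str], threshold: float = 0.5) -> list[list[str]]:
    """Greedy single-pass clustering: each message joins the first cluster
    whose representative is similar enough, else it starts its own."""
    clusters: list[tuple[set, list[str]]] = []
    for msg in messages:
        sig = shingles(msg)
        for rep, members in clusters:
            if jaccard(sig, rep) >= threshold:
                members.append(msg)
                break
        else:
            clusters.append((sig, [msg]))
    return [members for _, members in clusters]

if __name__ == "__main__":
    lures = [
        "Your invoice is overdue, please review the attached file today.",
        "Your invoice is overdue, please review the attached file immediately.",
        "Password reset required: click the link below within 24 hours.",
    ]
    for group in cluster(lures):
        print(len(group), "message(s):", group[0][:40])
```

Production pipelines would use scalable variants (MinHash, locality-sensitive hashing), but even this toy version shows how deduplication turns raw attack volume back into a manageable signal.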

Unit 42 researchers also note that these tools are evolving rapidly. Many models are being fine-tuned on real-world malware samples, exploit databases, and breach data. Over time, this could improve their effectiveness, particularly in areas like social engineering, where AI excels at language generation and contextual adaptation.

The Emergence of AI-Driven Hacking Tools (and Their Ramifications)

The emergence of AI-driven hacking tools mirrors earlier shifts in cybercrime, such as the rise of exploit kits, botnet builders, and ransomware-as-a-service platforms. Each wave reduced technical barriers and expanded the pool of attackers. AI, however, may represent the most dramatic acceleration yet.

Defenders face a dual challenge. On one hand, they must prepare for an increase in low-effort, high-volume attacks. On the other, they must anticipate how these tools will mature, potentially enabling more targeted and adaptive campaigns in the future.

At the same time, organizations are racing to adopt AI internally, often without fully understanding the security implications. The same models that help developers write code faster or assist SOC analysts with triage can, if misused or exposed, become powerful tools for attackers.

Conclusion

The Unit 42 report underscores the importance of defense-in-depth and realistic threat modeling. Organizations can no longer assume that attackers lack technical sophistication or resources. Instead, they must assume that AI-assisted capabilities are widely available and increasingly easy to use.

Key defensive priorities include rigorous patch management, strong identity and access controls, phishing-resistant MFA, and continuous security testing that simulates real-world adversary behavior. Security awareness training must also evolve, recognizing that AI-generated phishing messages may be more convincing, contextual, and difficult to spot.
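
As a small, concrete illustration of the patch-management piece, the sketch below compares a deployed-software inventory against minimum patched versions. The component names, versions, and data structures are assumptions for demonstration; a real program would pull inventory from a CMDB, thresholds from vendor advisories, and would need a more robust version parser.

```python
# Illustrative patch-audit check. Both data sets below are hypothetical;
# real programs would pull from an asset inventory and vendor advisories.
def parse(version: str) -> tuple[int, ...]:
    """Turn a dotted version like '2.4.62' into (2, 4, 62) for comparison.
    Simplified: assumes purely numeric, dot-separated versions."""
    return tuple(int(part) for part in version.split("."))

# component -> minimum version containing the relevant fixes (hypothetical)
MINIMUM_PATCHED = {"openssh": "9.8", "apache-httpd": "2.4.62"}

# component -> version currently deployed (hypothetical inventory export)
INVENTORY = {"openssh": "9.3", "apache-httpd": "2.4.62"}

for component, deployed in INVENTORY.items():
    required = MINIMUM_PATCHED.get(component)
    if required and parse(deployed) < parse(required):
        print(f"PATCH NEEDED: {component} {deployed} < {required}")
```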

Ultimately, the rise of AI-powered hacking tools is not a distant or theoretical risk: it's already unfolding in real time across underground markets. As Piazza noted, this is not a new phenomenon but a familiar one amplified by powerful technology. The organizations that adapt quickly, by understanding how attackers are using AI and testing their defenses accordingly, will be far better positioned to withstand the next phase of cybercrime.
