Security researchers put a lot of effort into malware analysis and attack attribution. Attribution analysis can determine whether malware is based on previously known strains, represents a novel strain, or demonstrates a unique technique for evading defenses, elevating privileges, or gaining initial access to a target. In a recent development, malware analysis has confirmed that malicious code used in real attacks was generated by a large language model (LLM).
Studies on the application of AI to cybersecurity have surged in recent years, covering both defensive and offensive use-cases. Research has explored the capability of AI to generate exploit code, identify undiscovered vulnerabilities in software, and conduct automated Application Security Testing (AST). Various studies have already shown that AI, and LLM generative AI more specifically, excels at these tasks and makes them accessible to low-skilled malicious actors.
However, until now, it has remained a matter of speculation whether malicious actors are using LLMs to increase the potency of their attacks, and to what degree. Most commonly, crafting more deceptive and enticing phishing messages and producing deepfake media for misinformation campaigns are the tactics cited in the AI-enabled attacker's playbook. Now, security research teams have discovered AI-generated malware used in real cyber attacks, and OpenAI has released a report discussing its findings on the malicious use of its platform by cyber threat actors.
In this article, we review recent disclosures from HP Wolf Security and OpenAI confirming that LLM technologies are being actively used to generate malicious code and to support cyber attacks in other ways, such as reconnaissance and social engineering.
HP Wolf Security recently unveiled its latest Threat Insights Report, which highlights a concerning trend: the use of generative AI (GenAI) to create malware code. This marks the first documented evidence from a reputable source of real threat actors employing AI to produce malicious code and deploy it in the wild.
The report reveals a campaign that specifically targeted French-speaking users, leveraging AI to generate malicious VBScript and JavaScript code capable of loading a second-stage payload: the widely available AsyncRAT infostealer. The structured format of the scripts, detailed comments explaining each line, and the choice of function and variable names in the attacker's native language all indicate the use of AI tools in its development.
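To illustrate what those fingerprints look like, here is a harmless, hypothetical JavaScript fragment written in the style the researchers describe, with a comment above every line and descriptive French-language variable names. It is a sketch of the stylistic pattern only, not code from the actual campaign:

    // Hypothetical illustration of LLM-style output; this fragment is benign.
    // Define the address of the remote file (adresse du fichier)
    var adresseDuFichier = "https://example.invalid/charge.txt";
    // Define the destination folder (dossier de destination)
    var dossierDeDestination = "C:\\Users\\Public\\";
    // Build the complete path by joining the folder and file name
    var cheminComplet = dossierDeDestination + "charge.txt";
    // Print the values instead of downloading anything
    console.log(adresseDuFichier, cheminComplet);

Human-written loaders rarely carry this density of explanatory comments, which is why analysts treat the pattern as a strong signal of AI-assisted development.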
The report outlines how generative AI is lowering the bar for entry into cybercrime, enabling actors who lack technical skill to craft sophisticated attacks. This development could significantly accelerate the frequency and impact of cyber threats.
Generative AI in Malware Creation: The malware code discovered shows clear evidence of AI-assisted development, containing comments and function names characteristic of LLM-generated output. This finding confirms that generative AI can help attackers augment code generation, lowering entry barriers for cybercriminals.
Increased Use of Malvertising Campaigns: HP’s researchers identified a large-scale ChromeLoader campaign using malvertising to lure victims to download rogue-but-functional PDF tools. These trojanized applications, embedded in MSI files, utilize valid code-signing certificates to bypass security mechanisms. Once installed, the malware can take over the victim’s browsing session and redirect searches to attacker-controlled sites.
Malware Concealed in SVG Images: Some cybercriminals have begun embedding malicious code in SVG vector images instead of the more commonly used HTML files. Because browsers execute the script elements of an SVG file when it is opened directly, these images present a new vector for smuggling malware and evading detection, as the harmless proof of concept below illustrates.
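The following minimal SVG document demonstrates the mechanism. Saved locally (for example as image.svg) and opened directly in a browser, the embedded JavaScript runs; note that it does not run when the image is merely referenced from an img tag, which is why attackers deliver the file itself:

    <svg xmlns="http://www.w3.org/2000/svg" width="120" height="120">
      <!-- The visible image: a plain rectangle -->
      <rect width="120" height="120" fill="steelblue"/>
      <!-- Browsers execute this script element when the SVG is opened as a document -->
      <script>alert("JavaScript executed from inside an SVG image");</script>
    </svg>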
OpenAI has confirmed that generative AI tools like ChatGPT are being used to support offensive cyber activities, such as developing malware, spreading misinformation, evading detection, and launching phishing campaigns. The report highlights cases involving Chinese and Iranian groups, including 'SweetSpecter,' 'CyberAv3ngers,' and 'Storm-0817.' 'SweetSpecter' used ChatGPT accounts to perform reconnaissance on software vulnerabilities, while 'CyberAv3ngers' utilized it to obtain default credentials for industrial systems, create scripts, and obfuscate code. The groups even attempted to use ChatGPT-generated social engineering messages against the staff of its own creator, OpenAI.
In response, OpenAI has banned accounts linked to these actors and shared indicators of compromise (IoCs), such as IP addresses, with cybersecurity partners. While ChatGPT may not yet enable the development of novel malware techniques, it makes existing ones more accessible and efficient for low-skilled actors.
Here are some ways that offensive use of generative AI will change the threat landscape:
Expect a Diminishing Time to Exploit (TTE): With AI streamlining attack development, attackers can rapidly create new exploits, reducing the time from vulnerability discovery to active exploitation.
Diminished efficacy of signature-based detection: LLM-generated malware can be regenerated or mutated with little effort, hindering traditional signature-based detection, which depends on matching specific segments of code (see the sketch after this list).
Expect better use of the English language in phishing messages: Generative AI tools enable attackers to craft convincing, error-free phishing messages that are harder for recipients to detect as fraudulent.
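On the second point, a minimal Node.js sketch shows why trivially regenerated code defeats signatures: two functionally identical fragments that differ only in identifier names produce entirely different hashes, so a signature keyed to one misses the other. The variant strings here are hypothetical stand-ins, not real malware:

    // Two functionally identical code fragments, differing only in names,
    // such as an LLM could produce on successive generations.
    const crypto = require("crypto");
    const variantA = 'function fetchFile(url) { return "GET " + url; }';
    const variantB = 'function recupererFichier(adresse) { return "GET " + adresse; }';
    // A hash- or byte-pattern-based signature derived from variantA
    // will not match variantB, despite identical behaviour.
    for (const source of [variantA, variantB]) {
      console.log(crypto.createHash("sha256").update(source).digest("hex"));
    }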
The appearance of AI-generated malware marks a pivotal shift in cyber threats, as demonstrated by recent discoveries from HP Wolf Security and OpenAI. HP Wolf's research highlights that generative AI tools have allowed cybercriminals to create sophisticated VBScript and JavaScript loader scripts, and its analysis reveals how AI-written code is lowering technical entry barriers for attackers.
OpenAI confirmed cases of ChatGPT misuse by groups like 'SweetSpecter' and 'CyberAv3ngers,' which leveraged AI for vulnerability reconnaissance, scripting, and social engineering. In response, OpenAI has banned the associated accounts and shared IoCs with cybersecurity partners. This development emphasizes the need for robust cybersecurity measures as AI-assisted malware creation continues to evolve, streamlining complex attacks and posing an enhanced risk for organizations globally.