Large Language Model (LLM) applications have seen wide adoption in 2024 as organizations seek the productivity gains that LLMs, custom agents, and APIs have to offer. LLMs are being explored through incentivized corporate R&D programs and integrated into day-to-day operations at many levels.
However, as LLM systems are woven into productivity workflows, there is no shortage of risk. We have raised several of these concerns on our blog before, including the potential for poisoned GPT models and the ways adversaries could weaponize LLMs for automated vulnerability discovery, exploit development, OSINT collection, and even social engineering via "deepfakes" and broad, politically motivated social media campaigns.
In many ways, security concerns surrounding LLM application development and integration are identical to other DevOps security challenges that software vendors face and supply chain security risks that downstream end-users face. For example, software vulnerabilities in various components of the software supply chain can allow hackers to gain unauthorized access to an organization's infrastructure. Such is the case with "Llama Drama" (CVE-2024-34359), a recent vulnerability discovered in the llama_cpp_python package used to run Llama models.
In this post, we will take a look at Llama Drama, a new LLM supply chain vulnerability affecting tooling for the very popular open-source Llama models. We will then briefly review the traditional security concerns that DevOps teams face (addressed through DevSecOps, short for "Development, Security, and Operations") and finally examine some unique challenges facing LLM and other AI development teams that fall outside the traditional scope of DevSecOps.
LLaMA (Large Language Model Meta AI) is a family of large language models developed by Meta AI, designed for various natural language processing (NLP) tasks, such as text generation, translation, and summarization. Similar to OpenAI's ChatGPT, these models are highly advanced, capable of understanding and generating human-like text based on vast amounts of training data.
LlamaIndex, on the other hand, is not built by Meta AI, but is a data framework specifically designed to facilitate the integration of large language models (LLMs) such as LLaMA into custom applications with private or domain-specific data. It enables developers to connect diverse data sources, such as APIs, databases, and document files like PDFs, directly to LLMs in order to build highly customized LLM agents.
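As a concrete illustration of that pattern, here is a minimal sketch based on LlamaIndex's documented quick-start usage. It is illustrative rather than production code: module paths differ between LlamaIndex versions, the "./data" folder is a placeholder, and an LLM backend (OpenAI by default, via an API key) must be configured for the query step to run.

```python
# Minimal LlamaIndex quick-start sketch (assumptions: LlamaIndex >= 0.10,
# a "./data" folder of documents, and a configured LLM backend such as
# OpenAI via the OPENAI_API_KEY environment variable).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load private or domain-specific documents from a local folder.
documents = SimpleDirectoryReader("./data").load_data()

# Index them so an LLM can answer questions grounded in that data.
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

print(query_engine.query("Summarize the key points in these documents."))
```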
The critical vulnerability CVE-2024-34359, dubbed "Llama Drama," was discovered by security researcher retr0reg in the llama_cpp_python Python package.
Rated CVSS 9.6 (Critical), the flaw enables attackers to execute arbitrary code through misuse of the Jinja2 template engine. Defenders can expect that, similar to the Log4j vulnerability, exploitable versions of llama_cpp_python may be integrated into many existing packages in their software supply chain. A patch has been issued in version 0.2.72 to address the issue, and software developers should ensure they are not using a vulnerable version of llama_cpp_python (a quick version check is sketched below). As of May 2024, 6,000 AI models on Hugging Face that use both llama_cpp_python and Jinja2 were susceptible to this vulnerability.
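As a quick illustration (not an official remediation tool), a build step could refuse to proceed when a vulnerable release is installed. The use of the `packaging` library and the two distribution-name variants below are assumptions about a typical environment.

```python
# Minimal sketch: fail the build if a llama_cpp_python release vulnerable
# to CVE-2024-34359 is installed. Assumptions: the `packaging` library is
# available, and the distribution may be registered under either name below.
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

PATCHED = Version("0.2.72")

installed = None
for dist_name in ("llama_cpp_python", "llama-cpp-python"):
    try:
        installed = Version(version(dist_name))
        break
    except PackageNotFoundError:
        continue

if installed is None:
    print("llama_cpp_python is not installed in this environment.")
elif installed < PATCHED:
    raise SystemExit(
        f"llama_cpp_python {installed} is vulnerable to CVE-2024-34359; "
        f"upgrade to {PATCHED} or later."
    )
else:
    print(f"llama_cpp_python {installed} includes the CVE-2024-34359 fix.")
```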
Jinja2 is a widely-used open-source Python library for template rendering, primarily designed to generate HTML. Its capability to execute dynamic content makes it a powerful tool, but it can pose significant security risks if not properly configured to restrict unsafe operations.
llama_cpp_python is a Python binding for llama.cpp that combines the ease of use of Python with the performance of C++. However, its use of Jinja2 to process model metadata without the necessary security safeguards exposed it to template injection attacks.
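To illustrate the class of issue, the sketch below shows how an untrusted Jinja2 template string, such as chat-template metadata bundled with a model file, behaves under the default engine versus Jinja2's sandbox. The payload is a generic template-injection probe, not the actual CVE-2024-34359 exploit.

```python
# Illustrative only: untrusted template string rendered with the default
# Jinja2 engine versus a sandboxed environment.
from jinja2 import Template
from jinja2.exceptions import SecurityError
from jinja2.sandbox import ImmutableSandboxedEnvironment

# Attacker-controlled template reaching into Python internals.
malicious_template = "{{ ''.__class__.__mro__[1].__subclasses__() }}"

# Default engine: the expression evaluates and exposes interpreter internals,
# the first step in a classic server-side template injection chain.
print(Template(malicious_template).render()[:80], "...")

# Sandboxed engine: access to unsafe (underscore-prefixed) attributes is
# blocked and rendering raises a SecurityError instead.
env = ImmutableSandboxedEnvironment()
try:
    env.from_string(malicious_template).render()
except SecurityError as exc:
    print(f"Blocked by sandbox: {exc}")
```

The patch in llama_cpp_python 0.2.72 reportedly takes a similar approach, rendering chat templates inside a sandboxed Jinja2 environment rather than the default one.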
DevSecOps (also known as DevOps security) is a set of measures for protecting the software development process, ensuring that a vendor's software products are delivered to customers in a secure state while also protecting the vendor's own organization against supply chain attacks and other risks such as theft of proprietary data.
DevSecOps involves conducting risk/benefit analyses to understand which risks can be mitigated, deploying automation to reduce the potential for mistakes, bottlenecks, and downtime, and applying practices such as threat modeling, secure source control, incident management, continuous threat hunting, and security training for development teams. Finally, application security testing, including static and dynamic testing, helps software vendors mitigate common software weaknesses and reduces the likelihood of vulnerable code being pushed to customers. A minimal example of automating such checks is sketched below.
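The following is a hedged sketch of a CI security gate using two off-the-shelf open-source scanners: bandit for static analysis of Python code and pip-audit for known vulnerabilities in installed dependencies. The "src" path and fail-on-any-finding policy are placeholders to adapt to a real pipeline.

```python
# Simple CI gate: run bandit (static analysis) and pip-audit (dependency
# vulnerability checks) and fail the build if either reports issues.
import subprocess
import sys

def run(cmd: list[str]) -> int:
    """Run a scanner, streaming its output to the CI log, and return its exit code."""
    print(f"+ {' '.join(cmd)}")
    return subprocess.run(cmd).returncode

failures = 0
# Static analysis: flag medium-severity-and-above findings in application code.
failures += run(["bandit", "-r", "src", "-ll"]) != 0
# Dependency audit: would flag a vulnerable llama_cpp_python release once the
# advisory appears in the vulnerability databases that pip-audit queries.
failures += run(["pip-audit"]) != 0

sys.exit(1 if failures else 0)
```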
Here are some DevSecOps components unique to securing LLM and AI models during the software product lifecycle:
Addressing Prompt Injections: Manage access and execution rights to reduce the impact of prompt injections, and increase robustness through adversarial training or Reinforcement Learning from Human Feedback (RLHF). Prompt injection can lead to the leaking of sensitive data, the spread of misinformation, and attempts to compromise the underlying system.
Sandbox Permissions of LLM-based Applications: LLM systems that accept user-supplied file uploads and manipulate those files are at particularly high risk due to the interaction between untrusted input and LLM-generated code executing on the server. Therefore, be sure to restrict the access and execution rights of LLM-based applications and establish trust boundaries.
Ensure Training Data Security: AI systems can be compromised by unreliable training data, leading to malfunctions that attackers can exploit. Therefore, it's important to ensure proper selection, acquisition, and pre-processing of training data. Be sure to manage data storage securely according to data security best practices and in compliance with existing regulations with respect to data sensitivity. Collect data from trustworthy sources, using cryptographic measures to ensure integrity and origin.
Protect Against Model Bias: Select data according to application needs, ensuring diversity and assessing for potential biases. Use anonymization or filtering for sensitive data, applying differential privacy methods and unlearning techniques as needed.
Validation, Sanitization, and Formatting of Inputs and Outputs: Detect and filter manipulative inputs, using tools and knowledge bases to maintain input integrity. Implement filters to clean inputs and outputs, allowing users to verify and cross-reference outputs with other sources before use. Implement filter mechanisms and annotations to prevent harmful outputs, using automated checks against trusted sources (see the sketch following this list).
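The sketch below is a deliberately simple illustration of such input and output screening. The regular-expression patterns and the ALLOWED_SOURCES list are hypothetical placeholders, and pattern matching of this kind is only a first layer to be combined with the model-side mitigations described above.

```python
# Illustrative input/output screen; patterns and sources are hypothetical.
import re

INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (the|your) system prompt", re.IGNORECASE),
]
ALLOWED_SOURCES = ("https://internal-kb.example.com/",)  # hypothetical allow-list

def screen_input(prompt: str) -> str:
    """Reject prompts that match known prompt-injection phrasings."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt rejected: possible injection attempt.")
    return prompt

def screen_output(text: str, cited_urls: list[str]) -> str:
    """Annotate outputs that cite sources outside the approved knowledge base."""
    unverified = [u for u in cited_urls if not u.startswith(ALLOWED_SOURCES)]
    if unverified:
        return f"{text}\n[Warning: unverified sources cited: {', '.join(unverified)}]"
    return text
```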
The "Llama Drama" CVE-2024-34359 was caused by software weaknesses in the llama_cpp_python package, and has the potential for attackers to achieve remote code execution (RCE) through improper handling of the Jinja2 template engine. This incident underscores the need for organizations to protect themselves against all forms of supply chain vulnerabilities, not just LLM-based ones.
On the vendor side, effective DevSecOps for LLMs should include not only the traditional controls but also adversarial training and RLHF to combat prompt injections, strict sandbox permissions to mitigate risks from untrusted inputs, and secure handling of training data to prevent exploitation. Additionally, addressing model bias and implementing rigorous validation and sanitization of inputs and outputs are critical steps in maintaining the integrity and reliability of AI applications.