
Guidelines For Corporate Governance of Generative AI Adoption

Artificial Intelligence (AI) is rapidly transforming the business landscape, offering unprecedented opportunities for innovation and efficiency. However, as organizations increasingly integrate AI into their strategies, it becomes crucial to develop robust Governance, Risk Management, and Compliance (GRC) practices. AI introduces both opportunities and risks: its benefits may entice employees to use it without proper limits, introducing an unacceptable level of risk.

Developing policies for incorporating AI ensures that its deployment aligns with the organization’s goals and complies with ethical standards and risk appetite. This article delves into the importance of GRC for managing risk when adopting AI, and essential considerations for incorporating AI into a business strategy.

What is GRC?

GRC stands for Governance, Risk Management, and Compliance. It is a comprehensive approach to ensuring that an organization's operations align with its objectives, that risks are managed effectively, and that relevant cybersecurity laws and regulations are followed.

  • Governance: Governance refers to the overall framework of rules, practices, and processes used to direct and manage an organization. Governance ensures that the organization’s strategic goals are achieved, decisions are made transparently, and stakeholders' interests are protected.

  • Risk Management: Risk Management refers to the process of identifying, assessing, and controlling threats to an organization's financial capital and earnings. Risk Management helps in mitigating risks related to financial uncertainties, legal liabilities, strategic management errors, accidents, and natural disasters. Managing a given risk means choosing to mitigate it with controls, or to avoid, transfer, share, or accept it.

  • Compliance: Compliance refers to an organization's need to adhere to applicable laws, regulations, and legal specifications. Compliance ensures that the organization operates within the legal frameworks and avoids legal penalties, fines, and reputational damage.

What Risks Can AI Introduce to an Organization?

As organizations increasingly integrate artificial intelligence into their operations, understanding the potential risks associated with AI is crucial. From generating biased outcomes to posing unique security challenges, AI systems can introduce complex issues that need careful management. Let’s explore some of the key risks that AI can pose to an organization.

  • Inaccurate or Biased Information: AI systems are not perfect. The term "hallucination" describes a large language model (LLM) confidently producing output that is plausible-sounding but nonsensical or factually incorrect. Bias is a separate concern: it can arise from flawed training data, or from LLM services that use specially crafted training data to produce output promoting a particular political or other agenda.

  • Security and Privacy: Managing data privacy is as crucial when planning AI adoption as it is for any other IT operation. When using hosted AI as a service, users share information with a third party, which could reveal context about a business's internal operations, strategic goals, and other sensitive information. While locally self-hosted AI solutions avoid sharing data with third parties, they can still be vulnerable to various cyber attacks.

  • Compliance Issues: Ensuring AI systems adhere to existing and emerging regulations is essential to avoid legal repercussions. Regulations such as GDPR and HIPAA govern how sensitive personal data must be handled and restrict how it can be shared. Voluntary frameworks such as SOC 2 also dictate how customer data must be handled to ensure its confidentiality.

Guidelines For Corporate AI Policy

Policies are important to ensure safe and ethical AI deployment within corporations. Here is a summary of the most important AI policy considerations for safe AI adoption:

1. Determine Appropriate Scopes For AI Policies

  • Scoping Who the Policy Applies To: The use of AI may impact a wide range of stakeholders. Therefore, an AI use policy should be tailored to appropriately cover internal employees, contractors, vendors, and other third-party partners.

  • Scoping Which Data The Policy Applies To: Policies should specify which types of data may be processed by AI and which may not. Personal data belonging to staff and customers, as well as proprietary company data, carry higher risk when shared. Handling, storage, and processing guidelines for each classification of data should ensure compliance with data protection laws and maintain data integrity (a minimal sketch of how such data scoping might be enforced is shown after this list).

  • Scoping Which AI Technologies the Policy Applies To: The policy should cover generative AI technologies capable of producing text, images, video, sound, code, or other content from input prompts. Examples include ChatGPT, GitHub Copilot, Midjourney, Stability AI, and ModelScope, as well as generative AI features embedded within other applications.
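
To make the data-scoping point above concrete, here is a minimal Python sketch of a pre-submission check that blocks prompts containing restricted data classes before they are sent to an external generative AI service. The regular expressions, data-class names, and the enforce_data_scope function are illustrative assumptions only; a production deployment would rely on a vetted data loss prevention or classification tool rather than ad-hoc patterns.

```python
import re

# Hypothetical, simplified patterns for data classes a policy might restrict.
# A real deployment would use a vetted DLP/classification tool, not ad-hoc regexes.
RESTRICTED_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "credential_marker": re.compile(r"(?i)\b(api[_-]?key|secret|password)\s*[:=]"),
}

def classify_prompt(prompt: str) -> list[str]:
    """Return the names of restricted data classes detected in the prompt."""
    return [name for name, pattern in RESTRICTED_PATTERNS.items() if pattern.search(prompt)]

def enforce_data_scope(prompt: str) -> str:
    """Block prompts containing restricted data before they reach an external AI service."""
    findings = classify_prompt(prompt)
    if findings:
        raise ValueError(f"Prompt blocked by AI data policy; detected: {', '.join(findings)}")
    return prompt

if __name__ == "__main__":
    try:
        enforce_data_scope("Summarize this contract. Contact jane.doe@example.com, api_key=abc123")
    except ValueError as err:
        print(err)
```

Such a filter would typically sit in a gateway or proxy between employees and approved AI services, logging blocked prompts so the policy team can review real usage patterns.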

2. The Assessment Process For AI Platforms

  • Corporate Use Limitations: Consider restricting the use of generative AI to corporate accounts only and banning the use of personal accounts. Obtain approval from key internal stakeholders such as product management, engineering, data privacy, legal, security, and risk management.

  • Privacy And Data Security Assessment: Check whether it is possible to opt out of having your data used to train the AI, assess privacy risks, and evaluate whether the AI system's terms and conditions support your compliance obligations.

  • Technology Assessment: Assess the source and quality of the training data set and the accuracy of the output. Evaluate the different methods of access, such as chat interfaces or APIs, and the types of accounts (personal, free, paid) available under terms acceptable to the company.

  • Business Assessment: Consider implementation costs, expected return on investment (ROI), and the development of mechanisms to track actual ROI. Implement a process for approving new use cases of the AI, possibly designating specific approvers so expertise accumulates quickly. Ensure available safety features are used and monitored, and adopt new ones as they become available. Ensure the adoption of a new generative AI platform follows the organization's standard procurement process, possibly enhanced with additional checks specific to AI.

3. Define Acceptable Use Policies For AI Generated Content

  • Deny AI Use By Default: The most conservative approach is to prohibit the use of generative AI across the organization unless specifically approved by exception. Decision makers can periodically review and update the list of approved use cases to accommodate new strategic endeavors. Specific consideration should be given to uses involving confidential, business-sensitive, or personal data; organizational IP; proprietary code; information about customers, suppliers, or employees; or system access credentials. Other top concerns include any use that could affect legal rights or obligations, be incorporated into the organization's IP, violate policies or laws, or demonstrate unethical intent.

  • Code Written by Generative AI: Ensure the AI-generated code achieves the intended functionality and adheres to modern memory-safe practices. AI-generated code should be subjected to strict review, including security and vulnerability analyses, before it is used (a minimal automated-scan sketch is shown after this list). When using third-party AI, investigate the vendor's policies to clarify IP rights and determine their compliance with data privacy regulations.

  • Usage of ChatGPT and Similar Tools for Personal Productivity: Permit use for increasing personal productivity, with stipulations to avoid prohibited uses. Adopting a hosted third-party AI service means adhering to its terms and conditions, so it is worth checking whether you can opt out of contributing training data. Internal policies should also instruct personnel to strictly verify the accuracy and appropriateness of any AI-generated content they use.

  • Training and Certification: Develop training or certification programs that update existing security courses to include risks associated with using generative AI.

  • Compliance and Consequences: Implement and enforce consequences for not adhering to AI usage policies, similar to other organizational guidelines.
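
As one way to operationalize the strict review of AI-generated code described above, the following Python sketch runs the open-source Bandit static analyzer over a directory of AI-generated code and fails the step if any findings are reported. The directory path, the scan_ai_generated_code helper, and the choice of Bandit are assumptions for illustration rather than a prescribed toolchain, and automated scanning complements rather than replaces human review.

```python
import json
import subprocess
import sys

def scan_ai_generated_code(path: str) -> int:
    """Run a static security scan (Bandit) over AI-generated Python code.

    Returns the number of findings. Assumes the `bandit` CLI is installed
    (e.g. via `pip install bandit`). This is only one layer of the review
    process described in the policy, not a replacement for human review.
    """
    result = subprocess.run(
        ["bandit", "-r", path, "-f", "json"],
        capture_output=True,
        text=True,
    )
    report = json.loads(result.stdout or "{}")
    findings = report.get("results", [])
    for issue in findings:
        print(f"{issue['filename']}:{issue['line_number']} "
              f"[{issue['issue_severity']}] {issue['issue_text']}")
    return len(findings)

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    count = scan_ai_generated_code(target)
    # Fail the pipeline step so a human reviewer must triage the findings.
    sys.exit(1 if count else 0)
```

In a CI pipeline, a script like this could run as a required check on branches containing AI-assisted changes, so any findings must be addressed before the code is merged.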

4. Plan To Audit AI Policies Periodically

Due to the rapid pace of innovation in generative AI and changes in the legal and regulatory landscape, it's crucial to regularly review and update policies governing its use. Establishing a structured schedule for auditing these policies ensures they remain relevant and compliant with current laws. This proactive approach helps prevent legal risks and enables effective utilization of the latest AI advancements.

Conclusion

Governance, Risk Management, and Compliance (GRC) is crucial for reducing organizational risk, including cybersecurity risk. Any organization seeking to benefit from AI technologies should actively develop policies governing its use. This article reviewed important considerations for developing a corporate AI policy framework to support the safe adoption and integration of generative AI into business operations.

Packetlabs Helps Enhance and Strengthen Your Security Posture

What sets us apart is our passionate team of highly trained, proactive ethical hackers. Our advanced capabilities go beyond industry standards. We ask questions to dig deeper and encourage knowledge sharing.
