
Legal and Compliance Leaders Should Prepare Their Organizations for Emerging Regulation on AI

Gartner Identifies Four Critical Areas for Legal Leaders to Address Around AI Regulation

With various lawmakers across the globe proposing regulations and guidance on large language model (LLM) tools, such as ChatGPT and Google Bard, Gartner, Inc. has identified four critical areas for general counsel (GC) and legal leaders to address.

“Legal leaders can examine where the various proposals overlap to help senior leaders and the board prepare for regulatory shifts as they develop their corporate AI strategy,” said Laura Cohn, senior principal, research at Gartner. “While laws in many jurisdictions may not come into effect until 2025, legal leaders can get started while they wait for finalized regulation to take shape.”

Gartner experts have identified four actions for legal leaders to create AI oversight that will enable their organizations to move forward while waiting on final guidance:

Embed Transparency in AI Use
“Transparency about AI use is emerging as a critical tenet of proposed legislation worldwide,” said Cohn. “Legal leaders need to think about how their organizations will make it clear to any humans when they are interacting with AI.”

For example, where AI is used in marketing content or in the hiring process, legal leaders can help by updating the privacy notices and terms and conditions on the company’s websites to reflect that use. Better still, the organization could develop a separate section on its online “Trust Center,” or post a point-in-time notice when collecting data that specifically explains how the organization uses AI. Legal leaders could also consider updating their supplier code of conduct to mandate notification if a vendor plans to use AI.

Ensure Risk Management Is Continuous
“GC and legal leaders should participate in a cross-functional effort to put in place risk management controls that span the lifecycle of any high-risk AI tool,” said Cohn. “One approach to this may be an algorithmic impact assessment (AIA) that documents decision making, demonstrates due diligence, and will reduce present and future regulatory risk and other liability.”

Besides legal, the GC should involve information security, data management, data science, privacy, compliance, and the relevant business units to get a fuller picture of risk. Because legal leaders typically do not own the business processes they are embedding controls into, consulting those business units is vital.

Build Governance That Includes Human Oversight and Accountability
“One risk that is very clear in using LLM tools is that they can get it very wrong while sounding superficially plausible,” said Cohn. “That’s why regulators are demanding human oversight which should provide internal checks on the output of AI tools.”

Companies may want to designate an AI point person to help technical teams design and implement human controls. Depending on which department hosts the AI initiative, this person could be a team member with deep functional knowledge, a staffer from the security or privacy team, or, if there are integrations with enterprise search, the digital workplace lead.

The GC could also establish a digital ethics advisory board of legal, operations, IT, marketing and outside experts to help project teams manage ethical issues, and then make sure the board of directors is aware of any findings.

Guard Against Data Privacy Risks
“It’s clear that regulators want to protect the data privacy of individuals when it comes to AI use,” said Cohn. “It will be key for legal leaders to stay on top of any newly prohibited practices, such as biometric monitoring in public spaces.”

Legal and compliance leaders should manage privacy risk by applying privacy-by-design principles to AI initiatives. For example, require privacy impact assessments early in the project or assign privacy team members at the start to assess privacy risks.

With public versions of LLM tools, organizations should alert the workforce that any information they enter may become a part of the training dataset. That means sensitive or proprietary information used in prompts could find its way into responses for users outside the business. Therefore, it’s critical to establish guidelines, inform staff of the risks involved, and provide direction on how to safely deploy such tools.
