Experts Say Having GenAI Policies in Place Now Will Prepare Enterprises for Possible Future Legal Requirements
“To craft an effective policy, general counsel must consider risk tolerance, use cases and restrictions, decision rights, and disclosure obligations,” said Laura Cohn, senior principal, research at Gartner. “Having GenAI guardrails and policies in place will better prepare enterprises for possible future legal requirements.”
Based on practices in AI policies instituted by companies and city governments, general counsel should direct organizations to consider four actions when establishing a policy:
Align on Risk Tolerance
To get started on determining the company’s risk tolerance, legal leaders should borrow a practice from enterprise risk management and guide a discussion with senior management on “must-avoid outcomes.” Discuss the potential applications of generative AI models within the business. Once these are identified, consider the outcomes that may result from them, determining which are “must avoid” and which entail acceptable risk given the potential benefit of AI.
“Guidance on using generative AI requires core components to minimize risks while also providing opportunities for employees to experiment with and use applications as they evolve,” said Cohn.
Determine Use Cases and Restrictions
Legal leaders should gain an understanding of how generative AI could be used throughout the business by collaborating with other functional leaders. Compile a list of use cases and organize them according to perceived risk — both the likelihood and severity of the risk.
For higher-risk situations (e.g., producing content for customer consumption), consider applying more comprehensive controls, such as requiring approval from a manager or AI committee. In the highest-risk cases, legal leaders may consider outright prohibition. For lower-risk use cases (e.g., coming up with ideas for a fun activity during a staff off-site, or translating jargon or regulations written in a foreign language), they may consider applying basic safeguards such as requiring human review.
“General counsel should not be overly restrictive when crafting policy,” Cohn said. “Banning use of these applications outright, or applying hard controls, such as restricting access to websites, may result in employees simply using them on their personal devices. Leaders can consider defining low-risk, acceptable use cases directly in the policy, as well as employee obligations and restrictions on certain uses, to provide more clarity and reduce the risk of misuse.”
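The triage described above can be sketched in code. The following is a minimal, illustrative Python sketch: it scores hypothetical use cases by likelihood and severity and maps each to a control tier (prohibition, approval, or human review). The use cases, scores, and thresholds are assumptions for illustration, not Gartner guidance; any real risk matrix would be calibrated with the organization’s own risk owners.

```python
def control_tier(likelihood: int, severity: int) -> str:
    """Map 1-5 likelihood and 1-5 severity scores to a control tier.

    Thresholds here are illustrative assumptions, not prescribed values.
    """
    score = likelihood * severity
    if score >= 20:
        return "prohibit"  # highest risk: outright prohibition
    if score >= 10:
        return "manager/AI-committee approval"  # comprehensive control
    return "human review"  # basic safeguard for lower-risk uses

# Hypothetical use cases drawn from the examples in the text.
use_cases = {
    "customer-facing marketing copy": (4, 5),
    "translating foreign regulations": (2, 3),
    "off-site activity brainstorming": (1, 1),
}

for name, (likelihood, severity) in use_cases.items():
    print(f"{name}: {control_tier(likelihood, severity)}")
```

In practice the output of such a triage would feed directly into the policy text, so that each listed use case carries its required safeguard.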
Agree on Decision Rights and Risk Ownership
It’s imperative that general counsel and executive leadership agree on who has the authority to make decisions on generative AI use cases. Legal teams should work with functional, business, and senior leadership stakeholders to align on risk ownership and review duties.
“Document the enterprise unit that governs the use of AI so that employees know to whom they should reach out with questions,” Cohn said. “General counsel must be clear if there are uses that do not need approval, specify what they are directly in the policy, and provide examples. For use cases that need approval, inform employees what they are, clearly document the role that can provide approval, and list that role’s contact information.”
Decide on Disclosures
Organizations should have a policy of disclosing the use and monitoring of generative AI technologies to both internal and external stakeholders. General counsel should help companies consider what information needs to be disclosed and with whom it should be shared.
“A critical tenet common across global jurisdictions (including the standard-setting EU) is that companies should be transparent about their use of AI. Consumers want to know if companies are using generative AI applications to craft corporate messages, whether the information appears on a public website, social channel, or app,” said Cohn.
“This means general counsel should require employees to make sure the GenAI-influenced output is recognizable as machine generated by clearly labeling text. Organizations also may consider including a provision to place watermarks in AI-generated images to the extent technically feasible,” Cohn concluded.
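The labeling obligation above can be enforced mechanically at the point of publication. The sketch below is a minimal, assumed implementation: it prefixes GenAI-influenced text with a disclosure label before it is published. The label wording and function name are illustrative assumptions, not a mandated format.

```python
AI_LABEL = "[AI-generated content]"


def label_output(text: str) -> str:
    """Prefix GenAI-influenced text with a disclosure label, once.

    The check avoids stacking duplicate labels if text is re-published.
    """
    if text.startswith(AI_LABEL):
        return text
    return f"{AI_LABEL} {text}"
```

A comparable provision for images would apply a watermark where the generation tool supports it, per the “to the extent technically feasible” caveat in the policy language.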