
Gartner Says Legal, Compliance and Privacy Leaders Rank Rapid Generative AI Adoption Their Top Issue in the Next Two Years

Gartner’s Legal, Compliance and Privacy Hot Spots Survey Identifies the Top Trends Concerning Professionals in Assurance Functions

Rapid generative AI (GenAI) adoption is the top-ranked issue for the next two years for legal, compliance and privacy leaders, according to a recent survey by Gartner, Inc. In a September 2023 survey of 179 legal, compliance and privacy leaders, 70% of respondents cited rapid GenAI adoption as a top concern.

“Increases in capability and usability have prompted rapid and widespread company adoption of GenAI,” said Stuart Strome, director, research in the Gartner Legal, Risk & Compliance Practice. “However, while AI regulation is still being developed, uncertainties and unforeseen risks abound. Businesses will have to contend with these challenges to ensure ethical and legal use of this powerful new technology.”

Gartner experts have identified four key areas that legal, compliance and privacy leaders need to address.

1) Limited Visibility into Key Risks
The ease of adoption, widespread applicability, and the ability of GenAI tools to perform a range of different business tasks mean that assurance teams will have limited visibility into new risks.

“New processes to detect and manage these risks will take time to roll out, leaving businesses exposed in the interim,” said Strome. “Legal leaders should adapt preexisting, well-established and widely distributed risk monitoring and management practices until new processes can be implemented. For example, they might modify data inventories and records of processing activities or privacy impact assessments to track GenAI usage.”

2) Lack of Employee Clarity on Acceptable Use
Employees will lack clarity on what constitutes acceptable use of the technology due to unfamiliarity with the rules governing it. Legal leaders should work to build consensus on “must avoid” outcomes and institute controls to minimize the likelihood of those outcomes while championing acceptable use cases in policies and guidance.

“Legal leaders need to institute a mandatory human review of GenAI output, prohibit entering enterprise IP or personal information into public tools such as ChatGPT, and develop policies that require clear indication of GenAI provenance on any public-facing output,” said Strome. “It’s important to include real-world examples of prohibited and acceptable GenAI usage in policy guidance and alert employees when policies are updated. Further, consider working with IT to develop embedded controls, such as popups in GenAI tools that require users to attest they are not using the tools for prohibited cases.”
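A minimal sketch of such an embedded control might be an attestation gate shown before a prompt is accepted. The prohibited-use categories and wording below are illustrative assumptions drawn from the policy examples above, not a prescribed implementation:

```python
# Illustrative categories a policy team might define; not an official list.
PROHIBITED_USES = [
    "enterprise IP",
    "personal information",
    "unreviewed public-facing output",
]

def attestation_prompt() -> str:
    """Text a GenAI tool could display before accepting a request."""
    bullets = "\n".join(f"  - {u}" for u in PROHIBITED_USES)
    return ("I attest that this request does not involve:\n"
            f"{bullets}\n[Agree / Cancel]")

def submit_prompt(prompt: str, attested: bool) -> str:
    """Block submission until the user has attested to acceptable use."""
    if not attested:
        raise PermissionError("Attestation required before using GenAI tools.")
    return prompt  # in a real tool, forwarded to the model here

print(attestation_prompt())
```

Such a control does not prevent misuse on its own; it documents that the employee was reminded of the policy at the moment of use, which supports the training and accountability goals discussed below.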

3) Need for AI Governance
As GenAI tools rapidly become more ubiquitous, poor accountability for negative outcomes could create unacceptable legal and privacy risks. Yet for most companies AI governance will not fit neatly into existing functional organizational structures, and the expertise needed may be scattered throughout the business or even not exist at all. Legal leaders need to clearly document roles and responsibilities for approvals, policy management, risk management and training for GenAI.

“Legal leaders should advocate for establishing a cross-functional steering committee, or for modifying the mandate of an existing committee, to establish principles and standards for use, and to align on roles and responsibilities related to AI governance,” said Strome.

4) New Opportunities to Scale Repetitive Legal Tasks
GenAI’s capacity to produce natural language output lends itself to several departmental uses for legal teams. This holds the potential to minimize the time lawyers spend on low-value work. While GenAI tools have the potential to assist with time-consuming, repetitive tasks such as conducting legal research, drafting contracts, and producing summaries of legislation, their output often includes errors, so legal leaders must ensure the output is reviewed for accuracy.

Legal leaders should develop an internal pilot program to test GenAI automation or augmentation for low-risk repetitive, time-consuming tasks that involve production of written deliverables. They should also compare pilot outcomes on time spent and output quality versus conventionally produced outcomes.
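The pilot comparison described above can be sketched as a simple percent-change calculation per metric. The metric names and sample figures below are hypothetical placeholders, not survey data:

```python
def compare_pilot(pilot: dict, baseline: dict) -> dict:
    """Percent change of each pilot metric vs the conventional process.

    Negative values mean the pilot reduced the metric (e.g. less time);
    positive values mean it increased it (e.g. more errors).
    """
    return {m: round(100 * (pilot[m] - baseline[m]) / baseline[m], 1)
            for m in baseline}

# Hypothetical figures for a legislation-summary pilot.
baseline = {"minutes_per_summary": 90, "errors_per_doc": 1.0}
pilot    = {"minutes_per_summary": 30, "errors_per_doc": 1.5}
print(compare_pilot(pilot, baseline))
# {'minutes_per_summary': -66.7, 'errors_per_doc': 50.0}
```

Tracking both dimensions side by side keeps the review-for-accuracy requirement visible: a large time saving can be offset by a rise in errors that human reviewers must then catch.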

“Given GenAI’s ease of use and flexibility of application for enterprises, it’s no surprise that rapid GenAI adoption is the most referenced risk for legal leaders this year. However, legal leaders should not simply react by instituting draconian policies that restrict its use,” said Strome. “That approach will likely impact business competitiveness and encourage employees to illicitly use GenAI tools on their personal devices. Progressive legal leaders accept that GenAI can drive value, and they are working with others in their organization to develop governance and policies that nudge employees and business partners toward high-benefit, low-risk use cases.”
