Analysts Will Discuss New AI Threats and Risks at Gartner Security & Risk Management Summit 2023, September 26-28 in London, U.K.
The Gartner Peer Community survey was conducted from April 1 to April 7 among 150 IT and information security leaders at organizations where GenAI or foundational models are in use, in plans for use, or being explored.
Twenty-six percent of survey respondents said they are currently implementing or using privacy-enhancing technologies (PETs), followed by ModelOps (25%) and model monitoring (24%) (see Figure 1).
Figure 1. Organizations Using or Planning to Use Tools to Address Risks Related to Generative AI (Percentage of Respondents)
IT Is Ultimately Responsible for GenAI Security
While 93% of IT and security leaders surveyed said they are at least somewhat involved in their organization’s GenAI security and risk management efforts, only 24% said they own this responsibility.
Among the respondents who do not own the responsibility for GenAI security and/or risk management, 44% reported that ultimate responsibility for GenAI security rested with IT. For 20% of respondents, their organization's governance, risk, and compliance departments owned the responsibility.
Top-of-Mind Risks
The risks associated with GenAI are significant, continuous and constantly evolving. Survey respondents indicated that undesirable outputs and insecure code are among their top-of-mind risks when using GenAI:
- 58% of respondents are concerned about incorrect or biased outputs.
- 57% of respondents are concerned about leaked secrets in AI-generated code.
“Organizations that don’t manage AI risk will witness their models not performing as intended and, in the worst case, causing human or property damage,” said Litan. “This will result in security failures, financial and reputational loss, and harm to individuals from incorrect, manipulated, unethical or biased outcomes. AI malperformance can also cause organizations to make poor business decisions.”