Analysts to Discuss Generative AI Trends and Technologies at Gartner IT Symposium/Xpo 2023, October 16-19 in Orlando
“Generative AI has become a top priority for the C-suite and has sparked tremendous innovation in new tools beyond foundation models,” said Arun Chandrasekaran, Distinguished VP Analyst at Gartner. “Demand is increasing for generative AI in many industries, such as healthcare, life sciences, legal, financial services and the public sector.”
The 2023 Gartner Hype Cycle for Generative AI identified key technologies that are increasingly embedded into many enterprise applications. Specifically, three innovations that are projected to have a huge impact on organizations within ten years include GenAI-enabled applications, foundation models and AI trust, risk and security management (AI TRiSM) (see Figure 1).
Figure 1: Hype Cycle for Generative AI, 2023
GenAI-enabled applications use GenAI for user experience (UX) and task augmentation to accelerate and assist the completion of a user’s desired outcomes. As applications become GenAI-enabled, these capabilities will permeate a wide spectrum of skill sets within the workforce.
“The most common pattern for GenAI-embedded capabilities today is text-to-X, which democratizes access for workers to what used to be specialized tasks via prompt engineering using natural language,” said Chandrasekaran. “However, these applications still present obstacles such as hallucinations and inaccuracy that may limit widespread impact and adoption.”
Foundation Models
“Foundation models are an important step forward for AI due to their massive pretraining and wide use-case applicability,” said Chandrasekaran. “Foundation models will advance digital transformation within the enterprise by improving workforce productivity, automating and enhancing customer experience and enabling cost-effective creation of new products and services.”
Foundation models are on the Peak of Inflated Expectations on the Hype Cycle. Gartner predicts that by 2027, foundation models will underpin 60% of natural language processing (NLP) use cases, which is a major increase from fewer than 5% in 2021.
“Technology leaders should start with models with high accuracy in performance leaderboards, ones that have superior ecosystem support and have adequate enterprise guardrails around security and privacy,” said Chandrasekaran.
AI Trust, Risk and Security Management (AI TRiSM)
AI TRiSM ensures AI model governance, trustworthiness, fairness, reliability, robustness, efficacy and data protection. AI TRiSM includes solutions and techniques for model interpretability and explainability, data and content anomaly detection, AI data protection, model operations and adversarial attack resistance.
“Organizations that do not consistently manage AI risks are exponentially more likely to experience adverse outcomes, such as project failures and breaches. Inaccurate, unethical or unintended AI outcomes, process errors and interference from malicious actors can result in security failures, financial and reputational loss or liability, and social harm,” said Chandrasekaran.
AI TRiSM is an important framework for delivering responsible AI and is expected to reach mainstream adoption within two to five years. Gartner predicts that by 2026, organizations that operationalize AI transparency, trust and security will see their AI models achieve a 50% improvement in terms of adoption, business goals and user acceptance.