As enterprises increasingly adopt AI, their metrics for measuring success differ between generative and predictive (non-generative) AI according to use cases tracked in Omdia’s latest AI Business Performance Metrics Database. Compared with enterprises adopting traditional predictive AI, those deploying generative AI (GenAI) are more focused on enhancing productivity, ROI, and customer engagement, as revealed by the database of AI deployment case studies.
GenAI solutions began entering the market in late 2022. Now that enough time has elapsed since their adoption, vendors are increasingly releasing GenAI “customer success” studies featuring measured results (metrics or KPIs) for market promotion. In the latest update of the AI Business Performance Metrics Database, Omdia focused on compiling GenAI case studies, which now constitute 9% of the 700 records. Of the 67 new case studies, 52 focus on GenAI. This offers an initial insight into how enterprises are evaluating the success of their generative and predictive (non-generative) AI deployments.
Comparing generative and predictive AI within each metric, productivity stands out as the metric most favored in generative deployments, cited in 17% of GenAI cases, nearly double the 9% share among predictive case studies. Top productivity applications include automated code development and virtual assistants. ROI is also more valued in GenAI studies (9% vs 6% of cases), particularly for writing assistants. Engagement is another relatively favored metric, cited in 22% of generative cases versus 17% of predictive cases, with key use cases including virtual assistants and digital experience marketing.
Predictive AI case studies, by contrast, more commonly favor metrics such as revenue improvement, accuracy, and cost reduction.
“Recent case studies confirm that enterprises are indeed adopting GenAI and keeping a sharp eye on measuring how the technology is affecting their business outcomes,” said Neil Dunay, Omdia Principal Forecaster. “With significant investments being made in the technology, vendors and enterprises are eager to prove to customers and investors that GenAI is delivering on promised results. That may mean case studies of GenAI failures could go unreported.”
Originally published in 2021 and updated twice a year, Omdia’s AI Business Performance Metrics Database monitors case studies from AI vendors and end users to document KPIs for measuring AI business impact. This resource addresses which metrics are most valued by AI customers, how these metrics differ across industries and applications, and the methods used to quantify them. The database exclusively tracks case studies from completed projects that provide specific numerical metrics of their outcomes.