ABBYY Survey Reveals FOMO Drives AI Adoption in 60% of Businesses, but Raises Trust Issues

  • Fear of Missing Out (FOMO) a key driver for AI uptake – even as trust in AI is high
  • Trust in AI is highest in the US at 87%, while France lags at 77%
  • Purpose-built AI considered the most trustworthy type of AI at 90%

Fear of missing out, or “FOMO,” has been found to play a significant role in businesses’ investment in artificial intelligence, even as IT leaders continue to place high trust in AI to benefit their business. Their concerns, goals and plans for AI investment vary globally, revealing distinct priorities that reflect the business landscapes of their respective regions. (Graphic: Business Wire)

With fears of being left behind so prevalent, it is no surprise that IT decision makers from the US, UK, France, Germany, Singapore, and Australia reported average AI investment exceeding $879,000 over the last year, despite a third (33%) of business leaders voicing concerns about implementation costs. Nearly all (96%) respondents to the ABBYY State of Intelligent Automation Report: AI Trust Barometer also said they plan to increase AI investment in the next year, even though Gartner predicts that by 2025, growth in 90% of enterprise GenAI deployments will slow as costs exceed value.

Furthermore, over half (55%) of business leaders admitted that another key driver of AI adoption was pressure from customers.

Surprisingly, the survey revealed that another fear among IT leaders implementing AI was misuse by their own staff (35%). This concern ranked ahead of implementation costs (33%), AI hallucinations and lack of expertise (both 32%), and even compliance risk (29%).

Overall, respondents reported an overwhelmingly high level of trust in AI tools (84%). Decision makers rated small language models (SLMs), or purpose-built AI, as the most trustworthy (90%). More than half (54%) said they were already using purpose-built AI tools such as intelligent document processing (IDP).

Maxime Vermeir, Senior Director of AI Strategy at ABBYY, commented, “It’s no surprise to me that organizations have more trust in small language models due to the tendency of LLMs to hallucinate and provide inaccurate and possibly harmful outcomes. We’re seeing more business leaders moving to SLMs to better address their specific business needs, enabling more trustworthy results.”

When asked about trust and the ethical use of AI, an overwhelming majority (91%) of respondents said they are confident their company is following all government regulations. Yet only 56% said they have their own trustworthy AI policies, while 43% are seeking guidance from a consultant or non-profit. Half (50%) said they would feel more confident knowing their company had a responsible AI policy, while having software tools that can detect and monitor AI compliance was also cited as a confidence booster (48%).

On a regional basis, levels of trust were highest among US respondents, with 87% saying they trust AI; Singapore came next at 86%, followed by the UK and Australia (both 85%), then Germany at 83%. France lagged, with just 77% of respondents indicating they trust AI.
