- Fear of Missing Out (FOMO) is a key driver of AI uptake, even as trust in AI remains high
- Trust in AI is highest in the US at 87%, while France lags at 77%
- Purpose-built AI considered the most trustworthy type of AI at 90%
MILPITAS, Calif., August 28, 2024 – (BUSINESS WIRE) – A new survey from intelligent automation company ABBYY finds that fear of missing out (FOMO) is a major factor in artificial intelligence (AI) investment, with 63% of global IT leaders reporting they are worried their company will be left behind if they don’t use it.
With fears of being left behind so prevalent, it is no surprise that IT decision makers from the US, UK, France, Germany, Singapore, and Australia reported that average investment in AI exceeded $879,000 in the last year despite a third (33%) of business leaders having concerns about implementation costs. Almost all (96%) respondents in the ABBYY State of Intelligent Automation Report: AI Trust Barometer said they also plan to increase investment in AI in the next year, although Gartner predicts that by 2025, growth in 90% of enterprise deployments of GenAI will slow as costs exceed value.
Furthermore, over half (55%) of business leaders admitted that another key driver for use of AI was pressure from customers.
Surprisingly, the survey revealed another fear for IT leaders implementing AI was misuse by their own staff (35%). This came ahead of concerns about costs (33%), AI hallucinations and lack of expertise (both 32%), and even compliance risk (29%).
Overall, respondents reported an overwhelmingly high level of trust in AI tools (84%). The most trustworthy according to decision makers were small language models (SLMs) or purpose-built AI (90%). More than half (54%) said they were already using purpose-built AI tools, such as intelligent document processing (IDP).
Maxime Vermeir, Senior Director of AI Strategy at ABBYY, commented, “It’s no surprise to me that organizations have more trust in small language models due to the tendency of LLMs to hallucinate and provide inaccurate and possibly harmful outcomes. We’re seeing more business leaders moving to SLMs to better address their specific business needs, enabling more trustworthy results.”
When asked about trust and ethical use of AI, an overwhelming majority (91%) of respondents said they are confident their company is following all government regulations. Yet only 56% reported having their own trustworthy AI policies, while 43% said they are seeking guidance from a consultant or non-profit. Half (50%) said they would feel more confident knowing their company had a responsible AI policy, while having software tools that can detect and monitor AI compliance was also cited as a confidence booster (48%).
On a regional basis, levels of trust were highest among US respondents, with 87% saying they trust AI; Singapore came next at 86% followed by the UK and Australia, both 85%, then Germany at 83%. Lagging was France, with just 77% of respondents indicating they trust AI.