Generative AI was a groundbreaking development last year, with the emergence of popular chatbots such as OpenAI’s ChatGPT and Google’s Bard taking the technology world by storm. However, the large language models (LLMs) powering these chatbots come with several disadvantages that hinder adoption of the technology. For this reason, LLMs will be gradually replaced by smaller, custom language models in 2024, says GlobalData, a leading data and analytics company.
Training and operating LLMs involve steep costs, with expensive computing resources required to process the vast amounts of data used in AI, and specialized models ultimately deliver better value and accuracy. Oftentimes, firms start by experimenting with very large general-purpose models to explore different use cases, only to find later that the compute cost outweighs the value of the transactions involved. In 2024, as generative AI penetrates the enterprise space and business cases become clearer, companies will start leveraging smaller models, tuning them with their proprietary data to get the performance they need for a specific use case at a fraction of the cost.
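To illustrate the kind of customization GlobalData describes, the sketch below adapts a small open model to a company’s own text using low-rank adapters (LoRA), so that only a small fraction of the weights are updated. The base model name, the proprietary.jsonl data file, and the training settings are illustrative assumptions, not details from the report.

```python
# Hedged sketch: adapting a small open language model to proprietary text with LoRA.
# The base model, data file, and hyperparameters below are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "microsoft/phi-2"  # any compact causal LM could stand in here
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Wrap the base model with low-rank adapters so only a small set of weights is trained.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type=TaskType.CAUSAL_LM))

# "proprietary.jsonl" is a placeholder for the company's own text data ({"text": ...} records).
data = load_dataset("json", data_files="proprietary.jsonl")["train"]
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-finetuned", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Because the full model weights stay frozen in this approach, the compute bill stays far below that of training or fully fine-tuning a large general-purpose model.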
Beatriz Valle, Senior Analyst at GlobalData, comments: “Enterprises could potentially find themselves in a vulnerable legal position due to potential copyright infringements or privacy violations, for example, because the origin of the data used to train the models is often unknown. For this reason, organizations will opt for deploying small language models (SLMs) instead. These custom models will be trained on proprietary data and render better results with fewer risks involved.”
Rena Bhattacharyya, Chief Analyst of Enterprise Technology and Services research at GlobalData, adds: “2024 will see the passage of comprehensive regulatory policies to guide AI project deployments and management. The world will look toward Europe, which is poised to approve groundbreaking legislation, the AI Act, which categorizes AI applications by risk, bans certain use cases, establishes requirements for the use of high-risk applications, and requires human oversight of computer models and actions. The rest of the world will begin to discuss similar frameworks, and the debate around the use of copyrighted content to train AI models will be top of mind for executives.”
GlobalData’s report, “2024 Enterprise Predictions: Artificial Intelligence,” also revealed that enterprises will begin to explore how they can use multimodal AI to generate improved outputs across a range of industries and applications. In addition, techniques such as retrieval-augmented generation (RAG), which augments LLM prompts and responses with information from reliable internal or external sources, will become more widespread. The use of synthetic data will also become more common among enterprises and AI companies.
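To make the RAG pattern concrete, here is a minimal sketch: the documents most relevant to a query are retrieved from an internal store and prepended to the prompt before it is sent to a language model. The toy document store, the retrieve and build_prompt helpers, and the TF-IDF retriever are illustrative assumptions; production systems typically use embedding-based vector search, but the augmentation step works the same way.

```python
# Hedged sketch of retrieval-augmented generation (RAG): retrieve relevant internal
# documents for a query, then prepend them to the LLM prompt as grounding context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [  # placeholder for an enterprise knowledge base
    "Our 2023 warranty policy covers hardware defects for 24 months.",
    "Support tickets are triaged within four business hours.",
    "The enterprise plan includes dedicated onboarding and single sign-on.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(query: str) -> str:
    """Augment the user question with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long is the warranty?"))  # this prompt would then go to the chosen model
```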
Valle continues: “Companies such as OpenAI and Cohere will be studying the use of synthetic data as an alternative to real-world data to access high-quality data sets without taking the risk of being sued. For example, The New York Times recently filed a lawsuit against OpenAI, and other similar cases have taken place in the industry. We will also see more agreements and partnerships between AI companies and media companies, like OpenAI’s recently signed deals with German media company Axel Springer and with the Associated Press.”
GlobalData forecasts that the overall AI market will be worth $909 billion by 2030, registering a compound annual growth rate (CAGR) of 35% between 2022 and 2030. In the generative AI space, revenues are expected to grow from $1.8 billion in 2022 to $33 billion in 2027 at a CAGR of 80%.
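As a quick arithmetic check on the generative AI figures (the release does not give a 2022 base for the overall AI market, so only the generative AI numbers can be verified here), the compound annual growth rate is the constant yearly rate that carries a starting value to an ending value over n years:

\[
\text{CAGR} = \left(\frac{V_{\text{end}}}{V_{\text{start}}}\right)^{1/n} - 1,
\qquad
\left(\frac{\$33\,\text{bn}}{\$1.8\,\text{bn}}\right)^{1/5} - 1 \approx 0.79,
\]

which is consistent with the roughly 80% CAGR quoted for 2022 to 2027.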
Generative AI will impact every industry, and 2024 will see the number of live AI implementations grow exponentially in the corporate space, particularly in the fields of customer experience and marketing.