Since its release in November 2022, ChatGPT has been hailed as one of the most impressive and successful consumer software applications in history. Its release sparked a new leg of the AI arms race, inciting panic and hasty actions from Silicon Valley’s heaviest hitters, Meta, Google, and Amazon.
But as the hype around ChatGPT and other large language models dies down, AI companies are setting their sights on an intelligent agent that surpasses today's AI models: artificial general intelligence (AGI).
What is artificial general intelligence?
AGI is a hypothetical intelligent agent that can match human intellectual capabilities. It could reason, strategize, plan, use judgment and common sense, and detect and respond to hazards or dangers.
This type of artificial intelligence is much more capable than the AI that powers the cameras in our smartphones, drives autonomous vehicles, or completes the complex tasks we see performed by ChatGPT.
What’s the difference between AI and AGI?
Drew Sonden, product lead EMEA at SS&C Blue Prism, says AI chatbots like ChatGPT and Google Bard are considered narrow AI. Narrow AI is classified as an AI that uses algorithms to complete a task without learning anything from that task or applying knowledge to another task.
“A true AGI would be able to undertake conversational engagements, carry out detailed planning activities, deliver mathematical insights, create novel artworks — in theory, undertake any task,” Sonden says.
AGI could change our world, advance our society, and solve many of the complex problems humanity faces whose solutions remain beyond human reach. It could even identify problems humans don't yet know exist.
“If implemented with a view to our greatest challenges, [AGI] can bring pivotal advances in healthcare, improvements to how we address climate change, and developments in education,” says Chris Lloyd-Jones, head of open innovation at Avanade.
To address a problem like climate change, AGI could access the internet and digest all the existing research on the subject. Then it could propose solutions while assessing every possible outcome. AGI could operate as a stream of consciousness more capable than a human's, without ever needing to rest or pause to absorb new information.
However, AGI remains a distant goal. Expert opinions vary: some say it could arrive within three years, while others say it could take decades.
AI is an exciting area of innovation, and where we are today once seemed light years away. But the journey to AGI didn't start with ChatGPT. The race to AGI is an expansive, worldwide effort involving researchers, engineers, big thinkers, and governments working together.
Is AGI a threat to humanity?
An agent with intelligence comparable to or greater than a human's could bring innovations to society previously visualized only in science fiction. But behind every transformative technology lies the potential for harm.
AGI carries considerable risks, and experts have warned that advancements in AI could cause significant disruptions to humankind. But expert opinions vary on quantifying the risks AGI could pose to society.
Most experts agree that AGI alone is not necessarily a risk to humanity; rather, the hands that puppeteer the intelligent agent will dictate how helpful or harmful the technology becomes.
Aaron McClendon, head of AI at Aimpoint Digital, stressed the importance of dispersing access to generally intelligent agents and ensuring that no single group or country monopolizes AGI.
“If AGI results in a significant increase in wealth and productivity, those benefits should be shared broadly rather than concentrated in the hands of a few,” he says.
Nigel Cannings, CTO and founder of Intelligent Voice, shares a similar sentiment: a generally intelligent agent would learn morals, reasoning, and judgment from the humans teaching it, and the consequences could be grave if it were pointed in the wrong direction.
“…[I]t is the humans operating machines that have great power that is far more worrying. It is likely that any machine that achieves some level of sentience or intelligence will only become evil if pushed in that direction by its human masters,” Cannings says. “Left to its own devices, it is probably more likely to be fair and impartial in its approach to humanity.”
How close are we to achieving AGI?
According to Wei Xu, an assistant professor at the Georgia Institute of Technology's College of Computing, we can trace the quest to achieve AGI back to the advent of computer systems in the 1950s.
Xu cites the Turing test, proposed by Alan Turing in 1950, in which a human and a machine each hold a text-based conversation evaluated by a judge. If the judge cannot distinguish which conversationalist is the human and which is the machine, the machine passes the Turing test.
Xu also mentions the General Problem Solver, a computer program created by two computer scientists in 1956. Its creators believed that teaching a machine to use symbols to connect with the world around it would eventually lead to general intelligence.
Both the Turing test and the General Problem Solver were early attempts to create and test a machine's ability to act intelligently. They led to where we are today: the success of language models like ChatGPT and GPT-4 is the culmination of decades of research and a testament to the spirit of human innovation.
Xu refers to AGI as “the holy grail of computer science,” as it consolidates many technologies to create a general, unified intelligent being that can complete tasks and solve problems. Applications like ChatGPT and DALL-E can produce text output and generate artwork independently, but AGI could do both — and much more.
“It is more efficient, more convenient, and more impressive to have one single product with one user interface that can do more things all at once,” she says.
Experts are hesitant to say exactly how many years away AGI might be, but Lloyd-Jones has an estimate. He says that although language models like ChatGPT are impressive and revolutionary, they cannot create new ideas, though technology that can is on the horizon.
“AI models are only as good as the data set used to train them, and they are unable to create new styles of art, or poetry, for example. We do not yet have a perfect data set for this purpose,” he says. “Personally, with the pace of development, I think we’ll be 80% there in five years.”
But Cannings says we are still far from achieving AGI and can't know precisely when it might happen. He says that AI chatbots showcase advancements in engineering but still lack essential characteristics of human intelligence.
Cannings emphasizes the difficulty of creating an intelligent agent that can emote, empathize, and sense — cornerstones of the human experience. He believes that achieving AGI goes beyond engineering and will require engagement with philosophical and psychological definitions of intelligence.
“While progress in AI is remarkable and continues to evolve, achieving AGI, encompassing the full range of human cognitive abilities, appears to be a challenge that will require significant advancements in both technical and philosophical domains,” he says.
How AGI should be regulated
Many countries are investing in AI research in hopes of being the first country to report a breakthrough in achieving AGI. McClendon notes that although the US and China are at the forefront, Canada, the UK, France, and Germany are making strides in the race.
“It’s important to note that achieving AGI will be a significant scientific and technological milestone, and its impact will be global, irrespective of where it’s developed,” he says.
What matters is not just where AI is researched, but the laws of that country and the geopolitical ends to which its government could turn AGI's power. A country's philosophies about regulation and technological innovation shape how quickly and widely its technology is adopted.
Bryan Cole, director of customer engineering at Tricentis, says companies like OpenAI, Google, and Microsoft may disclose their progress toward AGI, but governments and nation-states may be less open.
The secrecy stems from the stakes of global dominance and influence: whoever achieves AGI first could become more powerful without tipping off adversaries to their next step.
“Major nation states like China and the US are pouring enormous resources into this because whoever gets an AGI system first will likely have the ability to prevent any other AGI system from coming into existence through technological dominion [or control],” he says.
But until countries reach AGI, AI companies, researchers, and lawmakers must collaborate to create legal safeguards for citizens.
Countries in the EU will soon have to adhere to the three risk categories outlined in the EU AI Act, which will ensure that AI is regulated by people — not by automated systems — to eliminate the possibility of harmful outcomes. China’s regulations require AI systems to support and align with the country’s political values, while the US has no legal framework at the federal level to regulate AI.
Michael Queenan, founder and CEO of Nephos Technologies, says Western governments should not willfully ignore the vital importance — and possible dangers — of AI, or they risk losing the race.
“AI technology is evolving at breakneck speed, far faster than regulators are, and we need to act fast,” he says. “We are facing a tsunami of AI and have no plan for it. The West is at risk of being left behind; however, regulation is critical in deciding what and how we should be using it.”
As AI becomes more advanced and its applications span different facets of life, it becomes increasingly difficult for lawmakers to create laws that clearly define the risks and how to address them.
Sarah Pearce, partner at the law firm Hunton Andrews Kurth, says lawmakers shouldn't spend too much time settling on an exact definition of AI and should instead focus their efforts on regulating the technology's output.
“I think lawmakers would be better focusing on the outputs and uses of the technology when trying to legislate around it rather than trying to settle on an overly broad definition of what it is, as it is likely that any definition will be outdated by the time the legislation comes into force,” she says.
For countries that don’t have formal federal legislation surrounding AI, Pearce says governments and AI companies should focus on protecting user data. AI companies collect and use significant amounts of data to train AI models.
“Companies will inevitably be accused of taking their collection activities towards the excessive and may be asked to explain whether and why they are retaining the data for longer than may be perceived necessary,” Pearce says. “Often, this is to help improve and further replicate algorithms — it is not necessarily being used for additional commercial gain.”
AGI, a technology dreamed of since the dawn of computing and depicted in movies like Spike Jonze’s “Her,” Jon Favreau’s “Iron Man,” and Stanley Kubrick’s “2001: A Space Odyssey,” is the lifeblood of AI research, and no one can stop the effort to achieve it.
In our favorite movies, AGI is a helpful sidekick, a loving companion, or a villain that recognizes humans as a threat to themselves and decides to exterminate humankind. Which version will we see? Experiencing a generally intelligent agent is no longer a matter of if — but when.
Will we be ready?