What Big Tech Gets Wrong About AI And How Turing Can Make It Right

“Well, we’re fooled by their fluency, right? We just assume that if a system is fluent in manipulating language, then it has all the characteristics of human intelligence. But that impression is false. We’re really fooled by it.”

The words come from Meta’s AI chief, Yann LeCun. In a nearly three-hour interview with Lex Fridman, he recently argued that large language models cannot have a deep understanding of the world.

A few weeks later, he told the Financial Times that he is working to develop an entirely new generation of AI systems that he hopes will power machines with human-level intelligence within the next 10 years.

Big Questions For Big Tech

LeCun is not alone in making predictions about AI. A few months ago Ray Kurzweil reminded us of his 1999 prediction that AI will achieve human-level intelligence by 2029. And Demis Hassabis, CEO and co-founder of Google DeepMind, predicts that human-level AI may be just a few years away.

But at a time when existing AI is fooling us into believing it has characteristics it doesn’t have, are when and how AI will reach human-level intelligence really the right questions to ask? If the LLM-driven AI we already have is fooling us with its fluency, shouldn’t we expect the future AI that LeCun and others are working on to fool us even more, giving us even more false impressions of what it is capable of doing?

“Too meaningless to deserve discussion”

When Fridman asked LeCun what he thought Alan Turing would say if he had the chance to hang out with today’s LLM-powered chatbots, LeCun promptly replied, “Alan Turing would say that a Turing test is a really bad test.”

As a 2018 Turing Award recipient, LeCun seems like the right person to ask what Turing would think about today’s AI. But Turing’s own words suggest that LeCun and his AI colleagues at Meta, DeepMind, OpenAI and the rest of the world’s big tech companies have misunderstood the purpose of the Turing test.

According to his seminal 1950 paper, “Computing Machinery and Intelligence”, Turing never intended to build a machine that could think like a human. In fact, he made it clear that he considered the question of whether machines can think “too meaningless to deserve discussion.” Instead, he asked what it would take for a machine to play the so-called imitation game as convincingly as a human.

How AI Became A Fool’s Game

As the name suggests, the imitation game is about pretending to be something or someone that you are not. To play the game convincingly, you must convince — or fool — others into believing you are whatever or whoever you pretend to be.

In the machine’s case, the imitation game is about pretending and fooling the other players into believing that it is a human being. So, when LeCun says we are fooled by the fluency of LLMs, he isn’t proving that the Turing test is “a really bad test.” Rather, he is rediscovering a fundamental, yet long-forgotten truth about AI, which is that it always was and always will be a game of fooling versus avoiding being fooled.

The purpose of the Turing test was never to build machines with human-level intelligence. It was to build machines capable of convincing humans that they are interacting with something that is — or is on the verge of becoming — as intelligent as themselves. This is exactly what ChatGPT and other large language models have succeeded in doing, and so LLMs have not only passed the Turing test with flying colors, as Fridman puts it; they have also turned us into fools.

Turing Would Say The Test Is On Us

Rather than saying the Turing test is a really bad test, hanging out with ChatGPT and the like would probably make Turing reconsider the implications of AI.

Although he made a prediction himself, saying that “at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted,” he would probably rather see his successors contradicted by their fellow humans than fooled by their own inventions.

Regardless of when and how tech experts predict that AI will reach human-level intelligence, they all seem to miss the fundamental premise Turing had for building intelligent machines: comparing them with humans is meaningless.

And so, Turing would probably say that the difference between his 1950 child-machine (his word) and the AI we know today is that the test is no longer on the machines, but on us: Do we have what it takes to avoid being fooled?

How To Win The Imitation Game

While political and corporate leaders are drowning in questions about what will happen to us humans as AI becomes faster and better at everything we do, Turing did not see AI as a threat to human roles and responsibilities. If anything, he seemed to think that every time AI solves one task, we humans must solve two.

When the Turing test is no longer on the machines, but on us, the question is not what it takes for a machine to play the imitation game as convincingly as a human, but what it takes for us humans not to be fooled by the machine.

In his initial description of the imitation game, Turing introduced three players:

  • Player A, a man whose job it is to pretend he is a woman
  • Player B, a woman whose job it is to tell the truth and help player C discover it, and
  • Player C, someone whose job it is to question players A and B in order to determine which is the man and which is the woman

Turing proposed replacing only player A — the one player already pretending to be someone he is not — with a machine. In fact, he never even considered replacing player B or player C.

This makes it easy to answer the question of what it takes for us humans to win the imitation game. We must: 1) tell and help each other discover the truth (like player B), and 2) question everything we hear and see in order to determine what information can and cannot be trusted (like player C).

These basic human characteristics are rarely discussed by Big Tech. But maybe they will be in the future?
