Explained: Microsoft-Backed OpenAI’s New AI Model GPT-4, Its Capabilities and Limitations

Microsoft-backed startup OpenAI began the rollout of GPT-4, a powerful artificial intelligence model that succeeds the technology behind the wildly popular ChatGPT.

GPT-4 is “multimodal”, which means it can generate content from both image and text prompts.

What is the difference between GPT-4 and GPT-3.5?

GPT-3.5 takes only text prompts, whereas the latest version of the large language model can also use images as inputs to recognize objects in a picture and analyze them.

GPT-3.5 is limited to about 3,000-word responses, while GPT-4 can generate responses of more than 25,000 words.

GPT-4 is 82 percent less likely to respond to requests for disallowed content than its predecessor and scores 40 percent higher on certain tests of factuality.

It will also let developers decide their AI’s tone and verbosity. For example, GPT-4 can assume a Socratic style of conversation and respond to questions with questions. The previous iteration of the technology had a fixed tone and style.
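
In practice, developers steer this behaviour through a “system” message sent alongside the user’s prompt. The snippet below is a minimal illustrative sketch only: it assumes the OpenAI Python client’s chat-completions interface and API access to a model named “gpt-4”, and the Socratic instruction is simply an example of the kind of style a developer might set.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The "system" message is where a developer pins down tone and verbosity.
response = client.chat.completions.create(
    model="gpt-4",  # model name assumed; access may require joining the API waitlist
    messages=[
        {
            "role": "system",
            "content": (
                "You are a Socratic tutor. Never give direct answers; "
                "respond to every question with a guiding question, "
                "and keep replies under three sentences."
            ),
        },
        {"role": "user", "content": "Why does the moon have phases?"},
    ],
)

print(response.choices[0].message.content)
```

Swapping out the system message is all it takes to change the assistant’s persona, which is the flexibility OpenAI says earlier versions lacked.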

Soon ChatGPT users will have the option to change the chatbot’s tone and style of responses, OpenAI said. 

What are the capabilities of GPT-4? 

The latest version has outperformed its predecessor on the US bar exam and the Graduate Record Examination (GRE). GPT-4 can also help individuals calculate their taxes, a demonstration by Greg Brockman, OpenAI’s president, showed.

The demo showed it could take a photo of a hand-drawn mock-up for a simple website and create a real one.

Be My Eyes, an app that caters to visually impaired people, will offer a virtual volunteer tool powered by GPT-4.

What are the limitations of GPT-4? 

According to OpenAI, GPT-4 has limitations similar to those of its prior versions and is “less capable than humans in many real-world scenarios”.

Inaccurate responses known as “hallucinations” have been a challenge for many AI programs, including GPT-4.

OpenAI said GPT-4 can rival human propagandists in many domains, especially when teamed up with a human editor.

It cited an example in which GPT-4 produced seemingly plausible suggestions when asked how to get two parties to disagree with each other.

OpenAI Chief Executive Officer Sam Altman said GPT-4 was the company’s “most capable and aligned” model in terms of human values and intent, though “it is still flawed.”

GPT-4 generally lacks knowledge of events that occurred after September 2021, when the vast majority of its data was cut off. It also does not learn from experience. 

Who has access to GPT-4?

Although GPT-4 can process both text and image inputs, only the text-input feature is available for now, to ChatGPT Plus subscribers and, via a waitlist, to software developers; the image-input ability is not yet publicly available.

The subscription plan, which offers faster response time and priority access to new features and improvements, was launched in February and costs $20 per month.

GPT-4 powers Microsoft’s Bing AI chatbot and some features on language learning platform Duolingo’s subscription tier.

© Thomson Reuters 2023