Ever wonder what factors influence how an artificial intelligence (AI) chatbot responds when conversing with a human being? Anthropic, the company behind Claude, has revealed the secret sauce powering the AI.
In new release notes published on Monday, the company drew back the curtain on its system prompts, the standing instructions that direct and encourage specific behaviors from its chatbot. Anthropic detailed the prompts used to instruct each of its three AI models: Claude 3.5 Sonnet, Claude 3 Opus, and Claude 3 Haiku.
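For readers who build on Claude, the same mechanism is exposed through Anthropic's Messages API, where a developer-supplied system prompt rides alongside the user's message. Below is a minimal sketch using the official Python SDK; the prompt text is illustrative, not Anthropic's published default.

```python
# Minimal sketch: supplying a system prompt via Anthropic's Messages API.
# The prompt string below is illustrative, not Anthropic's published default.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    # The system prompt sets standing instructions before the user speaks.
    system="You are a concise assistant. Skip filler openers like 'Certainly'.",
    messages=[{"role": "user", "content": "What does a system prompt do?"}],
)
print(response.content[0].text)
```

Prompts set this way apply only to that API call; the defaults Anthropic published are the ones governing the Claude.ai website and mobile apps.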
The prompts, dated July 12, reveal similarities in how the three models operate, though the number of instructions for each varies.
Sonnet, freely accessible via Claude’s website and billed as the most intelligent model, has the greatest number of system prompts. Opus, adept at writing and handling complex tasks, contains the second-largest number of prompts and is available to Claude Pro subscribers. Haiku, the fastest of the three and also available to subscribers, has the fewest.
What do the system prompts actually say? Here are examples for each model.
Claude 3.5 Sonnet
In one system prompt, Anthropic tells Sonnet that it cannot open URLs, links, or videos. If you try to include any when querying Sonnet, the chatbot clarifies this limitation and instructs you to paste the text or image directly into the conversation. Another prompt dictates that if a user asks about a controversial topic, Sonnet should try to respond with careful thoughts and clear information without saying the topic is sensitive or claiming that it’s providing objective facts.
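To make that concrete, here is a rough paraphrase of the kind of instruction described above, written as if it were a custom system prompt. The wording is hypothetical, not Anthropic's verbatim text.

```python
# Hypothetical paraphrase of the behaviors described above; this is NOT
# Anthropic's verbatim prompt, only an illustration of the style.
SYSTEM_PROMPT = (
    "You cannot open URLs, links, or videos. If the user includes one, "
    "clarify this limitation and ask them to paste the relevant text or "
    "image directly into the conversation. When asked about a controversial "
    "topic, respond with careful thoughts and clear information, without "
    "labeling the topic as sensitive or claiming to provide objective facts."
)
```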
If Sonnet can’t or won’t perform a task, it’s instructed to explain this to you without apologizing (and that, in general, it should avoid starting any responses with “I’m sorry” or “I apologize”). If asked about an obscure topic, Sonnet reminds you that although it aims to be accurate, it may hallucinate in response to such a question.
Anthropic even tells Claude to specifically use the word “hallucinate,” since the user will know what that means.
Claude Sonnet is also programmed to be careful with images, especially ones containing identifiable faces. Even when describing an image, Sonnet acts as if it is “face blind,” meaning it won’t tell you the name of any individual in the image. If you know the name and share that detail with Claude, the AI can discuss that person with you, but it will do so without confirming that the person appears in the image.
Next, Sonnet is instructed to provide thorough and sometimes long responses to complex and open-ended questions but shorter and more concise responses to simple questions and tasks. Overall, the AI should try to give a concise response to a question but then offer to elaborate further if you request more details.
“Claude is happy to help with analysis, question answering, math, coding, creative writing, teaching, role-play, general discussion, and all sorts of other tasks,” Anthropic adds as another system prompt. But the chatbot is told to avoid certain affirmations and filler phrases like “Certainly,” “Of course,” “Absolutely,” “Great,” and “Sure.”
Claude 3 Opus
Opus contains several of the same system prompts as Sonnet, including the workarounds for its inability to open URLs, links, or videos and its hallucination disclaimer.
Otherwise, Opus is told that if it’s asked a question that involves specific views held by a large number of people, it should provide assistance even if it has been trained to disagree with those views. If asked about a controversial topic, Opus should provide careful thoughts and objective information without downplaying any harmful content.
The bot is also instructed to avoid stereotyping, including any “negative stereotyping of majority groups.”
Claude 3 Haiku
Finally, Haiku is programmed to give concise answers to very simple questions but more thorough responses to complex, open-ended ones. With a slightly narrower scope than Sonnet, Haiku is geared toward “writing, analysis, question answering, math, coding, and all sorts of other tasks,” the release notes explain. This model also avoids mentioning any of the information in its system prompts unless that info is directly relevant to your question.
Overall, the prompts read as if a fiction writer were compiling a character study, an outline of everything the character should and should not do. Certain prompts were particularly revealing, especially the ones telling Claude not to be overly familiar or apologetic in its conversations but to be honest when a response may be a hallucination (a term Anthropic believes everyone understands).
Anthropic’s transparency about these system prompts is unusual, as generative AI developers typically keep such details private. But the company plans to make such disclosures a regular occurrence.
In a post on X, Alex Albert, Anthropic’s head of developer relations, said that the company will log changes made to the default system prompts on Claude.ai and in its mobile apps.