
The five biggest mistakes people make when prompting an AI

Generative AI tools like ChatGPT, Gemini and Copilot can be powerful. Even though generative AI is a fairly new technology, a powerful limitation on its use dates back to the 1950s or earlier: GIGO. GIGO means “garbage in, garbage out.” If you ask AIs the wrong questions or don’t ask them correctly, you’re pretty much guaranteed to get unhelpful answers.

Also: 7 ways to write better ChatGPT prompts – and get the results you want faster

In this article, I’m going to show you what I consider the five biggest mistakes, but I’m only one source. So, in addition to my own answers, I took this question to the “people” who should know best about this particular topic, the AIs.

I’ve asked ChatGPT, Copilot, Grok, Gemini and Meta AI the same question: “What are the five biggest mistakes people make when prompting an AI?”

To make sure my thoughts weren’t influenced by those of the AIs, I started out with a list of what are, in my opinion, people’s five biggest mistakes and had already written my detailed descriptions before prompting the AIs for their opinions. The results (and especially the common themes) are very interesting. Here’s a table that aggregates all our answers.

[Table: the aggregated prompting-mistakes answers from the author and the five AIs. Credit: David Gewirtz/ZDNET]

Note that we all agree on the top answer: not being specific enough in your prompts. For more details, let’s dive deeper into what each of us considers the biggest mistakes you can make when prompting an AI. Let’s start with my five.

1. Not being specific enough

Neither AIs nor humans (with the possible exception of my wife) are mind readers. You may have a very clear picture of the problem you’re trying to solve, the constraints, things you’ve considered or tried and possible objections.

Also: 7 advanced ChatGPT prompt-writing tips you need to know

But unless you provide a very clear question, neither your human friends nor your AI assistants will be able to extract that picture from your head. When asking for help, be clear and be complete.

2. Not specifying how you want the response formatted

Do you want a list, a discussion or a table? Do you want a comparison between factors or do you want a deep dive into issues? This mistake happens when you ask a question but don’t give the AI guidance about how you want the answer to be presented.

This mistake isn’t just about style and punctuation — it’s about how the information is processed and refined for your eventual consumption. As with my first item on this list, be specific. Tell the AI what you’re looking for and what you need in order to accept an answer.

3. Not remembering to clear or start a new session

I have found that the AIs sometimes get confused when you use one session for multiple lines of inquiry. Sometimes they’ll assume that details you previously provided or questions you previously asked were also relevant to a later set of prompts.

To overcome this, I either start a completely new session or tell the AI to “clear session,” which tells the AI to consider whatever we’re about to discuss as a completely new discussion.
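If you drive a model through an API rather than a chat window, a “session” is simply the list of messages you send with each request, so starting fresh just means starting a new list. Here is a minimal sketch of that idea; the `ChatSession` class and its method names are my own illustration, not part of any real SDK:

```python
class ChatSession:
    """Tracks conversation history; hypothetical helper for illustration only."""

    def __init__(self):
        self.messages = []  # each entry: {"role": ..., "content": ...}

    def add_user_message(self, text):
        # Every prior entry in this list is context the model will see,
        # which is how earlier lines of inquiry bleed into later answers.
        self.messages.append({"role": "user", "content": text})

    def clear(self):
        # A "new session" is nothing more than an empty history:
        # the model sees none of the earlier discussion.
        self.messages = []


session = ChatSession()
session.add_user_message("Help me debug my Python script.")
session.add_user_message("Now, unrelated: plan a dinner party.")
# Without clearing, the debugging context would ride along with the
# dinner-party question. Clearing gives the new topic a clean slate.
session.clear()
session.add_user_message("Plan a dinner party for six.")
```

The point of the sketch: stale context isn’t magic, it’s just accumulated history, and discarding it is what a fresh session does for you.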

4. Not correcting, clarifying, or guiding the AI after an answer

As in a human conversation, I’ve found AIs often misunderstand the question or go down a line of inquiry not directly relevant to what I’m looking for.

Often, I have to cycle through the question: asking follow-ups, telling the AI when it got an answer wrong, correcting it, and guiding it toward what I’m looking for. If it’s on track but missing part of the answer, tell it. Tell it what it’s getting wrong and what it’s getting right, then continue down that path of inquiry.

5. Not knowing when to give up

Sometimes the problem isn’t something an AI can solve, or the AI can’t deliver the degree of quality you need. Fundamentally, an AI is a tool, and no single tool can do everything.

So, know when to hold ’em and know when to fold ’em. Know when it’s time to go back to a search engine, or look on forums or construct answers on your own. There is a point of diminishing returns and if you can identify that point, you can save time and frustration.

OK, so those are my five bits of wisdom. Let’s see what the AIs have to say.

ChatGPT’s advice

Here are the answers ChatGPT gave me.

  • Being too vague or unclear: Writing prompts that lack specificity, leading to incomplete or irrelevant responses.
  • Overloading the prompt: Including too much information or asking for multiple unrelated tasks in a single prompt.
  • Using incorrect or inconsistent terminology: Using terms or phrasing that could be interpreted differently or are inconsistent with the desired context.
  • Failing to specify the format or structure of the response: Not instructing the AI how the output should look (e.g., bullet points, paragraphs, summaries).
  • Assuming the AI knows everything about context: Omitting essential background information, assuming the AI already knows specific details.

Also: How I set ChatGPT as Siri’s backup – and what else it can do on my iPhone

ChatGPT matched my answers on two of its five: being too vague and failing to specify format. Let’s see what Copilot has to say.

Copilot’s advice

Copilot gave me its advice after I pasted in my prompt.

  • Lack of clarity: Vague prompts can lead to equally vague responses.
  • Overloading information: Giving too much information in one go can be overwhelming.
  • Ignoring context: Failing to provide context can result in responses that are off the mark.
  • Not being specific: Broad questions can yield broad answers.
  • Assuming AI knows everything: AIs don’t have personal experiences or the ability to understand extremely niche or highly specialized topics without sufficient context or background.

Interestingly, Copilot separated lack of clarity from not being specific. I consider those pretty much the same thing. But no matter how you list it, both Copilot and I agree you need to give it enough information to be able to do its job.

Also: How to use Microsoft’s Copilot AI on Linux

Both Copilot and ChatGPT use the same underlying LLM, so it’s interesting that their answers are somewhat different.

Grok’s grokkings

As with the other AIs, I asked Grok to tell me what humans are doing wrong. It had some answers.

  • Vague or ambiguous queries: Asking questions that are too broad or unclear.
  • Assuming human-like understanding: Expecting the AI to understand context or nuances in the same way a human might.
  • Overloading the prompt with information: Giving too much information or asking too many questions in one prompt.
  • Ignoring the AI’s capabilities and limitations: Asking the AI to perform tasks it’s not designed for, like real-time data analysis or personal advice based on emotions.
  • Not iterating or refining queries: Accepting the first response without considering if it fully meets the need or if it could be improved with more specific questioning.

As with the others, Grok’s first recommendation is to avoid vague queries. Interestingly, it provided two takes that could be considered similar to my suggestion of knowing when to give up.

Also: X’s Grok did surprisingly well in my AI coding tests

Assuming human-like understanding and ignoring the AI’s limits tie directly into the idea of knowing when not to use an AI. Grok and I both recommended iterating on queries to refine your answers step by step.

Gemini’s advice

Here’s what Gemini had to say.

  • Vague or ambiguous instructions: This is probably the most common mistake.
  • Lack of context: Similar to vagueness, not providing enough context leaves the AI guessing.
  • Overly complex language: While AI models are sophisticated, using overly complicated or convoluted language can confuse them.
  • Misalignment with AI capabilities: Expecting the AI to do things it’s not designed for is a common mistake.
  • Ignoring the importance of iteration: Prompt engineering is often an iterative process. You might not get the perfect response on your first try.

Also: Google TVs are getting a major Gemini upgrade in 2025

You gotta give Gemini points for alliteration on that last one. Seriously, though, we’re seeing a definite trend about unclear instructions. Gemini and Grok agree with Gewirtz (see what I did there?) and consider iteration to be important to get quality results.

Meta AI’s advice

Finally, I decided to see what Meta’s AI had to say about the subject. Here are its answers.

  • Lack of specificity: Vague prompts can confuse the AI, leading to irrelevant or inaccurate responses.
  • Insufficient context: AI models lack human experience and common sense.
  • Ambiguity and uncertainty: Avoid using ambiguous language, idioms, or sarcasm, which can be misinterpreted by the AI.
  • Unrealistic expectations: Understand the limitations and capabilities of the AI model.
  • Not reviewing or refining prompts: Take the time to review and refine your prompts.

We have a grand slam home run on lack of specificity. Meta’s AI also agreed on many of the other themes.

How to be successful when writing prompts

Knowing what the most common mistakes are can guide you when writing your prompts. Just do the opposite. That leaves us with a list of eight powerful guidelines for writing your prompts:

  • Use specific, clear, and complete prompts.
  • Remember that the AI is just a program, not a magic oracle.
  • Iterate and refine your queries by asking better and better questions.
  • Keep the prompt on topic.
  • Specify details that give context to your queries.
  • Make sure that any buzzwords or jargon are defined, as well as any words and concepts the AI may need to know that are specific to your query.
  • Start with a fresh session to make sure you avoid confusing the AI with earlier work.
  • Know when to try a different tool.
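
Several of these guidelines can be folded into a simple prompt template. The sketch below is my own illustration of that idea; the `build_prompt` function and its field names are hypothetical, not part of any prompting framework:

```python
def build_prompt(task, context, output_format, definitions=None):
    """Assemble a specific, complete prompt. Hypothetical helper for illustration."""
    parts = [
        f"Task: {task}",                 # be specific, clear, and complete
        f"Context: {context}",           # details that ground the query
        f"Respond as: {output_format}",  # say how you want the answer formatted
    ]
    if definitions:
        # Define jargon or query-specific terms the AI may need
        parts.append("Definitions: " + "; ".join(
            f"{term} = {meaning}" for term, meaning in definitions.items()))
    return "\n".join(parts)


prompt = build_prompt(
    task="Compare three budget laptops for video editing",
    context="Budget under $800; editing 1080p footage in DaVinci Resolve",
    output_format="a table with columns for model, price, CPU, and verdict",
    definitions={"proxy editing": "editing with lower-resolution stand-in files"},
)
```

Even without a helper function, the habit is the same: state the task, supply the context, name the output format, and define any terms the AI might not know.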

And there you go. This should get you a good distance along in creating great prompts that give you excellent results.

What do you think? Do you have any additional best practices you recommend? Let us know in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.

