4 Common Errors New Prompters Make
With the wrong prompt, using AI to learn languages can feel impossible
I’ve now been learning languages mainly with ChatGPT (mostly GPT-4) for over 4 months. It was also part of my toolbox for another 8-9 months before that.
During this time, I’ve made tons of mistakes in my prompts and learned a lot about what makes a good prompt and helps get the results I want.
I’ve learned the most from writing each piece for The Language of AI. That’s because every piece started with an idea I had while using the paid version, GPT-4, even though I want my advice to be usable for readers on the free version, GPT-3.5.
GPT-4’s capacity for inferring and understanding directives is at such a different level from GPT-3.5’s that I’ve often struggled to find ways to make the same advice work for both.
And this brings me to the very first mistake new prompters make.
What’s the goal already?
Most new prompters ask very vague questions and expect a precise answer to come back to them. That’s not how this works.
Large Language Models (LLMs) like ChatGPT work from what’s written down. The more detail you give, the more likely you are to get the answer you want.
In short, you shouldn’t be vague.
See the difference between the two examples below? On the left, I just asked it to “explain”, while on the right I gave it an exact number of sentences, asked for bold text, and requested one specific example of use.
You have to be clear to get a clear answer.
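To give you an idea of the gap, here’s a rough sketch of the two kinds of prompts (the wording and the topic are made up for illustration, not my exact screenshots):

Vague: “Explain the German dative case.”

Precise: “Explain the German dative case in 3 sentences. Put the dative articles in bold and give one example sentence using the preposition ‘mit’.”

The first leaves every decision to ChatGPT; the second tells it how long to be, how to format the answer, and exactly what to illustrate.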
Don’t drown the LLM
Then there are the ones who do the opposite.
Once you realize you have to be precise for ChatGPT to properly follow the directives you give it, it’s easy to go overboard and submerge it under a sea of requests.
ChatGPT tends to follow precise directives well. You tell it in detail what you want and you get it. Up to a certain point.
There comes a point when the information and the requests blur together and ChatGPT gets lost.
Taking the same example above, I added other requests that could be useful to a learner of German: having it put the gender and case of each noun in a table.
Both screenshots are generations from the same prompt. They each gave me something completely wrong:
On the left: Not only did I not get my table, but ChatGPT started telling me about the case of pronouns, something I hadn’t requested.
On the right: I got an explanation of the conjugation that I hadn’t requested. I got my table but it’s not the one I requested and the vocabulary column is empty.
ChatGPT is like a trained dog. It can do what you request of it, but it will struggle and mess everything up if you ask too much of it in one go.
Don’t overload it. Ask for one thing at a time.
And tweak the response you get as you go with an iterative process.
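To make that iterative process concrete, here’s one way the overloaded German request could be split into steps (a sketch, not my exact prompts):

First message: “Explain this German sentence in 3 sentences, with the key words in bold: [sentence].”

Follow-up message: “Now list every noun from that sentence in a table with three columns: noun, gender, case.”

Each message asks for one thing, and the follow-up builds on the answer to the first.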
Assumptions are risky
I recently stopped using the German Assimil-GPT I had created for myself a while back. It worked well but, as time passed, I started feeling that the lessons weren’t getting harder as they should, or that they would become instantly impossible if I told it to make the dialogue more difficult.
While I knew that GPT-4 had an 8,000-token context window¹ (about 4,000-6,000 words) and could not hold our entire conversations in its memory, I reckoned that, since I had asked it to increase the level day after day, its knowledge was based on the day before, and therefore the day before that, and so on.
That wasn’t the case.
I would get the same explanations for things I had seen weeks before, which made me feel stuck in place.
This is a common mistake most people make.
ChatGPT and other LLMs don’t retain the information we give them beyond their context window, so each prompt should contain all the context the AI needs to respond.
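In practice, that means restating what matters instead of assuming the model remembers it. A prompt like “make today’s dialogue a bit harder than yesterday’s” relies on memory it doesn’t have; something along these lines (hypothetical wording) works better: “Write a short German dialogue at roughly A2 level. Yesterday’s dialogue used the present tense and this vocabulary: [paste it]. Make today’s slightly harder by adding the perfect tense.” Everything the model needs is in the message itself.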
Forgetting key aspects of prompting
A great prompt usually includes all of the elements below, which I shared in a previous post (a sample prompt follows the list):
A task
A context
An exemplar (basically examples)
A persona
A format
A tone
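To see how these fit together, here’s a sketch of a prompt that touches all six elements (the details are invented for illustration):

“Act as a patient German teacher (persona). I’m a B1 learner preparing for a speaking exam (context). Correct the text below (task). Correct it the way a teacher would, e.g. ‘Ich habe gegangen → Ich bin gegangen, because gehen takes sein’ (exemplar). Give the corrected text first, then a short list of my three most important mistakes (format). Be encouraging but direct (tone).”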
The best response you can hope for is only as good as the prompt you give it.
If your prompt doesn’t include the elements above, there’s always a risk of not getting the response you want, with the information you want.
Now, instead of having to rewrite all this information in each prompt, you can also write it in the Custom Instructions menu (in the bottom left). This reduces the length of your prompts and avoids repetition.
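For instance, your Custom Instructions could say something like (hypothetical wording): “I’m a B1 learner of German. Always keep explanations short, give examples in German with an English translation, and put grammar terms in bold.” ChatGPT then applies this to every new conversation, so your individual prompts can stay short.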
Of course, as you learn to use ChatGPT, you will become used to prompting it and will learn when you can skip some of those details.
For example, I never gave a persona or a tone in the prompts above because they weren’t important for what I was asking.
Final tips
Like any other tool, LLMs such as ChatGPT, and AI in general, take some practice and a lot of tweaking before you find the way that works best for you.
You will make errors, like I did—and still often do—along the way.
And you will get better as you experiment and try to find new ways to get the answer you need.
It’ll be frustrating. I can assure you that. But it’ll also be hella rewarding every time you succeed.
I can also promise that.
Have you made other errors you think new prompters could avoid? Let me know by responding to this email or commenting.
Cheers for reading,
Mathias, an average polyglot and future fluent speaker of AI 🤓
¹ It’s now gone up to 32,000 tokens, but accuracy for long texts seems to be debatable.