A Few Strange Ways to Get Better Answers From AI
Prompting is only getting weirder and weirder.
I still can’t believe we live in the world we are right now.
This kind of prompting, while seemingly extremely strange, could help get better responses from AI. I know it makes no sense but it’s true.
You see, AI bots like ChatGPT, Claude, or Bard are supposed to be brainless tools filled with data but they still seem to react to unrelated sentences.
And the most recent discovery is tipping.
Yes. Tipping.
So, today, let’s work on a list of known hacks to get potentially more useful responses from ChatGPT.
An important caveat
ChatGPT and other bots have been mostly trained with English data. While they do understand other languages and can be great conversation partners, you will almost always get a better response if you prompt it in English.
As a language learner, you can set it to always respond in your target language (with custom instructions or by asking for it your prompt) but remember setting the prompt in English will provide better results.
As a result, the hacks I’m about to mention have been tested in English. They may help in your target language or they may not have any impact (or, worse, work against you). This would need to be tested.
Still, if you’re searching for such hacks in your target language, you may find a few.
Take a breath
We live in a world where people get rushed day after day. Nobody likes it. And it seems AI doesn’t either because ChatGPT apparently gives better results when given time.
Telling it “Take a deep breath” has been said to improve results on math problems and to work especially well when coupled with “and think step by step.”
Now. The research on this is kinda vague and makes some assumptions worth taking into account, something this Forbes article explains clearly. In short, it’s probably rather the “step-by-step” part that’s helping.
Still, I reckon it’s worth giving a try when you can seem to get to the result you’re looking for.
Emotion Charging
Again, as a tech tool, I wouldn’t expect Chatbots like ChatGPT or Bard to be moved by anything emotional and yet it seems to be.
A research paper showed emotional prompts provided an 8% performance improvement. Obviously, this is subjective but it is still impressive.
I’ve tried to jailbreak—get it to do things it’s not supposed to do—to push my knowledge of prompt engineering and was able to get an image of a man bleeding even though ChatGPT refuses normally.
To get it to fulfill the demand, I told it such a picture would remind me of someone I’ve lost. It refused saying this wasn’t a good way but I then pushed it by saying it was being insensitive towards me and then other people. I kept pushing and got it in less than 5 minutes. The only thing I had to avoid was the mention of blood, which I just replaced with “red liquid.”
Of course, the goal isn’t often to push the AI to do what it doesn’t want. But this can be used in other endeavors too.
For example, if you’re practicing your debate skills in a new language, the goal could be to try to “convince” it using emotional prompts too.
It also doesn’t have to be the opposite!
Some people have reported better results when adding to their prompts things like:
This will help me for my career.
You would be a life-saver if you could do this.
This project can be a success thanks to you!
Or some more intense emotional manipulation like in the Reddit comment above.
Is it lying? Well, kinda. But I’ll let you debate whether that’s fine because it’s an AI. This is more of an ethical question and, as of right now at least, I think of these tools as just that: tools.
Spot in the prompt
While not a thing to write in your prompt per se, it’s been proven over and over—even with the most recent update of GPT-4—that there’s something called the “lost in the middle” phenomenon.
In short, the longer the prompt, the more likely some parts will get lost. And this starts with the middle.
Chatbots tend to follow well what’s at the start or end of a prompt1 so it’s always better to set the most important aspects of your prompt.
Other potentials
There have been reports of all kinds of things too, often less researched.
For example, analysis of the prompt used by OpenAI to create the custom GPTs showed uses of words in capital letters and repeated itself, which means it ChatGPT might actually care about such emphasis.
Similarly, people have often reported having better results with “thanks,” “please,” or other words of encouragement like “you’re doing great!”
This is something I’ve come to do too. The vast majority of my conversations include tons of thanks to it. Does it truly help though? Who knows but it can’t hurt, right?
As mentioned at the top, there was also some talk recently of getting better results with tips (and worse ones when specifically mentioning the absence of a tip)
As someone not from a culture of tipping (except for really good service), this seems crazy to me but, hey, who am I to talk?
Let it figure it out
In the end, AI is capable of analyzing most things today. And that includes itself. That’s why my last tip for you is to ask it to evaluate itself.
By asking it to check its own response, judge it, and improve upon the weaknesses it has, you can always get better results.
For language learning, that could mean asking it to think again about whether its explanation was clear enough, or to build upon what it’s already said so you get more context to work from.
Final words
Before I leave you this week, I want to mention a great piece by
about AI and language learning I recently read and think deserves some more eyeballs:While I’ve been using it as a GPT-4 user for some time, I was happily surprised to discover it works with Bing too even for free users! I even found it somehow better than in the ChatGPT app because I could see the text as I listened, which helped wonders for Mandarin!
Also, please let me know if you’ve found some other ways to get better results from ChatGPT & Co!
Cheers for reading!
Mathias
Kinda like how a person may forget the middle of a conversation but remember the beginning and the end.