AI & Frustration: How to Deal With it And Get What You Want
Knowing is part of the solution
I’ve had fits of anger quite often since I started using AI. I bet you have too.
AI’s been hyped ever since ChatGPT came out, and it makes sense: anybody who’s tried it even once has been shocked by what it can accomplish. AI’s incredible. There’s no denying it.
But if you’ve tried to use it a few times, you’ll have faced some… let’s say, difficulties.
From ChatGPT giving results nowhere close to what you asked for, to it repeating the same thing over and over again, to—unfortunately—outright false information.
The more you use it, the more this sticks out, often ending the conversation you’re having with it outright, as it did for my brother.
You can’t get rid of what I’ll call “AI frustration.”
You have to learn to deal with it.
The biggest cause of frustration
We’ve been told over and over AI bots like ChatGPT, Claude, or Gemini can understand “natural speech.” We can just “say what we want” and they’ll do it.
This works if you’re asking a straightforward question like “What can I cook with Gochujang?”1, but when you want an explanation of a complicated topic or need some nuance, these tools will need more.
The easiest way to make sure each response follows your directives as closely as possible is to share the six points below, as I’ve mentioned before:
A task
A context
An exemplar (basically examples)
A persona
A format
A tone
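To make these six points concrete, here’s a sketch of what a full prompt covering all of them might look like (the topic and wording are my own example, not a template from any particular tool):

```text
Task: Explain how to politely decline a meeting invitation by email.
Context: I’m a junior developer writing to a senior manager I’ve never met.
Exemplar: Something in the spirit of “Thank you for the invitation, unfortunately…”
Persona: Act as an experienced workplace-communication coach.
Format: A short email, no more than five sentences.
Tone: Polite, warm, and professional.
```

You don’t need the labels themselves; what matters is that each element is actually present somewhere in the prompt.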
Even with this list—which, let’s be honest, can be a pain to fill out each time—ChatGPT and its peers can still miss the mark.
So, what’s the solution? Well, there’s no one-size-fits-all answer.
Depending on what you want, the way you engineer your prompt will shape the results you get. Even the order of information matters. It’s been shown again and again that what appears at the beginning and the end of a prompt influences the output most, while what sits in the middle often gets overlooked.
This is why you must know precisely what you’re looking for and organize it accordingly. What directive matters most? The style? The context? What could you accept a compromise on? Weigh these and prompt accordingly.
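As an illustration (the wording here is mine), you can exploit that beginning-and-end effect by bookending your prompt with the directive you care about most, while the bulkier context sits in the middle:

```text
Summarize the article below in exactly three bullet points.

[article text goes here]

Reminder: exactly three bullet points, one short sentence each.
```

Restating the key constraint at the end costs you one line and noticeably reduces the odds of it being ignored.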
Prompt engineering is not just for engineers.
It’s for all of us, users of these AI chatbots.
Hitting a wall? Don’t bang your head on it
I know all of the above and still grow angry at answers completely off the mark. I’ll usually blame my prompts first, then shift the fault back onto the AI tool after a few more failed variations.
Hitting my fist on my desk out of frustration is a reaction I have weekly.
Does it help? Unfortunately, no.2 But it does make me notice my frustration, and it usually serves as the start of a new approach.
One solution you can try is to start a new conversation or turn to a different tool (Claude3, Gemini, or Perplexity.ai are my go-tos).
Starting a new conversation lets the tool (or its alternative) start from scratch, without the memory of the previous exchange, so you can prompt it with a new variation and change one of the factors mentioned above.
For example, when I helped my brother figure out his Excel formulas, I started three new threads in my tests to push ChatGPT from different angles. When I still ended up with similar results, I knew there truly weren’t other options.
Instead of asking for the answer to your question, you can also turn things upside down and make ChatGPT walk through the process.
For example, don’t ask “Can I use ~거든요 in a conversation about what happened since the last time someone and I met?” but rather “How can I use ~거든요…” or “In Korean conversations, in which situations would you use ~거든요?”
The second example is a good way to actually make it a real exchange with the tool, just like it’d be if you were to ask a real teacher or friend.
Remember, AI is supposed to be a support.
It’s not the holy grail that’ll solve everything. Your brain is.
What matters most
In the end, the most important thing to remember is to recognize AI isn’t perfect. The frustration we feel is only caused by the expectations we put on it.
Because we often turn to AI once we’ve already hit a wall, we’re hoping to get the solution to an already growing frustration (or at least curiosity). If AI can’t solve it, we end up thinking the chatbot is the problem. Not our prompt nor us.
That imperfection is in stark contrast with the high praise all these tools keep getting in the news and online—hell, even by me!—and it makes us think we must be stupid for not getting what we want from it while others are succeeding.
It may be true sometimes. It most likely is, even.
But it’s just as likely we haven’t experimented enough with our prompts, or that we’ve pushed them too far.
Temper your expectations, and the frustration will decrease.
Just as you’ll never feel disappointed in your own improvements once you stop comparing yourself to others with different lives and skills, you can’t feel disappointed with AI if you treat every experience as part of a learning process.
After all, you’re learning the language of AI.
Fluency will take time.
Do you often get frustrated with AI? What causes it and how do you deal with it? Let me know!
Cheers for reading!
Mathias
The answer is: anything, it’ll be tasty no matter what 😁
Ah, reminds me of the good ol’ days when you could hit a cartridge or a console on the side to make it work better.