How to Deal With AI Lies When Learning
An important reminder on trust and AI hallucinations
It’s no secret I’ve fallen in love with using AI to learn languages. The opportunities each tool opens still shock me.
I mean, you can now get audio in many languages in the same voice using ElevenLabs. And that voice can even be your own!
Here’s the same text in Japanese and French. The Japanese becomes awful starting in the middle1 and the French gets an American accent at the end for no reason, but still!
This can give us an audio version of pretty much any text we’d like, letting us learn it through both reading and listening! 🤯
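If you’re curious what that looks like under the hood, here is a minimal sketch of a call to the ElevenLabs text-to-speech API as it works at the time of writing. The API key, voice ID, text, and file name are placeholders, and the multilingual model ID may differ for you.

```python
import requests

API_KEY = "your-elevenlabs-api-key"    # placeholder
VOICE_ID = "your-voice-id"             # placeholder: any voice, including a clone of your own

def text_to_speech(text: str, out_path: str) -> None:
    """Send text to the ElevenLabs TTS endpoint and save the returned MP3."""
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
    headers = {"xi-api-key": API_KEY, "Content-Type": "application/json"}
    payload = {
        "text": text,
        "model_id": "eleven_multilingual_v2",  # multilingual model: same voice across languages
    }
    response = requests.post(url, json=payload, headers=headers, timeout=60)
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)  # the response body is the raw MP3 audio

# Example usage: a German sentence read aloud with the chosen voice.
text_to_speech("Ich hätte gern eine Hin- und Rückfahrkarte nach Berlin.", "ticket_de.mp3")
```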
But it’s also dangerous when it starts lying to us.
This is what’s commonly known as “AI hallucinations.”
Limitations of AI (for now)
One of the best-known problems with AI is its tendency to confidently give answers that are completely wrong. It’s also easy to make it backpedal and apologize for whatever it just said.
This is especially true when it comes to languages. It’s also not a problem limited to GPT-3.5.
My most recent experience with this was with GPT-4 and the German word “Hin- und Rückfahrkarte” (round-trip ticket). The space after the hyphen felt wrong to me, so I asked about it. I also checked a dictionary and saw that the spelling was, indeed, standard, but by then ChatGPT had already backtracked on its answer twice.
This is about German, a language I can easily double-check online, but what happens with other, less widely spoken languages?
You end up trusting information that’s incorrect.
A recent paper (from September 21st) showed that GPT-4 (and therefore GPT-3.5 too) can’t connect information in both directions: it can recall a fact phrased one way yet fail to retrieve it when the question is reversed.
The paper’s example, which I reproduced myself, is about Tom Cruise’s mother: ask ChatGPT about her directly, by her own name, and you get nothing, but ask who Tom Cruise’s mother is and her name comes up instantly.
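If you want to poke at this yourself outside the chat window, a minimal sketch like the one below asks the fact in both directions through the OpenAI API. It assumes the openai Python package; the model name is only an example, and the prompts mirror the paper’s Tom Cruise pair.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(question: str) -> str:
    """Ask a single question and return the model's answer."""
    response = client.chat.completions.create(
        model="gpt-4",  # example model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Forward direction: celebrity -> parent (usually answered correctly).
print(ask("Who is Tom Cruise's mother?"))

# Reverse direction: parent -> celebrity (often fails, per the September 2023 paper).
print(ask("Who is Mary Lee Pfeiffer's son?"))
```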
These show clear limitations of AI as of late 2023. Limitations that’ll disappear as Large Language Models (LLMs) and technology as a whole improve.
Until then, it’s up to us to be careful.
Trust in the age of AI
The expression “fake news” has been around for quite a few years. We’ve gotten used to being careful about the news we encounter. If we think about it, this is something we should have always done because there’s always a point of view, an angle, hidden behind the news. It’s political after all.
But what happens when we’re looking up something we’re learning?
What if a topic on which there’s no reason to lie comes up? Should we take it at face value? Normally, I’d say we should, but AI hallucinations mean that any information we receive now needs to be double-checked to some extent.
We’ve entered a world where critical thinking matters most.
My dear friend has been thinking about how to deal with his English students using AI, what’s ethical, what’s not, and so on. I’ve also been listening to The Digital Learning Podcast, where they talk about this topic too.
Whatever we try, there’s no going back. Children will grow up using AI as part of their studies. And we adults will also reach a point when not using AI while learning will seem crazy.
Until we can trust it (if we ever can), we all need to learn to doubt, verify, and triple-check whatever AI tells us, and come to our own final decision:
Do we trust the information? Or don’t we?
How and what to trust?
There’s no clear-cut answer here, and the tools we use will keep changing, so there will never be a single solution.
When it comes to language learning, at least, there’s a need to stay doubtful. It’s always a good first step to ask whatever tool you’re using (whether ChatGPT or another like Bard, Perplexity, or Claude) for more explanation.
A few more verifications can help you be sure:
Evaluate if the information makes sense.
Ask for more example sentences using the pattern or word.
Create a sentence in English that would seemingly require the same word/expression/pattern and ask for its version in your target language. If the answer doesn’t use it, ask why (there’s a small sketch of this idea right after this list).
Don’t hesitate to dig deeper and even step outside AI to check Google or ask a native speaker you know (or one you don’t, on platforms like HiNative).
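To make the third point concrete, here is a minimal sketch that asks for a translation and flags it when the word you’re studying doesn’t appear, which is your cue to ask why. It again assumes the openai package; the prompt wording and the naive string check are just one way to do it.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def check_usage(english_sentence: str, expected: str, language: str) -> None:
    """Ask for a translation and flag it if the expected word/pattern is missing."""
    prompt = (
        f"Translate into {language}: {english_sentence}\n"
        "Answer with the translation only."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    translation = response.choices[0].message.content
    print(translation)
    if expected not in translation:
        # The word we expected is missing: time to ask the model to explain itself.
        print(f"'{expected}' was not used -- ask why!")

check_usage("I'd like a round-trip ticket to Berlin, please.", "Hin- und Rückfahrkarte", "German")
```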
You might think that’d be a waste of time but it’s not.
Any extra time you spend with a specific pattern/word/expression strengthens your memory of it. And if it turns out AI was hallucinating, you’ll remember the correct way because of all the time you’ll have spent figuring it out.
I’m sure I won’t forget Hin- und Rückfahrkarte in German anytime soon!
Let me leave you with one final thought.
AI is here to stay.
The sooner we learn to master it, the better off we’ll all be.
Cheers for reading!
What do you think of AI hallucinations? How do you deal with them? Let me know in the comments.
Mathias
An average polyglot
1. It stops recognizing the kanji correctly and gets a strong American accent.