Showing posts from May, 2026

Hallucinations in Large Language Models

A few months ago I asked a chatbot for information about a researcher in our field. It gave me a paragraph: university affiliation, published papers, research interests. I was impressed until I searched for the person and found they didn't exist. The chatbot had made up a human being, confidently and without hesitation.

This phenomenon has a name: hallucination. As someone studying AI and machine learning, I think hallucination is one of the most pressing problems our field faces today.

What is hallucination? Imagine a student who studied well but panics during an exam. Instead of leaving a question blank, they write something that sounds right but is completely made up. A chatbot works the same way: it generates text by predicting what word comes next, based on patterns it learned during training. The output sounds fluent and authoritative, but there's no fact-checker. If a chatbot hallucinates a movie recommendation, you just watch a film. If it hallucinates a medicine name, a legal clause or a st...
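The generation process described above can be sketched with a toy example. This is not a real language model, just a bigram model over a tiny made-up corpus, but it shows the key point: each next word is chosen purely from learned co-occurrence statistics, and nothing in the loop checks whether the resulting sentence is true.

```python
import random

# Hypothetical tiny "training corpus" for illustration only.
corpus = (
    "the researcher published papers on machine learning . "
    "the researcher works at a university . "
    "the chatbot generates fluent text ."
).split()

# "Training": count which words follow which.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, n=8, seed=0):
    """Generate text by repeatedly sampling a plausible next word.

    There is no truth-checking step anywhere: the model happily
    emits fluent combinations it never saw as whole facts.
    """
    rng = random.Random(seed)
    words = [start]
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every word in the output is statistically plausible given the previous one, yet the sentence as a whole may assert something the corpus never contained, which is hallucination in miniature.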