Main takeaways
Hallucinations: AI confidently states false information
Why it happens: LLMs predict plausible text, not true statements
Temperature: The creativity-accuracy dial
Real failures: Lawyers, doctors, academics affected
Detection: Verify specific claims, check citations
Prevention: Careful prompting reduces hallucinations but doesn’t eliminate them
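The "temperature" dial above can be sketched as temperature-scaled softmax sampling, the standard way LLMs turn raw scores into next-token probabilities. This is a minimal illustration, not any particular model's code; the logit values are made up for the example.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw model scores (logits) into a probability distribution.
    Low temperature sharpens the distribution (safer, more repetitive);
    high temperature flattens it (more creative, more error-prone)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for four candidate next tokens.
logits = [4.0, 2.0, 1.0, 0.5]

cold = softmax_with_temperature(logits, temperature=0.2)  # near-greedy
hot = softmax_with_temperature(logits, temperature=2.0)   # much flatter
```

At temperature 0.2 the top token gets almost all the probability mass; at 2.0 the other candidates become much more likely, which is the accuracy-vs-creativity trade-off in miniature.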