You’re hallucinating if you think AI won’t hallucinate.
Reports of fake citations, fabricated quotations, manipulated images, and AI-generated misinformation now appear regularly. Lawyers using generative AI have submitted fictitious cases in court filings and have been sanctioned.
These developments understandably alarm legal and dispute resolution professionals. They should.
My article, The Surprising Value of AI Hallucinations, argues that the discussion about AI hallucinations may be missing something important: hallucinations may produce unexpected benefits.
To be clear, I’m not suggesting that hallucinations are desirable. They create serious risks, and those risks make careful verification skills increasingly important.
Some law students already receive intensive training in these skills through law review cite-checking, where students verify whether authorities actually support the propositions asserted in scholarly articles. In the age of AI hallucinations, those verification skills are important for all law graduates, not just law review editors. Lawyers increasingly may need to verify not only their own citations and assertions, but also those generated by counterpart attorneys, colleagues, experts, and AI systems.
Hallucinations also could encourage professionals to exercise better habits of judgment that they should use anyway. Verification duties did not begin with AI. Lawyers, mediators, negotiators, arbitrators, academics, journalists, and experts always have operated under conditions of uncertainty. Professionals long have had responsibilities to evaluate sources, assess reliability, analyze ambiguity, and question assertions that merely sound persuasive.
AI hallucinations make verification responsibilities more obvious.
Indeed, hallucinations may improve professional thinking in some situations. When people carefully verify AI-generated content, they may discover ambiguities, conflicting authorities, omitted assumptions, overlooked issues, or productive new questions worth exploring. Even inaccurate outputs sometimes stimulate creativity and deeper analysis.
In short, the article suggests that an important lesson of hallucinations concerns professional judgment, not just AI.