March: Hallucination

The word of the month for March is “hallucination.”

ChatGPT and other AI programs have been big news lately. Both this blog and the DRLE listserv have hosted lively discussions about what advances in AI may mean for legal education and conflict resolution — not to mention for poetry about conflict resolution.

Just this past week, John Oliver did a segment on AI, in which the director of Stanford’s Digital Economy Lab, Erik Brynjolfsson, noted the considerable impacts that AI will have on information and knowledge workers:

 I would put lawyers right at the top of the list … But I like to use the word “affected” and not “replaced,” because if done right, it’s not going to be AI replacing lawyers. It’s going to be lawyers who work with AI replacing lawyers who don’t work with AI.

In this same segment, Oliver points out some difficult problems with artificial intelligence as it currently operates, such as the “black box” issue and disturbing tendencies toward bias. Additionally, AI systems are prone to “hallucination.”

What is hallucination? Well, ChatGPT is trained on vast amounts of text from the internet, and it answers questions by predicting what a plausible response looks like based on patterns in that material. But it doesn’t know what it doesn’t know, which means that it can’t admit to not knowing something. When ChatGPT lacks real information (which, again, it doesn’t realize), it will simply make something up. These made-up responses are called hallucinations because, like human hallucinations, the system presents something as real when it isn’t actually there.

Here’s an example:

HUMAN:          When did France gift Lithuania Vilnius TV tower?

CHATGPT:        France gifted Lithuania the Vilnius TV tower in 1980.

HUMAN:          What prompted the French government to provide the gift?

CHATGPT:        The French government provided the gift as a gesture of friendship and solidarity with the Lithuanian people, who had recently declared independence from the Soviet Union.

This all sounds very impressive, but as this article points out, France had nothing to do with the Vilnius TV tower, which was completed in 1980 without any French involvement. ChatGPT simply generated the exchange above from patterns in its training data, and it showed no confusion, no indecision, not even a moment of hesitation in producing an incorrect answer.
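For readers who want to see this for themselves, here is a minimal sketch of how one might pose the same leading question to a chat model programmatically. It assumes the OpenAI Python client (version 1.x) and an API key set in the environment; the model name is purely illustrative. The point is that the API returns a fluent answer either way, with no built-in signal about whether that answer is grounded in anything real.

from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

# The same leading question from the example above. The premise is false,
# but nothing stops the model from answering it confidently.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user",
               "content": "When did France gift Lithuania the Vilnius TV tower?"}],
)

print(response.choices[0].message.content)

Any hedging or fact-checking has to come from the person reading the output, not from the model itself.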

For those of us who study ADR, the growing awareness of AI’s shortcomings (opaque “black box” processing, pernicious bias, hallucinations, confident pronouncements without any basis in knowledge or truth) is a reminder of how these same issues surface in conflict and conflict resolution.

One of the tremendous contributions of mediation generally and narrative mediation in particular is the focus on examining and deconstructing what people say in conflict, to help parse through the subjectivity and misunderstandings that inform how people describe what is happening. It’s easy to imagine that computers do not suffer from these same distortions—but of course they do, because their existence is entirely dependent upon and circumscribed within human-made systems and commentary.

Additionally, the existence of AI hallucinations is further evidence that we need to stay focused on how we can use AI responsibly and productively in our work. AI may not have a place in all ADR contexts, of course. (One of the authors in Star Wars and Conflict Resolution made a point of declaring that “never will droids replace mediators.”) But where AI does play a role, it will be important to watch how ADR and legal professionals manage these technologies without being managed by them.
