AI, ADR, and Anxiety

This post started as a response to Jen Reynolds’s comment about my Avatar Mediation post.  It has grown into this new post about AI generally, the growing anxiety about it and the state of the world, and how we can manage that anxiety.

AI Risks . . . and Potential Benefits

Jen wrote, “I hope that we don’t get so hung up on the anthropomorphic romanticizing of this technology that we forget the corporate overlords and others who benefit from extracting/mining data.”

I agree that there are serious and growing risks from the control and abuse of the gobs of data on the internet.  Business interests in market-oriented economies and governments in countries like Russia and China have tremendous power over the internet and AI.  Although there is some regulation of AI, I expect that it will develop much faster than governments will be able to regulate new threats.  That’s already the case, and I expect it will get worse.

I think it’s also important to focus on the anthropomorphizing of AI, which I imagine will become seductive and eventually invisible, taken for granted.  We already can see pieces of this, which can readily be assembled into increasingly plausible avatars.  When we go onto websites, we often encounter bots with drawings of humans saying, “Hi, I’m [fill in the name].  How can I help you?”  Digital assistants like “Siri” and “Alexa” have become normal parts of many people’s lives.  Videos, including deepfakes, replicate humans or create characters that people relate to.  Social media has apps for that too.  Characters don’t need to be realistic likenesses of humans, as advertisements frequently use cartoons, which presumably are very effective in motivating sales.  AI is pretty crude now but will become increasingly sophisticated and integrated.

After a spooky “conversation” with a chatbot, tech columnist Kevin Roose worried that “the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways.”

So I assume that in some ways we will inhabit a brave new world where we should be concerned about both control of AI and influence by AI.

Of course, that’s the part of the glass that’s empty.  There’s a part that’s full too.  Historically, technology has produced amazing improvements in people’s lives, and AI probably will too.

Washington Post tech analyst Will Oremus described both parts of the glass in his article, “Lifesaver or job killer? Why AI tools like ChatGPT are so polarizing.”

For every success story in tech’s latest AI boom, there’s a nightmare scenario.

If you listen to its boosters, artificial intelligence is poised to revolutionize nearly every facet of life for the better.  A tide of new, cutting-edge tools is already demolishing language barriers, automating tedious tasks, detecting cancer and comforting the lonely.

A growing chorus of doomsayers, meanwhile, agrees AI is poised to revolutionize life – but for the worse.  It is absorbing and reflecting society’s worst biases, threatening the livelihoods of artists and white-collar workers, and perpetuating scams and disinformation, they say.

The latest wave of AI has the tech industry and its critics in a frenzy.  So-called generative AI tools such as ChatGPT, Replika and Stable Diffusion, which use specially trained software to create humanlike text, images, voices and videos, seem to be rapidly blurring the lines between human and machine, truth and fiction.

As sectors ranging from education to health care to insurance to marketing consider how AI might reshape their businesses, a crescendo of hype has given rise to wild hopes and desperate fears.  Fueling both is the sense that machines are getting too smart, too fast – and could someday slip beyond our control. “What nukes are to the physical world,” tech ethicist Tristan Harris recently proclaimed, “AI is to everything else.”

New York Times columnist Ezra Klein wrote that AI “changes everything.”  He said, “There is no more profound human bias than the expectation that tomorrow will be like today.  It is a powerful heuristic tool because it is almost always correct.  Tomorrow probably will be like today.  Next year probably will be like this year.  But cast your gaze 10 or 20 years out.  Typically, that has been possible in human history.  I don’t think it is now.”

Obviously, this uncertainty can create profound anxiety, especially considering the powerful effects of AI technology.

“The ADR Glass”

My speculations about possible avatar mediation questioned some assumptions and focused mostly on the empty part of the glass.  We would like to think that AI could not reproduce human skills such as communication and empathy that are necessary for good mediation.  Of course, AI can’t truly reproduce real human cognitions and emotions – but the simulations are likely to be increasingly good approximations.  And avatar mediation could reproduce the worst aspects of human ADR processes.

Many of us have been disappointed with aspects of human ADR that don’t fulfill our idealistic aspirations.  For example, some mediators are lousy listeners and press parties to settle as the mediators suggest.  Some businesses force employees and consumers to sign adhesion contracts with one-sided arbitration clauses.  Human biases inevitably color the processes, sometimes quite adversely.

People often focus on the empty part of the glass because of loss aversion bias.  We generally pay more attention to potential problems than to potential benefits.

So it’s important to remember that every day, lots of people benefit from mediation, arbitration, and other dispute resolution processes, albeit imperfectly.  As the saying goes, the perfect is the enemy of the good.  ADR processes are imperfect but people often prefer them to the alternatives.  And sometimes, ADR practitioners do a damn good job.  If avatar mediation is developed, it too might be quite good at times.

So the human mediation glass is partly empty and partly full.  Presumably, the same will be true of machine mediation if it is developed.

Growing Anxiety

Just spitballin’ here, but anxiety about AI may be feeding into a more general anxiety in the US and probably elsewhere.

Political polarization has markedly increased, as manifested in contemporary culture wars.  Wikipedia defines them as “wedge issues in the United States includ[ing] abortion, homosexuality, transgender rights, pornography, multiculturalism, racism and other cultural conflicts based on values, morality, and lifestyle.”  People on both sides are anxious because the other side threatens their deeply held values.  “Civilians” in the culture wars are turned off by both sides.

The COVID pandemic radically upset normal life for about two years, and we are still feeling lingering effects.  The Russian armed aggression against Ukraine disrupted the confidence, built up since World War II, that major powers would not launch unprovoked wars.  The economy isn’t behaving “normally,” i.e., consistent with economic theory and experience in recent decades.  Etc. etc.  So there’s a lot of uncertainty about the future, which feeds our anxiety.

Dealing with Anxiety

New York Times columnist David Brooks wrote a column about what he called “The Self-Destructive Effects of Progressive Sadness.”  He wrote that a “well-established finding of social science research is that conservatives report being happier than liberals.”  Even if this is true as a generalization, committed political partisans on all sides may display the three maladaptive patterns he identified: a “catastrophizing mentality,” “extreme sensitivity to harm,” and a “culture of denunciation.”  Indeed, many people in all kinds of intense conflicts often display these patterns.

He suggested addressing these symptoms by focusing on what people actually can control.  “People who provide therapy to depressive people try to break the cycle of catastrophic thinking so they can more calmly locate and deal with the problems they actually have control over. … Just about everything researchers understand about resilience and mental well-being suggests that people who feel like they are the chief architects of their own life are ‘vastly better off than people whose default position is victimization, hurt and a sense that life simply happens to them.’”

Mr. Brooks believes that the “woke” era is winding down.  I think that conflicts about “wokeness” are symptoms of our culture wars, which I expect will continue to ramp up.  That is why I think his analysis and suggestions for dealing with sadness and anxiety are especially valuable.

Returning to AI and ADR, it’s important to recognize our own reactions and fears, have as accurate and balanced an understanding of what’s happening as possible, acknowledge the uncertainties, and focus on what we can control.

5 thoughts on “AI, ADR, and Anxiety”

  1. One problem with the debate about whether AI is good or evil is that it focuses on some kind of end state in which AI threatens to replace the human mediator.  We might benefit from taking a step back and recognizing that the first thing this technology is likely to do is become a helpful tool in the hands of the mediator.  For example, I gave ChatGPT a few notes from a mock landlord-tenant mediation and it drafted a respectable settlement agreement in under a minute.  Ask Bing’s chatbot for suggestions on how to break an impasse in negotiations and you get a half-dozen tactics, with source references, that you likely don’t recall from your 40-hour training.  It can also read my shorthand notes and organize them into which are issues and which are proposals.  (A minimal sketch of that kind of note-organizing prompt appears after these comments.)  It seems likely that a chatbot soon could simply take organized notes in real time during a session.

    Parties could be invited to use chatbots to learn more about the process of mediation and gain clarity on legal rights and remedies to help set expectations before mediation efforts begin.  A chatbot can suggest reasonable questions a mediator might ask to gather the facts needed to help resolve a particular type of conflict.  Properly prompted with a general description of the case, chatbots can even suggest proposals parties might consider, drawing on reports of many more resolved disputes in comparable cases than any human could recall.  But AI-suggested questions or proposals are a mixed bag: some are obvious, some are irrelevant, and a few are insightful.  It still takes a human to make that assessment.

    In short, we should be thinking creatively about how to employ this new technology in small ways to improve the services ultimately delivered by a human mediator.  The day may come when AI can independently mediate between human parties, but like humans, AI will first need to be trained, refined, and proven reliable in myriad smaller functions – even as a co-mediation partner to the human.  It may be much more productive to work toward these more modest objectives than to engage in the good-versus-evil debate.

  2. This post mentions how human bias colors the mediation process, suggesting that AI-based mediation would likely incorporate some of these problems as well.  AI can also help people correct their biases.  Mediators often do not follow our theoretical models of mediation, to such an extent that the Real Mediation Systems Project is gathering data documenting how practitioners actually operate in their own ad hoc ways.  Imagine putting each of those real mediation systems into an AI, asking it to simulate mediations, and then watching what happens.  Seeing an AI act out the human biases can help illustrate which real mediation systems are more likely to have them.  This kind of thing has already been happening, as major companies have shut down their AIs to correct biases the machines inadvertently incorporated from the underlying human content they mimicked.

    I’ve been asking ADR practitioners and organizations to change biased policies that are illegal, such as ones screening out parties based on their mental illness.  It’s hard work because many people become defensive about how they practice and avoid communicating with me or dealing with the problem.  But folks may take it less personally if we’re criticizing a computer instead of them.  And it may be harder to dismiss tangible examples of a computer’s written bigotry that can be traced directly back to the writings that shaped the machine’s biased thought process.  So human mediators might learn from watching machine mediators repeat their mistakes.  Imagine if we could easily take every bit of guidance that’s been argued about on this listserv, put it into a machine, and try it out for everyone to watch and compare.  Seeing a machine implement your Real Mediation System might become the most powerful way to reflect on your practices, notice your biases, and improve.
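
Following up on the note-organizing idea in the first comment above, here is a minimal sketch of how a mediator might prompt a large language model to sort shorthand session notes into issues and proposals.  It is an illustration under stated assumptions, not a tested tool: it assumes the OpenAI Python SDK is installed and an API key is set in the environment, and the model name, prompt wording, and sample notes are hypothetical choices.

```python
# Minimal sketch, not a tested tool: send a mediator's shorthand notes to a
# chat-completion API and ask it to sort them into issues and proposals.
# Assumes the OpenAI Python SDK (openai>=1.0) is installed and the
# OPENAI_API_KEY environment variable is set; the model name, prompt
# wording, and sample notes below are illustrative choices only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You assist a human mediator. Given shorthand session notes, return two "
    "labeled lists: ISSUES (points in dispute) and PROPOSALS (suggested "
    "settlement terms). Do not add legal advice."
)


def organize_notes(shorthand_notes: str) -> str:
    """Ask the model to sort raw mediation notes into issues and proposals."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": shorthand_notes},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical shorthand notes from a mock landlord-tenant session.
    notes = (
        "tenant: mold in bathroom, wants 2 mo. rent credit\n"
        "landlord: rent late 3x, offers repairs if lease renewed\n"
        "both open to a payment plan?"
    )
    print(organize_notes(notes))
```

As the commenter notes, output like this is a mixed bag, so a human mediator still has to review and assess whatever the model produces.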
