Avatar Mediation

I’m sure no techie, but probably like you, I have been intrigued by stories about the ChatGPT program.  One illustration was in my post, A Mediator and a Bot Walk into a Bar …

Of course, artificial intelligence almost certainly will become much more sophisticated in the foreseeable future.  Major tech firms have been working on their own AI systems, and the ChatGPT sensation has spurred them to accelerate their efforts.

People have speculated about how AI will affect the tasks that people routinely do at work – and which jobs will become obsolete because bots will perform the tasks much more efficiently and accurately.

Some have suggested that bots won’t be able to adequately duplicate some human skills and behaviors like communication and empathy.  So, phew, dispute resolution jobs will be safe.

But will they?

I speculated about how well bots will be able to simulate empathy and whether people generally will notice or care that they are interacting with machines, not humans.  I don’t know – it’s really impossible to know how humans and machines will co-evolve in the coming years.

One can easily imagine AI systems performing dispute resolution functions in the not-too-distant future.

Imagine a system providing mediation with made-to-order avatars mediating by video.  There could be all sorts of settings, such as the “mediator’s” race, skin color, national origin, gender identity, age, attire, hair color, tattoos, accent, etc.  Mediators’ theoretical orientation could be set to facilitative, evaluative, transformative, or any of the zillion other theories.  Mediators’ goals could be set in priority order.  The amount of time available, lists of mediation techniques (such as use of joint sessions and caucuses), and other characteristics of the process also could be in the “settings” tab.  The “mediator” could caucus simultaneously with all sides – think of all the time that could be saved!
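
To make the idea concrete, here is a minimal sketch of what such a “settings” tab might look like as a configuration object, written in Python purely for illustration.  Every name here (AvatarMediatorSettings, Orientation, the particular fields and defaults) is hypothetical and invented for this post, not taken from any actual ODR product or API.

```python
# Hypothetical sketch of an avatar mediator's "settings tab".
# All names and defaults are invented for illustration only.
from dataclasses import dataclass, field
from enum import Enum


class Orientation(Enum):
    FACILITATIVE = "facilitative"
    EVALUATIVE = "evaluative"
    TRANSFORMATIVE = "transformative"


@dataclass
class AvatarMediatorSettings:
    # Appearance of the on-screen avatar
    perceived_age: int = 55
    attire: str = "business casual"
    accent: str = "neutral"

    # Theoretical orientation, with goals listed in priority order
    orientation: Orientation = Orientation.FACILITATIVE
    goals: list = field(default_factory=lambda: [
        "reach a durable settlement",
        "preserve the parties' relationship",
    ])

    # Process parameters
    session_minutes: int = 120
    techniques: list = field(default_factory=lambda: ["joint session", "caucus"])
    simultaneous_caucuses: bool = True  # caucus with all sides at once


# Example: "ordering" an evaluative mediator for a short session
settings = AvatarMediatorSettings(
    orientation=Orientation.EVALUATIVE,
    session_minutes=60,
)
print(settings)
```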

Creating bespoke mediators would be expensive.  So ODR companies might offer a variety of cheaper, off-the-shelf model mediators with specified combinations of features at different prices.  Maybe you could “hire” (?!) / buy / rent one on Amazon.

Would these processes generally be any better or worse than those currently conducted by human mediators?

One of our esteemed colleagues recently emailed me that s/he thinks that mediation “often really sucks.”  Certainly, some mediators act robotically, repeating routines in which they were trained.

Some mediators, like the ones profiled in this article, are especially skilled due to their experience, self-awareness, judgment, intentions, etc.  It may be a long time, if ever, before machines can surpass the performance of such practitioners.  Even skilled humans are fallible, but so are machines.

It’s easy to imagine, however, that AI systems could do much better than many mediators who are not particularly self-aware or experienced.  And at a much lower cost.

Will human mediators compete in the market with avatar mediators?  Will humans and avatars co-mediate?

4 thoughts on “Avatar Mediation”

  1. Avatar mediators are not so far-fetched as they may sound.

    The NYT published a column entitled, Personalized A.I. Agents Are Here. It says:

“Very soon, tech companies tell us, A.I. “agents” will be able to send emails and schedule meetings for us, book restaurant reservations and plane tickets, and handle complex tasks like “negotiate a raise with my boss” or “buy Christmas presents for all my family members.” … OpenAI has made it very easy to build a custom GPT, even if you don’t know a line of code. Just answer a few simple questions about your bot — its name, its purpose, what tone it should use when responding to users — and the bot builds itself in just a few seconds.”

There’s a bot called “The negotiator,” which says, “I’ll help you advocate for yourself and get better outcomes. Become a great negotiator.”

    What’s not to like?

2. AI builds what it “knows” from existing data, huge amounts of it. In the absence of new and robust regulation, anyone who works with AI in mediation or any other sensitive environment is essentially just contributing more information to the system – which could be used to expand the dataset for the AI and/or used in all kinds of other ways. I hope that we don’t get so hung up on the anthropomorphic romanticizing of this technology that we forget the corporate overlords and others who benefit from extracting/mining data.

  3. This post mentioned that mediators who have a lot of experience could be hard to replicate using AI. It’s also important to note that some mediator experiences include unconscious bias endemic in the field, and that there is ample evidence that shows that AI has been replicating racial, gender, and other biases. Having AI that is derived from empirical mediator practices may be a mistake and a missed opportunity, then, because it could inadvertently perpetuate biased empathy and communication gaps that exist in the real world. It might be better for us to design something fresh, based on our best principles, untainted by the data of how even the most experienced mediators have sometimes been impacted by biases they don’t know they have.

1. Absolutely. A question we should also ask is whether or not these AI mediators could start to generate biases on their own after interacting with enough people to form patterns of information.

This is such interesting food for thought.