There was an excellent program on methodologies and terminology for research with a real-world focus at June’s past-and-future conference co-sponsored by Pepperdine’s Straus Institute, the ABA Section of Dispute Resolution, and Texas A&M Law’s Aggie Dispute Resolution Program. It featured an all-star cast of Howard Herman, Russell Korobkin, Donna Shestowsky, and Roselle Wissler, with moderation by Nancy Welsh (one of the organizers of the conference). This post summarizes their presentations and includes their notes and PowerPoints.
The session began with a short discussion of the question “Why conduct empirical research regarding dispute resolution?” Roselle Wissler used professional baseball as an example of the use of detailed data to improve players’ performance — and suggested that courts and parties likely would be interested in similarly enhancing mediators’ and other dispute resolution neutrals’ abilities.
Howard Herman pointed out that while settlement probably could be analogized to runs batted in, we are also interested in other outcomes of dispute resolution processes, including procedural justice (how the processes work) and access to justice (who gets to use the processes). There are many mediator interventions that could be studied, and they are hard to isolate as independent variables. And the context of the research (such as the subject of the dispute) can make a big difference, so we must be careful not to overgeneralize research results.
Nancy Welsh then reminded the audience that the ABA Section of Dispute Resolution has established a Dispute Resolution Research Advisory Committee, which she chairs. The overall charge of the Committee includes “bring[ing] science to the delivery of conflict prevention and dispute resolution services” and “plac[ing] the Section at the intersection of practice knowledge and know how” in order to “ultimately assist with the development and sharing of cutting edge information that will strengthen the Section’s members, their practices, the profession as a whole, and the people it serves.” The Committee also is interested in hearing from researchers and sophisticated practitioners to move both qualitative and quantitative empirical research forward — e.g., identifying research needs, developing common definitions, and sharing research findings. This conference session and the subsequent one represented just such opportunities.
Russell on Experimental and Non-Experimental Research
Russell Korobkin then kicked off the individual presentations and provided a good primer on general social science research issues related to dispute resolution.
Dependent and independent variables. Dependent variables are the outcome measures of interest – essentially the desired goals. Independent variables are the factors that may affect the dependent variables. Studies commonly focus on settlement rate as the dependent variable, but there are many other important dependent variables to study, including satisfaction with the process, efficiency, division of the “cooperative surplus,” and whether the parties created value (“expanded the pie”).
Experimental and non-experimental studies. Experimental and non-experimental studies have complementary advantages and disadvantages. In experimental studies, researchers design an environment in which they vary only a few independent variables. This enables them to make stronger inferences about the causal effect of the independent variables, especially when subjects are randomly assigned to the experimental and control groups. The disadvantage of experiments is reduced “external validity” – the ability to generalize the results to the real world. Non-experimental studies conducted in the real world have greater external validity but cannot provide as strong inferences about causal effects of the independent variables because there are many “uncontrolled” variables that could affect the findings.
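The logic of random assignment described above can be sketched with a toy simulation (a hypothetical illustration, not from the presentation): each simulated party has an unobserved baseline outcome, and because random assignment balances those baselines across the experimental and control groups, the simple difference in group means recovers the causal effect of the intervention.

```python
import random
import statistics

random.seed(42)

# Hypothetical setup: each "party" has an unobserved baseline outcome
# (e.g., satisfaction with the process); the treatment (e.g., a mediator
# intervention) adds a fixed effect. Random assignment balances the
# baselines across groups, so the difference in group means estimates
# the causal effect even though the baselines are never observed.
TRUE_EFFECT = 0.5

baselines = [random.gauss(5.0, 1.0) for _ in range(10_000)]
treated, control = [], []
for baseline in baselines:
    if random.random() < 0.5:          # random assignment
        treated.append(baseline + TRUE_EFFECT)
    else:
        control.append(baseline)

estimate = statistics.mean(treated) - statistics.mean(control)
print(round(estimate, 2))  # close to TRUE_EFFECT
```

In a non-experimental study, by contrast, parties who receive the intervention may differ systematically in their baselines (self-selection), so the difference in means would conflate the intervention’s effect with those uncontrolled differences.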
Russell described his clever experimental study (co-authored with Michael Dorff) about how the negotiation process for hiring corporate CEOs affects the amount of CEO compensation. Traditionally, after considering several candidates, companies firmly decide who to hire and only then negotiate the amount of compensation. Russell and Michael hypothesized that companies could pay CEOs less if the companies negotiate possible compensation packages with several candidates before deciding which one to hire. They instructed students to negotiate in simulations where the salary negotiations occurred before or after the selection of the CEO, and they found that the CEOs received lower salaries when the companies negotiated the salary before selecting a candidate. Of course, one couldn’t do this study with CEO candidates in real life, so it provided insights that wouldn’t be possible without doing a laboratory experiment. On the other hand, readers may have doubts whether the dynamics in student simulations would be similar to CEO salary negotiations in real life.
Roselle on How Problems with Language Affect Meaning of Research Findings
Roselle Wissler then narrowed the focus to key points from the ABA Section of Dispute Resolution’s Mediation Research Task Force Report. She was the principal author of the report, which I discuss in this post (including a link to the report itself). The Task Force identified 47 studies from the past four decades with empirical data analyzing the effects of particular mediator actions on certain mediation outcomes.
Roselle noted that there were differences in how concepts were defined and measured in the various studies, what comparison group(s) were used, the data sources, and whether the studies considered whether factors such as the setting or dispute type might have affected the findings. She said that these differences could produce different findings regardless of the actual underlying effects of mediator interventions.
Differences in definitions make it hard to compare results of different studies. For example, various studies defined “pressing” or “directive” actions as:
- Press parties, push parties hard to change positions or expectations
- Urge parties to compromise, concede, or reach agreement
- Advocate for / agree with one side’s positions / ideas; argue one side’s case; push with bias for / against one side
- Tell parties what the settlement should be; press them toward that solution; try to make parties see things their way
- Control, dominate, direct the session
- Some also included: threaten to end mediation; use frequent caucuses; express displeasure with lack of progress; criticize one party’s behavior / approach
- Some also included aspects typically used to define other approaches, e.g.: analyze strengths / weaknesses; note costs of non-agreement; make face-saving proposals; clarify parties’ needs
She highlighted several of the Task Force’s recommendations:
- Develop common terminology, definitions, and measures for a core set of concepts
- Conduct research on the best way to study important concepts
- Develop reliable and valid measures and data sources
- Identify important contextual factors (e.g., dispute, setting, timing) that could alter the effects
Howard on Variables, Language, and Future Research
Howard Herman discussed several issues in his presentation. He endorsed the recommendations of the Mediation Research Task Force about the need for improved language. He said that the terms need to be clear, focus on specific interventions and behaviors, match the real world, and not use too high a level of generality. He criticized the evaluative–facilitative dichotomy, which he argued leads to oversimplification of the actual interventions.
He recommended that mediation researchers focus on joint sessions, convening, work done before mediation sessions, mediators’ opening statements, use of legal analysis, caucusing, mediator proposals, matching (or mismatching) demographic characteristics of mediators and participants, repeat players, unbundling, and the use of technology.
Donna on the Nuts and Bolts of Conducting Empirical Research
Donna Shestowsky described the nitty-gritty details of conducting empirical research on dispute resolution. She cautioned that it is hard work, requiring help with ideas and funding, approval from one’s school, and a research team, all with no guarantee of publication. Her comments are relevant to empirical research generally, and even more so to the kind of complex quantitative research that she has done.
She listed a number of sources of funding and noted the requirement to get research approved by your school’s institutional review board to ensure that you are following ethical requirements for human subjects research. Getting this approval can involve many steps and take considerable time, so you should plan ahead.
If you will use a survey, recognize that writing good questions is much harder than one might think, so you should never do this alone. One option is to use questions that have already been vetted, such as those from model forms (e.g., the RSI / ABA Section of Dispute Resolution Mediation Model Forms) or peer-reviewed articles. Especially if you write your own questions, get feedback on them and test them in a pilot study or focus group.
You will need a team to help you with various aspects of the research. This might include other academics, students, various professionals (e.g., statisticians), and administrative support. Funding often is needed to cover the expenses associated with such help, or to pay for research participants if you decide to collect your own data. She provided a detailed list of funding sources and methods for learning about new funding sources as they become available.
She listed ideas for promoting empirical research on dispute resolution for universities, courts, dispute resolution professionals, and academics.
As an example, she described her recent article, Inside the Mind of the Client: An Analysis of Litigants’ Decision Criteria for Choosing Procedures, which I discussed in this post and which received the AALS ADR Section’s award for the best article of 2018.
A Bit More
Before the conference, I wrote this post discussing some of the issues addressed in this program. I urged that developing common language about dispute resolution (along the same lines as Howard’s suggestions) should be a top priority. I argued that we need better understandings of what actually happens in practice, that it is important to use a range of complementary research methodologies, and that qualitative research is particularly well suited to help us learn about actual practice. I suggested that academics who would like to do some empirical research but lack research training or experience would find it easier to do qualitative research.
This program was followed by another session discussing related issues about empirical research.