At the Past-and-Future Conference last month, I was on a program with Doug Frenkel, Michaela Keet, and Donna Stienstra entitled, “Research and Scholarship with a Real-World Focus: Studying What Practitioners Actually Do.” This program followed one on research terminology and methodology, described in this post.
This program was in a conversational format, framed around several questions. The fabulous Donna Shestowsky took notes of the conversation. Because of the limited time in the program, the panelists didn’t have a chance to address the questions as much as they would have liked, so in this summary of the main points, they have revised and extended their remarks.
Obviously, the program couldn’t give these questions the attention they deserve, so this conversation needs to continue.
What should be the most important and realistic goals of future empirical research on dispute resolution? For example, should it seek to:
- develop clearer concepts and language
- identify key contextual factors affecting processes
- develop new theories and insights
- develop valid generalizations
- help establish consensus on best / worst practices
- help design conflict management systems
How should we deal with the fact that DR processes are so complicated and affected by many contextual factors that it is hard to generalize?
Donna: Because you can’t do some of the things on the list without doing some other things first, my top two priorities would be (1) clear concepts and language, and (2) better understanding of the context. We could have greater consistency across studies if, for example, we used similar definitions of important variables like party type and case outcome. Regarding context, to take courts as an example, many important variables might be affected by whether mediators are compensated, but we can’t know if compensation isn’t included in a study.
But there’s a prior step. We need to stop being in such a rush to do research to change the world—which usually means quantitative research. Well before we draw up a list of variables or design a questionnaire, we should take a step back and immerse ourselves in the thing we’re studying, so we can use the right concepts and develop appropriate measures.
Regarding the complexity of dispute resolution processes and the many contextual factors, there’s a quick answer that’s hard to execute: experimental study designs. There’s also a longer answer that’s hard to execute: repeated, in-depth observation in multiple settings over time.
Doug: I would add “best and worst practices” to the list of study priorities in the following sense: We academics have been engaged in an ideological debate for over 30 years about what makes for good / ethical mediation but, in trying to shape those norms, have gathered little data on what consumers value or how various behaviors operate. For example, we tout the unique mediation process property of participant self-determination. And yet, perhaps ironically, we mostly define that concept in terms of our own preferences, comfort, or values instead of those of users or what might be learned about the actual impact of various persuasive or other interventions. Those kinds of things can be measured empirically and segmented by contextual variables (subject matter, lawyered or not, mandatory vs. voluntary participation, etc.).
Audience: We need randomized controlled studies to help us generalize to different contexts.
John: It’s very hard to do randomized experiments about dispute resolution in the real world. That methodology doesn’t solve the problem that so many contextual variables may affect the outcomes. If you do an experiment in one context, it won’t necessarily generalize to other contexts.
For example, in the 1990s, Donna Stienstra conducted a rare study involving randomized assignment in the US District Court for the Western District of Missouri. Some parties were required to mediate (group A), some parties were given the option to mediate (group B), and a third set weren’t permitted to use mediation (group C). On average, group A’s cases were resolved faster than those in both groups B and C. The same person mediated all the cases, which provides more confidence in the comparison between the groups, but it also raises questions about the generalizability to cases handled by other mediators. In addition, the fact that there were differences between groups A and B reflects the importance of contextual and program design features. Not all mediation is the same – even when conducted by the same mediator.
Audience: How problematic is it if a study doesn’t include factors that could affect the outcomes?
John: Omitting key potential independent variables leaves questions about whether the unmeasured variables are responsible for the observed outcomes. Of course, it’s impossible to include all potential variables in any study, especially in the dispute resolution context, where so many variables could be influential. That’s one reason why we shouldn’t rely on the findings of any single study and need multiple studies that include a more complete set of factors that could affect the outcomes.
Audience: Should we use empirical research to try to distill best practices or identify what practices we clearly should avoid?
John: I think that empirical research could help inform our understandings about what practices may be particularly appropriate and effective or not. Because of differences in context and complexity of DR processes, I doubt that we can make confident generalizations about the frequency and nature of good or bad practices. Ultimately, this is a judgment by the professional community – and empirical research can identify actual practices occurring with substantial frequency that the community might encourage, discourage, or (by advocating for legal rules) even prohibit.
The world is changing rapidly and DR practice is changing as part of that. What questions would be important for our field to study empirically? What are new forms of DR we should study? What are challenges or barriers for improving DR processes and systems?
Doug: One set of questions worth examining surrounds the in-person behaviors of state court judges in dealing with the overwhelmingly unrepresented body of litigants who appear before them. Do they conform to the classic (largely federal court-based) image of the passive arbiter when dealing with such parties? Do they provide counsel-type assistance? Do they apply the law or mete out “fairness” when adjudicating? Are they “settlers”? As the “ADR” field and court-annexed mediation started in large measure with traditional images of judicial conduct in mind, such current data might inform access-to-justice policy makers, court administrators, neutrals and judicial trainers going forward. Fortunately, observational and other empirical work has begun in this area in several parts of the country.
One new (to at least half of the states) form of DR worth studying is parenting coordination in child custody courts. In the growing number of jurisdictions that have adopted such systems, the responsibility for overseeing the enforcement of custody orders is delegated to legal, mental health, or other professionals whose task is to facilitate and, if necessary, arbitrate ongoing disputes in order to free courts from having to micromanage recidivist litigants. But what do these neutrals actually do? How do they balance mediating and decision-making roles? Should confidentiality apply to all or part of this process?
Finally, much is being done in terms of harnessing video and other technology in courts and in alternative processes where in-person participation is costly or impractical. But we know little about differences in emotional and other dynamics when conflict communication takes place over a screen. This would seem to be a fertile and important interdisciplinary area for study.
Michaela: In Canada, part of this rapidly changing world is the growing awareness about the justice system’s shortcomings: the “access to justice” problem. Since the release of the Roadmap for Change report written by a respected Supreme Court justice, priorities have shifted around the country. The report exposes how the justice system is failing average Canadians across socio-economic classes, and it has generated widespread concern about access to justice in Canada.
Donna: I talked with some people in the courts and in my office to get a sense of the challenges ahead. Here are the things I heard: access to justice, especially for self-represented litigants; declining resources; declining confidence in the courts; and developments in artificial intelligence.
What can research on dispute resolution do to understand and address these challenges? Here’s one quick point. We know from recent research that 2/3 to 3/4 of plaintiffs are individuals while 3/4 to 4/5 of defendants are something other than individuals. We also know from this research that outcomes from an ADR process were seen as better than outcomes from bilateral party negotiations, suggesting that ADR could play a significant role in enhancing access to justice.
John: We need to better understand lawyers. I interviewed 32 lawyers, asking about the last case they settled, starting from the very first meeting with the client. I think that this was very helpful in understanding how lawyers operate and how negotiation occurs during the entire life of a case.
Audience: How can we trust lawyers’ recollections about meetings that took place 1-2 years earlier?
John: Faulty recollection can be a problem in many studies of dispute resolution, involving every role in the process. In my study, lawyers described the case they settled most recently, which should have reduced problems of recollection. Also, I was particularly interested in their perceptions of the cases, not “just” the facts.
Data from qualitative interviews like these is recognized as a legitimate form of social science data. All data is imperfect and susceptible to various kinds of errors, including human responses to quantitative fixed-choice questions. Researchers and readers should evaluate potential errors in lawyers’ interviews as they should for all data.
Audience: We need more direct access to the actual parties. We can’t assume that lawyers know what the clients want or think.
John: You’re right that lawyers generally have different perspectives than their clients and that we should collect data from parties whenever appropriate and feasible. Unfortunately, there are practical challenges in recruiting parties for studies. And it’s important to understand lawyers’ perspectives because they manage the cases and influence parties a great deal. They generally are involved in the case from the outset, usually well before mediators are brought into the process in civil cases where parties have lawyers.
Audience: I know of a situation where research findings didn’t show positive effects for dispute resolution and the researcher didn’t publish the findings.
Donna: Researchers should publish the bad with the good and should be neutral. The ethical approach is to not hide findings.
What factors do you hypothesize to cause changes in DR – and thus would be important to study?
Michaela: Here again, at least in Canada, access to justice issues are likely to define the future of DR. Here, for example, are four shocking statistics from the Canadian Access to Justice study: (1) 90% of legal needs are going unmet; (2) only 6.5% of legal problems end up in the formal legal system; (3) 65% of people with legal problems think that nothing can be done about these problems; and (4) 50% of Canadians think they will self-represent in legal cases.
For those of us in the DR field, the assertion that formal legal processes (especially litigation) can’t solve all problems is not a revelation. However, for the first time, DR processes are also under scrutiny. While we have known for a long time that 98% of cases (or even more) do not proceed to a trial, we don’t really know why, or where they end up. The Roadmap for Change report suggests that a large number are still not getting resolved. And that’s true even though mediation has been well-integrated into Canadian courts for up to 20 years.
It’s therefore important to study the real journey of these claims – and how people are qualitatively experiencing their encounters with DR processes along the way.
What do we do with these statistics as dispute resolution professionals? We know that these statistics come from a world where we already have dispute resolution options. Where is the access to justice?
Audience: We could really use similar statistics here in the US along with the mandate and the money to try to fix things. What would these statistics be if we didn’t have DR options? We know that many people self-represent in the US, which is not good. What role can court ADR play in helping them? Limited scope counsel programs are one idea.
What are good methodological approaches for designing empirical research to get realistic understandings of what happens in the real world?
Michaela: Deeply understanding people’s experiences is best done through qualitative research. To better understand certain issues, we need to study things differently.
Doug: Some simple designs can yield important results. One that comes to mind is the fairly large-scale 2007 Swaab-Brett study in the Netherlands of caucuses conducted pre-mediation and, more conventionally, after a joint session in family and labor disputes. Based solely on mediators’ responses to post-process questionnaires, it yielded some interesting and potentially important data (correlations, if not actual “findings”) about the desirable timing, frequency, and purpose of both forms of caucusing.
Donna: We should use multiple methodologies, including focus groups, surveys, interviews, and observations. Convergence of results can give us confidence about our understanding of a particular phenomenon. Over many studies, patterns and generalizations can emerge. On the particular point of experimental field studies, these are tricky in a court setting because judges may be wary of treating cases differently, and random assignment of cases (e.g., to ADR versus no ADR) risks taking away from some cases a procedure litigants have become accustomed to.
Audience: Both qualitative research and quantitative research have value. It all depends on what question you are trying to answer. Randomized experiments are not always the best, either. Again, it depends on what question you are trying to answer.
John: I agree that both qualitative and quantitative research have value. Ideally, researchers would use both approaches in combination. Qualitative research is especially useful in gaining new insights. Quantitative research is especially useful in making population estimates and testing hypotheses.
Audience: It’s already hard to publish work on DR in regular law reviews, but when it’s empirical, it is even harder.
John: Articles can combine theoretical and empirical material. So empirical research need not be limited to articles that only report the results of a study. Indeed, good articles reporting empirical results generally do that anyway to some extent. But you can write articles with more of a balance between theoretical and empirical material.
Qualitative research can pose fewer challenges than quantitative research and may be appealing for law reviews. Qualitative studies have produced juicy quotes that make for compelling reading and have gotten published. It’s generally a lot easier to do qualitative research than quantitative research, as I described in What Me – A Social Scientist? I have done a fair number of qualitative studies, and I never had to write a grant proposal or seek funding. Of course, I did have to get IRB approval, as with any human subjects research, but that wasn’t very hard.