Creating Knowledge Together, Part 2: Revised Plans for The Stone Soup Project

In a series of posts, particularly this one, I wrote that the University of Missouri had tentatively planned to develop a database of reports about actual cases.  As described in my post, Creating Knowledge Together, the underlying goal was for faculty, students, scholars, practitioners, educational institutions, and professional associations to collaborate to produce, disseminate, and use valuable empirical data about dispute resolution.

We recruited this impressive Board of Advisors, with people in various roles in our community, to review our plans and make suggestions.  After receiving many valuable comments, we decided to defer developing a centralized database and instead to pursue the same goals through a decentralized Stone Soup project.  In particular, we still encourage faculty to assign students to collect qualitative data about actual dispute resolution practice, starting next year.

Fifteen faculty have already indicated that they would use a Stone Soup assignment next year, and we hope that others will join them.  You might do this by having students write case reports, as originally planned, or in a variety of other ways, as described below.

If you plan to use a Stone Soup assignment next year, please let me know what course you would use it in and in what semester.  I would like to connect faculty using it in the same course so that you can share ideas if you like.  This post provides documents that will help you develop a Stone Soup assignment.

In this post, I describe the genesis of the database idea, the value of qualitative data, and how we plan to proceed to encourage people in our community to create knowledge together as part of the Stone Soup Dispute Resolution Knowledge Project hosted by the University of Missouri.

Genesis of the Database Idea

I got the idea for the database after reading the article by David Matz and Adrian Borbely for our Tower of Babel symposium.  They sharply critiqued empirical data on negotiation, arguing that it is insufficient to understand the complex reality of negotiation.  Of course, quantitative research is necessary and helpful, but it inevitably condenses dispute resolution processes so much that it misses important realities that qualitative data can illuminate.  David and Adrian advocate developing book-length case studies.  In my article, I argue that it would also be helpful to develop a larger body of shorter case studies.

Their critique resonated with me because I had recently conducted a study in which I interviewed 32 lawyers, asking them to provide detailed accounts of the case each had most recently settled.  This study produced data challenging traditional understandings of the two-model structure of negotiation theory (the integrative and distributive models) and even the conception of negotiation itself.  Rather than estimating frequencies in a population, the study analyzed patterns in the case narratives.  I documented some cases that didn’t fit the traditional integrative and distributive models but constituted a third model (which I called “ordinary legal negotiation” and which might generically be called “norm-based negotiation”).  More importantly, I identified cases that didn’t fit into any of the models, and I suggested a framework for analyzing negotiation.

I also used these data for a companion article, Good Pretrial Lawyering: Planning to Get to Yes Sooner, Cheaper, and Better, which catalogued practices and recommendations from the lawyers.  These theoretical and practical insights would not have been possible with traditional quantitative methods.

Purpose and Validity of Quantitative and Qualitative Data

The Stone Soup project was intended primarily for qualitative theory-development rather than quantitative theory-testing.  This would be valuable considering the need to develop new theory, as identified in the Tower of Babel symposium, and the limitations of quantitative methods for developing theory about the complex phenomena of dispute resolution.

Although quantitative research has benefits, it also has significant limitations.  In studies relying primarily on quantitative data about dispute resolution cases, the entire experience of each case is collapsed into a relatively small number of variables reported by a single person in the case.  Ideally, research subjects can accurately reflect their experience within the questions and response options they are given, but that can be difficult or impossible for lengthy, complex interactions.  For example, how much does one learn about a case from a subject’s “somewhat satisfied” response?  Even when such research permits valid comparisons, it misses a lot of important information about the processes, interactions, and evolution of cases.

Just as our current theory needs renewal (for reasons described in the Tower of Babel symposium), I think we also need a renewed approach to empirical research.  Indeed, the two should go hand in hand.  It would be one thing if our quantitative research provided clear, consistent findings, but I think that’s generally not the case.  Our field is so broad that people don’t have consistent understandings of some basic concepts, and research findings often have been equivocal.  Although we would like to have robust empirical findings that people using ADR generally get better results, save time and money, etc., one can find many quantitative studies showing that particular interventions in mediation, for example, have positive effects, negative effects, or no effects.

We certainly should appreciate prior learning and use quantitative methods as appropriate.  But there is relatively little qualitative data, which we especially need in order to rethink our basic theories and to complement the predominant quantitative theory-testing approach.

Of course, there are potential problems with qualitative data collected by students.  But no data are perfect, there are ways to reduce the threats to validity of qualitative data, and readers should evaluate qualitative data just as critically as they evaluate quantitative data.  Some quantitative data are more persuasive or helpful than others.  Similarly, in my study, I decided that some of the 32 case narratives were more helpful than others.  In another post, I will describe how qualitative data can provide very valuable insights and how you can evaluate such data.

Moving Forward to Create Useful Knowledge Together

Although we will not proceed with the database project, we still want to encourage collaboration of faculty, students, practitioners, and professional organizations in our field to create valuable new knowledge.

Since we won’t be collecting case reports for a central database, faculty will have flexibility to use a range of assignments.  I still think that collecting reports of actual cases is a fabulous assignment that can produce very valuable insights.  That assignment raises greater concerns about protecting confidentiality than some other options, however, so you might prefer one of the alternatives described below.

One option would be to assign students to interview people about smaller aspects of cases instead of entire cases – a bit like eating an elephant one bite at a time.  For example, students might interview practitioners about specific issues – e.g., intense emotions, problems of distrust, cultural differences – and ask them to describe parts of real cases in which those issues arose.

Another option would be for faculty to use a class period like a focus group, asking selected practitioners about a particular set of issues.  Click here for guidance about planning and conducting Stone Soup “focus group classes.”

Faculty teaching clinical courses can produce greater knowledge by having students systematically analyze their observations.

Practitioner organizations may want to collaborate with academics to design small research projects of mutual interest.  For example, at the ABA conference in April, we conducted a session identifying some issues that would be helpful for practitioners to learn about.

Work products from these assignments (1) might be used only in particular classes, or (2) might serve both as course assignments and as pieces for publication in professional and/or academic journals.

In the coming weeks, I will write a series of posts to help faculty plan assignments for next year.  You can think of this as a free online course, with relatively short readings, no exams, and NO GRADING.  Topics will include some or all of the following.  As I add posts, I will add links to them in this post, which can serve as an index.  So you might want to bookmark this page.

  • Why new empirical knowledge is needed for the DR field, particularly knowledge derived from qualitative data
  • How qualitative research can provide valuable insights – and how to interpret qualitative evidence
  • Possible research questions of theoretical and/or practical value
  • Examples of qualitative research about DR
  • Basic instruction in qualitative research methods to maximize validity and value of evidence
  • Designing course assignments for students to collect evidence using qualitative methods
  • Training students to do interviews and other factual investigation
  • Short course in social science research methods relevant to DR
  • Advice about whether IRB review is needed and how to deal with IRBs
  • Identifying and recruiting research subjects
  • Writing useful – possibly brief – summaries of evidence collected
  • How research products can be used in teaching and research
  • Potential for developing a shared research agenda by academics and/or professional organizations
  • Developing a system for disseminating analyses, e.g., via this blog, SSRN, or a website

Watch this space.

 
