Stone Soup Mini-Course:  My Students and I Can Do This!

In a recent post in this mini-course, I highlighted several impressive studies relying on qualitative data.  This post follows up on the one describing how you and your students can get great joy from doing Stone Soup interviews and focus groups.

This post uses my research to demonstrate that qualitative studies are readily doable by you and your students.  I am including some research instruments and publications for illustration.  Most of these studies don’t include quantitative data, such as frequencies of responses that would support generalizations to larger populations.  Even so, this qualitative data is very helpful for figuring out what survey questions to ask to get valid frequency estimates and do good quantitative analysis.

I will start by describing some relatively simple studies to evaluate aspects of programs and then describe academic studies.  Finally, I make some general observations.

In the next post, I will suggest a wide range of questions that you might want to ask to answer important theoretical and practical questions.

Evaluation Studies

A colleague and I were hired by a court mediation program to get feedback from parties.  We conducted five focus groups with 18 parents who had mediated parenting issues in the program.  We wrote a report consisting primarily of quotations from the subjects, organized by topic and the ideas expressed, with little analysis.  The quotes give a vivid feel for the parents’ perspectives.  The report includes appendixes with the focus group protocol and suggestions for conducting focus groups for programs like this.

I conducted focus groups with lawyers using a court civil mediation program.  The program director and I developed the focus group protocol to get their reactions to the program generally and to the timing of mediation.  We conducted two half-day focus group sessions with a total of about 20 lawyers.  The program director found the responses very helpful and proposed some changes in the program, which were adopted by the court.

A colleague and I were retained to study a school’s complaint procedures.  One procedure involved complaints against faculty or staff and the other involved charges that students violated the school’s rules.  We conducted interviews with 15 students, faculty, and administrative staff.  Most of the interviews were conducted as part of two focus groups:  one with students and the other with faculty.  We asked both groups about perceived problems and possible solutions, and we developed useful information and ideas.

When I was in grad school, I was hired to evaluate a wonderful “Teen Assistance Program” run by the University Extension.  In this program, extension agents in each county functioned as community organizers by convening stakeholders concerned about the welfare of teens in their communities.  The local groups developed surveys for teens to complete in school, and the groups used the data to develop programs addressing problems identified in the surveys.  Each local group decided what questions to include in the surveys and, after getting the results, what actions to take.  I conducted 29 telephone interviews with extension agents to learn how well the program worked.  As described in my report, the projects worked very well in most counties, though they prompted difficult conflicts in some counties where some people didn’t like the idea of studying sensitive issues like teens’ sexual behavior and alcohol use.

Academic Research

For my doctoral dissertation, I started with 13 in-person interviews of inside counsel, outside counsel, and nonlawyer executives to learn their opinions about litigation and ADR.  Based on these interviews, I constructed a standardized survey, which I conducted by phone.  There were 178 survey respondents, including 70 outside counsel, 58 inside counsel, and 50 executives.  I published the findings in two articles:  Failing Faith in Litigation? A Survey of Business Lawyers’ and Executives’ Opinions, and Getting the Faith: Why Business Lawyers and Executives Believe in Mediation.

I collected data from 10 federal district court clerks in connection with a program at the Eighth Circuit Judicial Conference.  Before the program, I sent the clerks a survey with 11 open-ended questions about how well their courts work.  At the program, I summarized their responses and led a discussion about the issues they raised.  Based on the surveys and the discussion at the conference, I published a study addressing issues raised by the so-called “vanishing trial” controversy, How Much Justice Can We Afford?: Defining the Courts’ Roles and Deciding the Appropriate Number of Trials, Settlement Signals, and Other Elements Needed to Administer Justice.

I conducted a study of the Divorce Cooperation Institute (DCI), a group of “cooperative” lawyers in Wisconsin.  Cooperative law is a process in which lawyers and parties agree to negotiate cooperatively, generally from the outset of a case.  Unlike collaborative law, there is no “disqualification agreement,” so the lawyers can represent the parties in court if desired.  This study is based on semi-structured telephone interviews of 10 DCI members, several surveys completed by a total of 77 DCI members, and discussion at an annual DCI seminar.  The study describes members’ views about cooperative practice and how it worked.  I published the results in Practical Insights from an Empirical Study of Cooperative Lawyers in Wisconsin as well as a short summary, Learning From “Cooperative” Negotiators in Wisconsin.

In an article about collaborative lawyers’ ethical duties to screen cases for appropriateness and obtain clients’ informed consent, Forrest (“Woody”) Mosten and I included data collected from eight collaborative law books and 126 websites.  We used these materials as “artifacts” for content analysis to summarize the views of collaborative practitioners.  This data complemented analysis of applicable ethical rules.  We published this as Collaborative Lawyers’ Duties to Screen the Appropriateness of Collaborative Law and Obtain Clients’ Informed Consent to Use Collaborative Law.

As a member of the ABA Section of Dispute Resolution’s Task Force on Improving Mediation Quality, I took the lead in designing and analyzing its research.  We conducted focus groups in 10 cities in the US and Canada.  Various members of the task force conducted the focus groups, which involved lawyers with extensive experience in civil mediation as well as civil mediators.  After we did about half of the focus groups, we started collecting surveys at the end of each session.  More than 200 people participated in the focus groups, and 109 respondents completed the surveys.  The Task Force also conducted individual telephone interviews with 13 parties in mediation.  The ABA SDR website includes the final report as well as the research instruments, data, and lots of other good stuff.  Here’s a short article summarizing the Task Force’s findings and recommendations, Doing the Best Mediation You Can.

I recently did a study based on 32 phone interviews with lawyers, using this interview protocol, about the case they settled most recently as well as general patterns in their negotiations.  This resulted in two publications:  A Framework for Advancing Negotiation Theory: Implications from a Study of How Lawyers Reach Agreement in Pretrial Litigation, and Good Pretrial Lawyering: Planning to Get to Yes Sooner, Cheaper, and Better.

For my latest study, I collaborated with Peter Benner, a dispute resolution professional and adjunct faculty member at Quinnipiac.  We used this protocol to conduct 15 phone interviews, primarily of inside counsel in large businesses that have what we call “planned early dispute resolution” programs.  Our full article is Why and How Businesses Use Planned Early Dispute Resolution.  We published two short pieces based on the study, How Businesses Use Planned Early Dispute Resolution, and How Your Company Can Develop a Planned Early Dispute Resolution System.

General Observations About Qualitative Research

Qualitative research is a tool for factual investigation of questions of interest.  So it is a means to an end, rather than an end in itself.  It should be used when it is particularly suited to answering the questions, often in combination with other research methods.  For example, the ABA Mediation Quality Task Force study, the study of the Divorce Cooperation Institute, and my doctoral dissertation illustrate how qualitative and quantitative data can complement each other to provide a much richer understanding than you can get from either alone.  And empirical data usually are analyzed in connection with doctrinal and theoretical issues.

Data from qualitative research is evidence, not truth.  So, rather than taking subjects’ statements at face value, researchers and readers should consider how well the evidence supports the propositions put forth by the subjects and the researchers.  Much like lawyers, judges, and juries, one should consider factors that support or detract from subjects’ credibility.  Are the statements internally consistent or contradictory?  Are there reasons why their accounts may not be fully accurate?  Subjects generally do not intentionally lie, but they may have faulty memories or may shade their accounts to avoid embarrassment.  Do the statements seem plausible?  How do they relate to other relevant information in this or other studies?

Using this kind of analysis, one can develop more or less confidence in particular pieces of evidence.  Lawyers make these factual assessments all the time, so analyzing qualitative research data can provide useful practice for law students.

My research instruments almost always had too many questions, some questions didn’t “work,” and I rarely got to ask them all, but that’s OK.  I generally put the most important questions at the beginning and developed ideas about which questions I might skip.

I approached my interviews with genuine curiosity and tried to convey to the subjects my confidence that they would tell “their truth.”  I did not cross-examine them, but when I was puzzled or knew that other people had different perspectives, I would respectfully ask subjects about this.  I certainly had ideas about what I would find, but I was open to being surprised and, indeed, asked questions testing other possible explanations.  I distilled my approach to interviewing in this document providing guidance for conducting and summarizing interviews.

After I collected data, I would put responses to each question in a separate word-processing file, identifying the source of each statement (usually with some code).  Looking at all the statements together, usually a coherent story would reveal itself.  Sometimes, there would be a consistent pattern within my sample; other times, subjects had different views.  I had a mental metaphor of creating a mosaic with the data or putting together a jigsaw puzzle.
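
For readers who are comfortable with a little scripting, the same grouping can be automated.  Here is a minimal Python sketch of that workflow, assuming the transcribed statements live in a spreadsheet exported as a CSV file.  The file name responses.csv and its subject_code, question, and statement columns are hypothetical illustrations I am making up for this example, not part of any of the studies above.

    import csv
    from collections import defaultdict

    # Group every statement under the question it answers, keeping a
    # source code for each subject so quotes remain attributable.
    by_question = defaultdict(list)

    # Hypothetical input layout: subject_code, question, statement
    with open("responses.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            by_question[row["question"]].append(
                (row["subject_code"], row["statement"].strip())
            )

    # Write one file per question, mirroring the one-file-per-question
    # word-processing approach described above.
    for i, (question, statements) in enumerate(sorted(by_question.items()), start=1):
        with open(f"question_{i:02d}.txt", "w", encoding="utf-8") as out:
            out.write(question + "\n\n")
            for source, statement in statements:
                out.write(f"[{source}] {statement}\n")

Reading each resulting file from top to bottom then gives the same side-by-side view of all the answers to one question, with each quote tagged by its source code.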

I tried to follow the data wherever it led rather than trying to force it into a neat narrative.  Since these projects had small, non-random samples, my conclusions were necessarily tentative, and I noted those limitations in the publications.  Even so, I think that they all produced valuable insights.
