Stone Soup:  A Model for Gathering Data at Continuing Education Programs

 

Recently, Susan Yates and I conducted mediation trainings on behalf of the United States District Court for the District of New Hampshire, the New Hampshire Judicial Branch Office of Mediation and Arbitration, and the University of New Hampshire, School of Law.

As part of the trainings, we collected survey data and focus-group-like comments from the training participants.  We did this both to provide additional information for the participants and to test and demonstrate the Stone Soup idea of using continuing education programs to generate and disseminate information about dispute resolution practice.  This process of gathering information is useful for getting practitioners’ perspectives about specific issues, even though it doesn’t explore specific cases in depth.

The training participants have a lot of mediation experience (both as mediators and lawyers) and we described our plan as having people learn from each other as well as the trainers.  Program evaluations indicated that the participants found this to be very useful.

This post describes what we did and provides documents that you can adapt if you want to do something like this in a continuing education or conference program.  People are gearing up for the annual conference of the ABA Section of Dispute Resolution, so you might consider this if you are planning a program for the conference.

A companion post summarizes the key takeaways from the trainings we did and includes links to more detailed presentations of the data.  Given the responses in the program evaluations, I suspect that the trainees will find this information quite interesting.  Although DR experts may find some of the ideas familiar, I think there are some interesting nuggets to refine our understanding of how practitioners experience mediation.

If you want to do something like this, you might or might not need to get approval from the research ethics body (called “institutional review boards” or “IRBs” in the US).  These requirements apply only to university and college faculty.  Others who collect information at educational programs do not have to comply, though it’s good practice to follow the same procedures to provide informed consent and protect confidentiality.

Even faculty who want to share the information solely with the program participants probably won’t need to get approval.  If the host takes responsibility for taking and distributing the notes, it’s very unlikely that you would need ethics approval.  A senior staffer at my IRB told me that I wouldn’t need IRB approval if I wanted to manage the process of collecting and editing the data.  However, ethics bodies vary a lot in their requirements.  So if you want to collect and distribute data from a continuing education program, you might check with the appropriate authority at your school.

I did request and get IRB approval for this data collection because I wanted to share the results generally and to develop models that others could use if they want to get approval from their schools.  At the end of this post, I include the documents I used, which you can adapt if you want to do something similar, particularly if you want to use the data for your research.

Focus-Group-Like Data From the Training

The easiest thing to do is to ask questions during your program and have someone record the responses.  As Susan and I planned our program, we included questions we wanted to ask the participants as well as questions following up their comments during the training.  We also asked questions in debriefing simulations and included people’s responses in the data.

At the beginning of each training, we explained that we planned the event as a two-way learning process.  We knew that they wanted to learn from us.  We told them that we wanted to learn from them (and have them learn from each other), acknowledging that the audience had a lot of mediation experience.

We asked our hosts to take notes on a computer, as if they were diligent students or attentive program attendees.  Essentially, they served as notetakers for the group.  We did not make audio or video recordings for two reasons.  First, it is much quicker and easier for people to read a summary than to listen to or watch a recording.  Second, making a recording would have raised significant confidentiality concerns for the IRB.  Our notes do not include people’s names or other identifying information, which avoided this issue.  Even if you don’t get ethics approval for a program, it’s good practice to omit this information from the notes.

Once you get the notes, you can decide how much you want to edit them.  At minimum, you will want to clean them up so that they are understandable, but this doesn’t need to be polished prose.  During a program, notetakers generally will have time to make only cryptic notes rather than verbatim transcripts.  Soon after the program, you and/or the notetaker should flesh out the notes so that people can understand what they mean.  It’s important to do this as soon as possible because memories fade quickly.

If you want to do a little more, you can weave the notes into an integrated analysis for distribution to the attendees.  If you want to do even more, you can incorporate the data into an article for publication.

To get a feel for some “cleaned up” notes, take a look at the notes from our federal court and state court trainings.  To see a more integrated analysis, take a look at this.

We covered a lot of issues in our trainings, so we didn’t explore any of them in great depth.  You can focus on a smaller number of issues and do more to flesh out people’s perspectives.

As you will see, cooking up these summaries is a piece of cake.

Survey Data

Susan and I are interested in empirical research, and we wanted to collect quantitative survey data as well as the qualitative focus-group-like data.  We did the survey both to collect interesting data and to demonstrate how you might do it in your future programs.  During the trainings, we presented some of the survey data, which provided the basis for more-informed discussion (which, in turn, generated some qualitative data).

Data from the surveys and discussions have complementary advantages and disadvantages.  Survey data reflect the views of the entire sample and are more generalizable, but do not provide as much depth.  The comments during the training help explain people’s thinking to a greater extent, but do not permit generalizations.

As described in my unforgettable post, What Me–A Social Scientist?, doing survey research is more difficult than the qualitative focus-group-like research.  Conducting a survey takes a lot more time and planning and it is much easier to mess up.  Indeed, despite our experience and care in developing our survey, one question is ambiguous, making it harder to interpret the responses to that question.  (I’d like to say that we messed this up to demonstrate how easy that is to do.  But that would be a lie.)

We developed multiple drafts of the survey and asked our hosts to make sure that the survey would make sense to their population and that we would produce data that the hosts would find useful.  We tweaked the language quite a bit and omitted a question that might have annoyed some respondents but wasn’t that important to us.

We developed an online version of the survey using the Qualtrics program and the hosts provided a link to the survey with the program registration materials.  A few weeks before the trainings, they sent reminders asking people to complete the survey if they hadn’t previously done so.  Predictably, this generated a good number of additional responses.  At the training itself, we distributed a hard copy version of the survey, which generated even more responses.

After the training, we analyzed the data.  This was a very basic analysis, mostly of distributions of responses with a few “crosstabs” using some variables to compare responses of particular sub-samples.  We didn’t do any statistical significance tests.  Even so, I think that the data are interesting and useful.
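For those curious what this kind of basic analysis looks like in practice, here is a minimal sketch in Python.  This is not the code we used, and the survey question, response categories, and data are all invented for illustration; it simply shows computing a distribution of responses and a simple “crosstab” comparing sub-samples.

```python
# A minimal sketch of basic survey analysis using only the Python
# standard library.  The question, response categories, and records
# below are invented for illustration.
from collections import Counter

# Each record is (respondent role, answer to one survey question)
responses = [
    ("mediator", "often"),
    ("lawyer", "rarely"),
    ("mediator", "often"),
    ("lawyer", "often"),
    ("mediator", "rarely"),
]

# Distribution of responses to the question across the whole sample
distribution = Counter(answer for _, answer in responses)
print(distribution)  # Counter({'often': 3, 'rarely': 2})

# A simple "crosstab": compare sub-samples (mediators vs. lawyers)
crosstab = Counter((role, answer) for role, answer in responses)
print(crosstab[("mediator", "often")])  # 2
```

With real data, a spreadsheet or a statistics package would do the same job; the point is only that distributions and crosstabs are simple counts, well within reach for non-specialists.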

Working with the IRB

Getting IRB approval was pretty easy because this was considered an exempt project under 45 CFR 46.101(b)(2), as the subjects couldn’t be identified in the data and disclosure of the data could not harm them.  The term “exempt” is confusing because even though projects may satisfy these requirements, people still need to submit such projects for the IRB to approve them.

I submitted this application for the online survey, which I wanted to get approved quickly so that training participants could start taking it when the program was first publicized. Later, I submitted this application for an amendment, authorizing me to distribute a hard copy of the survey at the training and to collect the focus-group-type data.

Here is the online survey and here is the hard copy version of the survey.  Note that both survey forms include material to obtain subjects’ informed consent.

For the focus-group-like data, training participants were given this informed consent document when they arrived at the training.  At the beginning of each training, we used these PowerPoint slides to describe how we planned to conduct the focus-group-like process.

IRBs generally want to see the material soliciting subjects, so we provided copies of the training announcements, including references to the survey.

As you can see, this is not rocket surgery.  You can do this – at least the focus-group-type data collection.

If you want to conduct a survey, especially in advance of the program, that requires some real time and planning.  It also requires careful drafting.  Here’s a short handout I developed with advice about designing questionnaires.  Be sure to pre-test your survey to make sure that the questions make sense to the subjects and that the data will be meaningful.