The agency literature is filled with discussions of how best to structure compensation systems to promote the right incentives for people we hire to do things on our behalf. Embedded in most of it is an assumption that we can tell when someone is doing a good job.
That’s not a bold assumption when I hire the neighbor kid to rake my leaves. They’re raked, or they aren’t. He gets paid, or he doesn’t.
With other endeavors, it can be more difficult to discern what “doing a good job” looks like.
Attorneys and others who operate in a setting in which binary determinations (win/lose) are the norm might be able to point to victory as evidence of a good job. But selection biases and other factors make this imperfect. [Insert heated, beer-driven discussion here of who was a better coach in the sport of your choice, the one who led the star-filled team to consistent victory, or the one who took the team of modest talent and exceeded expectations by compiling a genuinely mediocre record.]
How do we know if an arbitrator does a good job? Surely it would have to rest on more than mere survival of the minimalist standards of judicial review operating in most arbitral contexts. In an earlier indisputably.org entry, I wondered aloud whether we might devise a different mechanism for assessing non-binding arbitrator performance, but there are no easy answers here.
How about mediators? Mediator quality control is perhaps the issue on which I have done the most professional thinking, and I’m still largely at a loss. One common measure of mediator performance is linked to parties’ satisfaction, either through post-mediation customer survey instruments or through the market’s indirect satisfaction measures. But of course, the incentives may be wrong if the amount a party owes to a mediator depends on the party’s assessment of the mediator’s performance. Scott Peppet has done what I believe to be the best and most original treatment of the question of possible alternative compensation systems for mediators, but much work remains to be done.
All of this thinking, I’ll confess, has been sparked by the same thing that accounts for the sudden silence out of all of us at indisputably.org—grading. Beginning last week, I faced just over 800 pieces of student writing, ranging from short essays to full-length theses.
Many of these students are writing about Dispute Resolution in one form or another. I confess that I continue to struggle to know how best to assess performance in dispute resolution, just as I struggle to assess the students who study it. It’s not that I haven’t thought about it. A lot. I’ve even written about it in an academic journal.
But I am nowhere near having resolved how to do this best.
And I am nowhere near having finished my grading.