AI and Predicting Settlement

The National Law Journal reported yesterday on SettlementAnalytics and their predictive algorithms — “The world’s first quantitative legal measurement to indicate the likelihood of a lawsuit proceeding all the way to trial and adjudication,” according to the company.

I am profoundly skeptical.

Or perhaps more accurately, I can easily imagine distortions (or worse) masquerading as mathematical truths. Even the possibility of self-fulfilling, or at least not error-correcting, prophecies.

Perhaps the company does not make the claim I fear they are making: that they can predict with confidence, from the outside, at any given point in the arc of litigation, whether a specific case will or won’t settle, and, even more dubiously, how, when, and on what terms it will settle.

Perhaps they make no such claim. Perhaps I need not fear garbage-in, garbage-out. Perhaps I need not fear the omission of the multitude of factors I’ve seen real-life parties weigh that would not typically appear in an economic analysis. Perhaps their algorithm incorporates the larger business contexts of all the relevant players, their public and private concerns about precedent, the other opportunities and risks they see that might tie up human resources or capital, the potential for publicity, the interests and skills of the lawyers involved, the shifting landscape of decision-makers and their available information, and so on.

It’s possible that the exercise of describing the context for a piece of existing litigation, which must surely be part of any program like this, will help the parties prepare more strategically and thoughtfully for the full potential arc of the case’s life, along with the various potential outcomes along that arc. If that’s the effect, then this is great, even if the product of the program’s calculations is questionable.

But if it’s essentially an app that takes a few data points and predicts the likelihood of a case going all the way to trial (an eventuality that is not terribly common in any context, of course), I am at least initially reminded of the “compatibility tests” seemingly found in approximately a million magazines every month: “Answer these ten questions about you and your boyfriend/girlfriend/pet/conditioner, and we’ll tell you if he/she/it is the one you will be with forevermore…”

MM

4 thoughts on “AI and Predicting Settlement”

  1. I fear that this program is garbage-in, garbage-out. It focuses on the economics and financial complexity behind a dispute to calculate the risk of ending up in a courtroom. Some of the factors it considers are litigation costs, capital costs, time to trial, and fee structures. While these numbers provide concrete evidence of what litigation may entail, they are derived from non-economic factors that existed between the parties during the pendency of their case. So to completely ignore the non-economic inputs and use only the economic ones would skew the predicted risk of settlement for every case, because each case has unique qualities that cannot be captured in a data pool.

    For example: the litigant’s risk aversion, their psychological bent (glass half full, glass half empty), their problem-solving skills, their emotional capacity for conflict, what’s at stake, and how much skin they have in the game. Furthermore, the program ignores the other players involved in a case, such as the attorneys. Who the attorneys are matters a great deal: whether a newly minted lawyer trying to prove themselves, or a veteran attorney who has been around the block. An attorney’s experience matters, their risk aversion matters, their track record matters, their caseload matters, and their history with opposing counsel matters. All of these factors are ignored by the program.

    Without non-economic inputs, the perceived risk of not settling, as determined by the program, will be essentially garbage-out. Instead of informing the parties about the real possibility of going to trial, the program will either give parties a false sense of security or pressure a party into settling on terms they aren’t comfortable with.

  2. I agree with the skepticism about an algorithm being able to appropriately measure the likelihood of a case going to trial. As has already been mentioned, very few cases go to trial in the first place, so the data would likely be weighted in favor of that outcome.

    The highest doubt about such a program is raised by the human element of a case, specifically the client. Since it is the client who must decide whether a case can be resolved outside of litigation, the algorithm would need appropriate indicators relating to that client: age, race, education level, the kind of family unit the individual grew up in (two-parent household, single parent, divorced, adopted, foster, among others). And even with those generalities known, the data on them isn’t definitive enough to indicate a person’s predisposition to settling, or on what amount and conditions.

    The algorithm would also be unlikely to capture the genuine randomness or irrational choices individuals make. People do not always act in their best interest; even when an attorney has appropriately explained the situation, they may still not act in their best interest because of self-doubt, a belief of wrongdoing, or some other reason.

    An algorithm can only take into consideration those factors the programmer indicates as relevant, and some may be missed or wrongly weighted in the calculation. While the program may capture generalities, it cannot possibly come close to understanding the enigma that is the human element of a case.

  3. I am not as skeptical about the app if it bases its prediction on the type of law at issue. For instance, if the person using the app has a divorce case versus a product liability case, perhaps the app can tell the user how likely it is that the case will end up in trial, grounded in how many similar cases go to trial. However, if the app actually tries to predict the likelihood of going to litigation based on more complex factors, such as the relationship of the parties involved, the amount of money at stake, or even whether the law is on one party’s side versus the other, then it is difficult to imagine what type of algorithm is involved and how accurate that information would be, given that settlements can be, and usually are, kept confidential by the parties.

  4. Remember, only 2% of cases go to trial. It’s going to be hard to disprove the efficacy of the algorithm. Miss Cleo might be pretty good at making these predictions.
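The base-rate point in the last comment can be made concrete with a quick sketch. Assuming the 2% trial rate from the comment and a hypothetical pool of 10,000 cases, a "predictor" that simply says every case will settle scores 98% accuracy while identifying exactly zero of the cases that actually reach trial:

```python
# Sketch of the base-rate problem: with only ~2% of cases reaching trial,
# a trivial "always settles" predictor looks highly accurate.
TRIAL_RATE = 0.02          # fraction of cases that go to trial (figure from the comment)
n_cases = 10_000           # hypothetical sample of cases

n_trials = int(n_cases * TRIAL_RATE)   # 200 cases reach trial
n_settled = n_cases - n_trials         # 9,800 resolve without trial

# Naive baseline: predict "no trial" for every single case.
correct = n_settled                    # right on every settled case, wrong on every trial
accuracy = correct / n_cases
print(f"Always-settle baseline accuracy: {accuracy:.0%}")   # 98%

# But it never flags a single case that actually goes to trial:
recall_on_trials = 0 / n_trials
print(f"Recall on trial-bound cases: {recall_on_trials:.0%}")  # 0%
```

This is why headline accuracy alone cannot validate (or disprove) such an algorithm: any useful evaluation would have to look at how well it identifies the rare trial-bound cases, not how often it is right overall.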
