My latest Theory Meets Practice column in CPR’s Alternatives magazine, “How Legal and Dispute Resolution Professionals Can Manage AI Risks,” explains how to assess and manage the risks of using generative artificial intelligence (AI).
Many legal and other dispute resolution professionals use AI tools to draft documents, brainstorm ideas, organize information, and prepare for negotiations and mediations. At the same time, there are legitimate concerns about significant ethical and practical risks in using these tools.
The article describes how professionals can practically manage those risks. It focuses on three categories of risk that are especially important in legal and dispute resolution practice:
- Confidentiality risks
- Factual inaccuracies
- Sycophantic reasoning
Recognizing those risks should not necessarily lead professionals to avoid AI. Where AI offers significant benefits, they can learn to use it effectively, responsibly, and ethically.
Practical Safeguards
The article offers concrete, doable steps professionals can take, including:
- Turning off AI training and prompt retention where possible
- Keeping sensitive or identifying information out of prompts
- Reviewing and verifying all AI outputs before relying on or distributing them
- Labeling AI-generated material to prevent unintended over-reliance
- Prompting AI tools for candor rather than praise
Risk management depends on matching safeguards to the task. Some uses of AI – such as brainstorming, organizing notes, or drafting internal materials – pose relatively low risk when handled with normal care. Other uses – such as legal research, client advice, or court filings – demand much greater caution and closer human oversight.
Putting AI Privacy Risks in Context
One section of the article puts AI privacy risks in a broader – and often overlooked – context. Some professionals are wary of using AI tools while routinely relying on other digital platforms that present equal or greater confidentiality risks.
Email services, browsers, social media platforms, cloud storage, collaboration tools, and messaging apps routinely collect, store, and transmit sensitive information. Advertising-supported platforms, such as Google’s services, are engineered to collect and monetize user data at scale, frequently linking information across services and over long periods of time. By contrast, many AI tools – especially when training is disabled – retain prompts for limited periods and do not integrate them into advertising ecosystems.
Professionals should apply consistent standards across all the digital tools they use, rather than assuming that AI is uniquely risky while overlooking the greater risks posed by familiar technologies.
Moving From Fear to Informed Use
AI is already embedded in professional practice. With a clear understanding of the risks and sensible safeguards, professionals can use AI to improve their work without undermining confidentiality, ethics, or trust.
Take a look.