By now, you know that students are using AI. Some faculty express concern and hope that AI tools will just go away, or that students won’t use them. Not gonna happen.
One recent study found that 86% of university students use generative artificial intelligence tools occasionally, frequently, or very frequently. About 70% of those students use them for more than an hour per week. Other studies report even higher usage rates.
Suggesting that students not use AI seems similar to saying that the best way to avoid the risks of sex is simply to abstain. The analogy is imperfect but instructive.
Abstinence avoids the risks, but it’s not realistic to expect everyone to follow that advice. People can use AI for good reasons, just as they can engage in sex for good reasons. Ignoring the potential benefits of either one can make guidance less credible and less effective. When people do engage in sex or use AI, it’s better if they do so responsibly.
Failing to educate people about sex or AI may actually increase the risks of irresponsible behavior.
Law schools should prepare students for the world they are actually entering – not the one we might wish for. That includes helping them understand AI’s risks, benefits, and norms for responsible use.
Last week, I posted a 27-minute video, How Faculty and Students Can Use AI Effectively. It gives students basic guidance and encourages them to develop good habits, and it includes two AI demonstrations: one showing how faculty can use AI to generate a simulation, and another showing how students can use it to prepare to participate in a simulation.
I just wrote a 4-page article elaborating on why it’s important to teach students about AI and how to use it responsibly – A Video Guide for Teaching Law Students to Use AI Wisely. This article includes the PowerPoint deck with links to additional resources as well as the chat transcript from the demonstrations.
Many law schools now have AI policies. Some schools require faculty to develop course-specific AI policies. Giving clear guidance encourages students to act ethically. Although some students will ignore the rules and cheat, most probably want to understand the expectations and follow them.
Some faculty may assume others in their school will take responsibility for teaching students about AI. But if everyone makes that assumption, students will be left in the lurch. It is better for students to get a little too much encouragement than too little. Hearing similar messages from multiple sources can reinforce the importance of responsible use.
AI is now part of legal education and practice. As the video emphasizes, AI can mislead or assist, depending on how it is used. Faculty can help students develop the curiosity, humility, and judgment needed to decide when to use it or not – and to use it responsibly when they do.
Take a look at the video – and consider suggesting that your students do so too.
I appreciate John’s understatement when he writes, “The analogy is imperfect but instructive.” Ya think?
(I’ll try to maintain some classroom-level decorum here, even as the comparison invites some rather indelicate points…)
I should open by disclosing my deep skepticism about AI’s promise and fit _in the context of learning and skill development_.
Do we recommend that young people engage in sex without considering their emotional connection with the others involved, their partners’ expectations, their own and their partners’ age or stage of development, or their understanding of possible outcomes like pregnancy, STDs, or shame?
The impulse to get things done faster seems, um, largely unsatisfying. And to suggest that the ends justify any means is, well, a bit dangerous if not immoral and illegal. And to rush through (or skip altogether) the awkward and laborious stages of relationship development, of learning and listening, of understanding the dynamics of interest and consent… often leaves feelings of emptiness. Are all these tasks just annoying hurdles that delay inevitable desired outcomes?
I wonder whether an analogy to pornography, specifically easily-accessible online porn, might be more fitting.
Is it just me, or have you heard concerns about young people more often engaging in sex outside of meaningful relationships? Or disregarding signals that indicate hesitation or disinterest? Or enacting whatever sex acts they’ve seen online, and expecting partners to enjoy those as much as the people on screen appear to? And what about the shame upon learning that many of those people on screen were unwilling victims, whether coerced or drugged or unable to consent to their participation?
While it’s likely too soon to see the full effects of cognitive offloading, of de-skilling and over-reliance, of unaccountable plagiarism built on intellectual property theft, of output devoid of morals, principles, contextual awareness, or consideration of consequences, it’s not too early to observe that hype and boosterism rely on over-promising results and overlooking concerns. Sometimes they also rely on setting aside personal or professional ethics, or standard procedural guidelines and guardrails, all in the name of inevitability.
The rhetoric I hear from many colleagues, colleges, and companies about “get onboard now, or be left behind” is at once familiar and troubling. I, for one, remain unconvinced of the broad relevance of AI to dispute resolution practice, and by extension, to DR practitioner preparation.
Nuanced communication skills, principled and informed approaches, emotional awareness and self-regulation, careful consideration of relationships and consequences… these have always felt central to our work.
Can AI replicate much of what’s expected along these dimensions? Maybe. And you can go buy a blow-up sex doll or hire a prostitute most any time of day, any day of the week.