Tired of reading student papers that sound like they were written by the love child of a robot and a corporate communications consultant?
You’re not alone.
Many law school faculty are worried that students are using AI tools to cheat – undermining their own learning and violating academic integrity policies.
Some students do misuse AI, and faculty are right to be concerned.
A short article, Turning Risks of Cheating with AI into Opportunities for Better Teaching, offers practical strategies for improving learning outcomes and reducing cheating with AI. It suggests redesigning assignments in ways that make cheating riskier: students may be deterred when cheating becomes harder, less effective, and more likely to result in lower grades.
The article draws from two recent pieces – Solving Professors’ Dilemmas about Prohibiting or Promoting Student AI Use and De-Skilling or Re-Skilling? The Case for Smarter Writing Assignments – and offers concrete ideas about assignment design, policy clarity, grading rubrics, and classroom conversations that can make a real difference.
If you’re not sure how to handle the risks of students using AI to cheat, the article suggests some concrete paths forward.
By rethinking your assignments to improve teaching generally and by clearly communicating your expectations, you can substantially enhance student learning and reduce the temptation to cheat.
Take a look.