It's alarming how many of my business law students use ChatGPT on their exams, even when I tell them they can't. I do essay exams, so I can always tell immediately. A student who doesn't talk all semester suddenly has an intimate understanding of counters to the poison pill. I don't even penalize them for it; they won't make it.

Long-form thinking is mostly dead. I remember taking essay exams with a blue book & pencil. I'd just sit and think. It would be a timed exam, and I'd still run scenarios, war games in my head. I'd be 4-5 layers deep in thought before I put pencil to paper. The best insight almost never comes immediately. But solutions are available to you if you don't use a crutch.

The ChatGPT generation can spit out pre-fabricated muck but has lost the art of wrestling with a problem. Any answer they give isn't really theirs. This crutch creates a desperate need for approval in everything. "I need to check with AI before I have an opinion. You can never be too careful with your own thoughts!" Everything needs permission. Nothing is unique.

And I'm all for AI use, by the way. But I do think using it too early & often is very dangerous for human development.
Some of the highest GPAs in my section never said a word in class. Don't you think you're judging students too harshly when they might just be introverted? Also, you can use exam software to lock out the rest of the computer and prevent AI use. It seems to me that you are giving students take-home tests or something that allows them to copy your issue spotters and paste them into ChatGPT to get answers. I can't help but think that this is really on you, so I'm glad you don't penalize them.
Nearly all of my law school exams were administered through software that locked every other application out while the exam was running. That was nearly 20 years ago, and I’m sure that type of software is still in use. Feels like an easy fix. On a different note, I like to ask AI to write counterarguments to the position taken in my initial brief or motion. It’s been a very helpful way to use the tool.
It is possible to design assessments that take generative AI into account, but this requires careful assessment design, so it would be worth asking a learning designer. If you set essays & just tell students they can't use ChatGPT, experience over the last few years shows they will. The smarter ones will use a different generative AI tool, since you didn't say they couldn't. If you have an unrealistic assessment rule and then don't enforce it, you are creating a problem for yourself, the institution, and in the long term the students. Essays on paper under examination conditions have an undeserved mythology. They don't test real-world skills, and they discriminate against those who have difficulty with exam conditions. There are few jobs where you have to write without using any reference sources or aids. Keep in mind that some people do need a crutch: if you ban aids for those with a disability, you are unlawfully discriminating.
I let my students use ChatGPT. I expect that they will. But they have to disclose it and know that I will grade their paper harder than someone who didn't. It's now become a bit of a cat-and-mouse game: what are the limits of it? I want to see how they respond to real-life scenarios. ChatGPT can give an answer; you can decide if it's the right one. So: support your response.
If it's a concern, why don't you make your students write the essays in class or administer oral exams?
Written essay exams are a primitive, outmoded, dinosaurish way to measure creativity and deep thought in a fast-changing world. The students are smart and alright, and they are our priority. The lazy, moribund, ivory-towered, superseded assessment modes of academia are the real problem.
I think you are arguing two things: the use of AI despite asking your students not to, and whether AI is a good tool to use in your professional capacity as a lawyer. The former speaks to an ethical issue, and you are correct that the use of AI in this scenario is wrong. As others have mentioned, there is a software solution to this problem. With respect to the second argument, I disagree. AI is simply another tool that can be used in practice on top of everything else, such as the internet or Westlaw. Adapting to and using new technology is a professional requirement where I practice.
Matthew Dearden I totally agree. I remember one of my bar exam prep instructors telling us that if you start writing within the first 5 minutes of an essay, you will likely fail. It's critical as attorneys that we THINK about a situation, and using GenAI is robbing law students and junior lawyers of that skill. Mind you, I use GenAI, but I know how to use it as a tool, not as a replacement for my critical thinking skills.
I disagree. AI in law is far from perfect, and that's an important distinction. Legal reasoning isn't just about citing precedents or knowing the rules; it's about interpreting nuanced contexts, crafting arguments, and understanding the human elements behind cases. These are areas where AI struggles, because it lacks the ability to grasp emotions, cultural subtleties, or the ethical dimensions of legal problems. Tools like ChatGPT might generate answers that sound polished, but they often miss critical nuances or fail to provide the kind of deep, tailored analysis that legal situations demand. I believe AI like ChatGPT doesn't replace thinking; it complements it. And students who use AI effectively aren't just regurgitating answers; they're learning how to frame questions, interpret outputs, and refine ideas.