A letter to schools regarding AI Detectors
Image by Bing Image Creator: a boy with a VR headset interacting with a 3D model.


Dear Sir/Madam,

I would like to clarify a few things regarding the questioning of the academic integrity of students 'caught' by AI detectors. Labelling students, in effect, as dishonest and as lacking academic integrity is something that needs to be addressed, for the reasons I outline below.

First and foremost, AI plagiarism checkers of any kind do not work reliably. Whether a student uses AI or not, these detectors have been shown to incorrectly flag content that is in fact human-written. Turnitin itself, for example, has admitted that the data it provided about the accuracy of its own product in detecting AI content should not be relied upon; interestingly, it will not disclose what the amended accuracy rate is: https://www.turnitin.com/blog/ai-writing-detection-update-from-turnitins-chief-product-officer. On the same issue, I urge you to read this academic research on testing of detection tools for AI-generated text:

"This paper exposes serious limitations of the state-of-the-art AI-generated text detection tools and their unsuitability for use as evidence of academic misconduct. Our findings do not confirm the claims presented by the systems. They too often present false positives and false negatives."   ( Testing of Detection Tools for AI-Generated Text )

I would also like to share an example of how one university communicated, via a newsletter announcement, that its Teaching Center does not endorse any generative AI detection tools: "the Teaching Center will disable the AI detection tool .... effective immediately" (Teaching Center doesn’t endorse any generative AI detection tools)

You should also read some of the work by Associate Professor Ethan Mollick, Stefan Bauschard and others who are worth following in this area (some example posts below):

And here is another good example, from Joanne Villis.

Secondly, even if you, your faculty or the school insist on using these unreliable tools for such purposes, sending an email to parents and students that, however politely worded, says a student has cheated is neither an appropriate approach nor the best option. Even if you were unaware of the deficiencies of AI detectors, you are surely aware that AI use is by no means a binary or easily definable issue. Grammarly uses AI; does that mean the students in your class working with that tool are also plagiarising? This raises the following points of contention:


  1. How does the school ensure that tutors, older siblings and parents have made zero contribution to the same assessment task?

  2. What exactly does 'flagged by an AI detector' mean? Does the department or school have a threshold for, or any certainty about, the level of AI usage?

  3. Does the school have a policy that states the use of AI detectors and their thresholds? If there is none, then students surely cannot be accused of something outside the scope of such a policy.

I appreciate that AI is causing major disruption in education and that subjects like English and HSIE are feeling the brunt of it. I recognise that this places extra workload and pressure on teachers in those disciplines. However, if you are to persist with these ineffective tools, or in dealing with situations where plagiarism is suspected, perhaps a different approach is needed: wait until you can speak to the student, have a conversation about why you are querying their work (for example, a sudden change in writing style or language), and then ask questions that give the student the chance to demonstrate understanding of what they have written and to explain how they may have used AI to support them, if it was used at all.

Finally, I do understand that schools are trying to deal with this new age of AI, but there are plenty of guidelines stating that this is not the way to do it. There is a new generative AI in schools framework from ministers that will help you, and it makes clear that accusing students in this way is unjust. It states, for example, that members of the school community who are impacted by AI tools must be actively informed about, and have the opportunity to question, the use of the outputs of the tools you are using.

Students and parents are able to challenge any decisions informed by AI tools.


I think this is an opportunity for you to embrace the positive use of these new AI tools in school, within given parameters: to rethink the way you assess and guide students to understand your subject, and to recognise that they can get support from tutors and parents as well as from AI. Please do not rely on AI tools to ‘detect’ breaches of academic integrity; hopefully the research above has illustrated their inaccuracy. There are some great teachers out there talking about the use of AI in History, such as Matt Esterman, and about AI and student agency, such as Dr Nick Jackson.

The links below also provide some commentary on these issues:
https://preview.mailerlite.io/preview/282063/emails/85539571521029259
https://www.oneusefulthing.org/p/the-homework-apocalypse

Letters of concern of any kind can cause considerable upset. Generic accusations based on AI detectors are serious, and the tools you are using to make such decisions have been shown to be highly unreliable.

As a parent, an ex-teacher and someone who has been involved in education throughout my career, I will always support teachers and the school. However, you should really take the time to embrace this technology and use it as a way to re-engage, enthuse, and even save some planning time. It can be used to inspire, to augment learning and bring new experiences to life, and to get rid of the traditional, mundane and ineffective processes that add to teachers' workload.


Angela Phillips

Nationally Certified Lead Teacher and Director of Teaching at Westminster School Adelaide


Angela C., interesting read - shows what you’re doing is spot on!

Joanne Villis

Director Technology Enrichment| Masters Digital Technologies| Masters Gifted and Talented Education| Nationally Certified Lead Teacher| Stage 2 Design, Technology and Engineering Teacher


Some teachers cannot move beyond the AI plagiarism discussion, which is why I think it is important that schools have a policy in place that supports teachers and students, like the one I wrote (well, I updated our existing Academic Integrity Policy to include AI). Teachers have the responsibility to provide appropriate AI-use guidelines for each assessment, to explicitly teach the potential risks and ethical considerations of using AI, and to explicitly teach referencing/citing conventions. Within the policy I also included an acknowledgment that anti-plagiarism software such as Turnitin may not be accurate for detecting AI plagiarism, and that it is the student’s role to adhere to the acceptable use of AI for each assignment and to reference/acknowledge accordingly. Link to policy: https://www.linkedin.com/posts/joanne-villis-2b3654125_ai-aieducation-aipolicy-activity-7137660679846600704-zKqi?utm_source=share&utm_medium=member_desktop

Lynn McAllister

Award Winning Creator of 12 yo Pixie Van Dimple, Circularity Cheerleader & Educator at Capella House School


The wrong kind of #ai eh #pixievandimple! Pandora is out of the box now, and it is a fantastic tool used in the right way - yet it seems to have thrown us into turmoil in #education. It needs guidance and policy sorting out, not accusations of cheating! AI policing AI and plagiarism sounds like a precarious occupation! It will have us all tied up in knots!

Darren Coxon

Founder @ Node | AI Strategy, Training, Policy and Tools for the Future of Education


I cannot believe we’re still having this debate! Yet even university admissions boards still believe they can catch students out for submitting ChatGPT-written personal statements. Honestly, it’s so simple: just get students to submit a short video personal statement, two minutes on a topic of their choice. We need to move on, please!

Ray Fleming

Global AI and Education Industry Leader | Extensive sales & marketing experience | AI solution strategist | Customer centred thinker | Speaker | Media | PR


There's also a legal risk, as AI detectors are more likely to falsely classify writing by certain student groups, including learners for whom English is a second language, low-SES students, etc. The Federal Trade Commission started calling out the risk for organisations in July '23: "If you’re interested in tools to help detect if you’re getting the good turtle soup or merely the mock, take claims about those tools with a few megabytes of salt. Overconfidence that you’ve caught all the fakes and missed none can hurt both you and those who may be unfairly accused, including job applicants and students." https://www.ftc.gov/business-guidance/blog/2023/07/watching-detectives-suspicious-marketing-claims-tools-spot-ai-generated-content
