Unreliability of AI Detector in Turnitin
Hello,
You are receiving this message because you have used RIT's Turnitin Similarity product since April 2023.
As you may be aware, Turnitin introduced an AI detection feature within its Similarity Report in April 2023. At the time, institutions were not given the option to disable it. Since then, many institutions have shared concerns about the feature, and Turnitin now provides the option to turn it off.
Our own investigation also identified concerns (listed below). Therefore, RIT has decided to remove the AI detection feature from Turnitin on May 13, 2024. We will continue to monitor developments in AI best practices and technologies for teaching.
Please note that all other features of RIT's Turnitin instance, including Similarity Checking, Draft Coach for Google Docs, and Feedback Studio, will remain active.
The CTL encourages instructors to talk with their students about the ethical and practical use of generative AI in their courses. CTL staff are available to meet with instructors about course design and teaching strategies (request a meeting). Additional advice on the use of AI-generated content in courses is available at Generative AI in Teaching. As a best practice, instructors are strongly encouraged to include a statement in their course syllabus about how AI-generated content can or cannot be used.
For reference, the following are the most concerning issues with Turnitin's AI Detection feature:
- Instructors have no way to double-check the validity of the results before talking with students. With AI-generated text, there is no evidence trail to follow, only probabilities of patterns. This is in contrast to the Similarity Check in Turnitin, where instructors can review the matched source and confirm the similarity for themselves.
- Students do not see the score and do not have an RIT-vetted, supported tool to check their own work against an AI detector. This makes the AI detector feel more like a policing tool than an educational one, because students cannot benefit from knowing details about their paper before the final result. Students may not even know the AI detector was run on their content and may be surprised by the result when approached by an instructor.
- Classification errors and inconsistent results create a false sense of being able to monitor students, with little benefit to instructors.
- AI detectors are unlikely to ever stay current with generative AI tools. Standard office and productivity tools we use every day are adding AI features, making AI use even harder to detect. In addition, the models will become more capable and more human-like, making it harder to distinguish human writing from computer-generated text.
- AI detectors are easily fooled by prompt engineering and minor modifications to AI-generated text. A dishonest student can go undetected with little effort.
If you have questions, please contact the Center for Teaching and Learning.