A Toolkit for Addressing AI Plagiarism in the Classroom
This guide was created by CommonLit.org and Quill.org for educators.
This is a quickly developing topic. The guide was created in early February 2023 and last updated on July 30, 2023.
New articles and updates on this topic appear every day; searching online is the best way to find the most current resources and explore what else has been said.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Suggested attribution: "ChatGPT, AI, and Implications for Higher Education" © 2023 by Evangeline Reid, Jake Bailey, and Aurora University is licensed under CC BY-NC-SA 4.0.
Free Detection Tools
Because each of these detection tools has limitations, running every sample through multiple detectors will give you the most reliable result.
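To make that cross-checking concrete, here is a minimal sketch of what aggregating several detectors' verdicts on one sample might look like. The detector functions, scores, and threshold below are hypothetical placeholders, not real APIs; most actual tools (Turnitin, GPTZero, and similar) expose a web form or a paid endpoint rather than a Python call.

```python
# Minimal sketch: cross-check one writing sample against several
# AI-text detectors and flag it only when a majority agree.
# All detector functions here are hypothetical stand-ins.

from typing import Callable

def detector_a(text: str) -> float:
    """Placeholder: returns a 0-1 'likely AI-generated' score."""
    return 0.62  # illustrative value only

def detector_b(text: str) -> float:
    return 0.35  # illustrative value only

def detector_c(text: str) -> float:
    return 0.71  # illustrative value only

def cross_check(text: str,
                detectors: list[Callable[[str], float]],
                threshold: float = 0.5) -> dict:
    """Run every detector and report how many flag the sample."""
    scores = {fn.__name__: fn(text) for fn in detectors}
    flags = sum(score >= threshold for score in scores.values())
    return {
        "scores": scores,
        "flagged_by": flags,
        "majority_flagged": flags > len(detectors) / 2,
    }

sample = "Paste the student writing sample here."
print(cross_check(sample, [detector_a, detector_b, detector_c]))
```

The point is the aggregation step: no single score decides the outcome, and disagreement between tools is itself useful information to bring into a conversation with the student.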
Learn more about Turnitin, similarity reports, and how AI writing detection works:
How to spot AI-generated text | MIT Technology Review
How To Check If Something Was Written with AI | Gold Penguin
How AI Writing Detection Works | Gold Penguin
Research on Detecting AI-Generated Text
More articles will continue to be released on this topic. See the searching suggestions above to learn how to find them.
"Can AI-Generated Text be Reliably Detected?" by Vinu Sankar Sadasivan, Aounon Kumar, Sriram Balasubramanian, Wenxiao Wang, and Soheil Feizi (2023). Abstract:
The rapid progress of Large Language Models (LLMs) has made them capable of performing astonishingly well on various tasks including document completion and question answering. The unregulated use of these models, however, can potentially lead to malicious consequences such as plagiarism, generating fake news, spamming, etc. Therefore, reliable detection of AI-generated text can be critical to ensure the responsible use of LLMs. Recent works attempt to tackle this problem either using certain model signatures present in the generated text outputs or by applying watermarking techniques that imprint specific patterns onto them. In this paper, both empirically and theoretically, we show that these detectors are not reliable in practical scenarios. Empirically, we show that paraphrasing attacks, where a light paraphraser is applied on top of the generative text model, can break a whole range of detectors, including the ones using the watermarking schemes as well as neural network-based detectors and zero-shot classifiers. We then provide a theoretical impossibility result indicating that for a sufficiently good language model, even the best-possible detector can only perform marginally better than a random classifier. Finally, we show that even LLMs protected by watermarking schemes can be vulnerable against spoofing attacks where adversarial humans can infer hidden watermarking signatures and add them to their generated text to be detected as text generated by the LLMs, potentially causing reputational damages to their developers. We believe these results can open an honest conversation in the community regarding the ethical and reliable use of AI-generated text.
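For readers who want the shape of that impossibility result: as we read the paper, it bounds the best achievable detector performance (AUROC, where 0.5 is random guessing and 1.0 is perfect) by the total variation distance TV between the model's text distribution and the human text distribution. The sketch below is our paraphrase of the bound; the exact statement and its conditions are in the paper itself.

```latex
% Sketch of the paper's impossibility bound (our paraphrase):
% for any detector D distinguishing model text M from human text H,
\[
  \mathrm{AUROC}(D) \;\le\; \frac{1}{2}
  + \mathrm{TV}(\mathcal{M}, \mathcal{H})
  - \frac{\mathrm{TV}(\mathcal{M}, \mathcal{H})^{2}}{2}
\]
% As language models improve, TV(M, H) shrinks toward 0, so the
% bound approaches 1/2: even the best detector can do little
% better than a coin flip.
```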
"GPT detectors are biased against non-native English writers" by Weixin Liang, Mert Yuksekgonul, Yining Mao, Eric Wu, and James Zou (2023). Abstract:
The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse.
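The study's core measurement is easy to reproduce in outline: run a detector over two sets of genuinely human-written essays and compare how often each set is falsely flagged as AI. The sketch below assumes a hypothetical detect(text) function returning a 0-1 AI-likelihood score; the corpora and threshold are illustrative only.

```python
# Sketch of the fairness check described above: compare how often a
# detector falsely flags human-written text from two groups of writers.
# `detect` is a hypothetical stand-in for any AI-text detector.

def detect(text: str) -> float:
    """Placeholder AI-likelihood score in [0, 1]."""
    return 0.5  # replace with a real detector call

def false_positive_rate(human_texts: list[str],
                        threshold: float = 0.5) -> float:
    """Fraction of genuinely human-written samples flagged as AI."""
    flagged = sum(detect(t) >= threshold for t in human_texts)
    return flagged / len(human_texts)

native_essays = ["essay by a native English writer...", "..."]
nonnative_essays = ["essay by a non-native English writer...", "..."]

fpr_native = false_positive_rate(native_essays)
fpr_nonnative = false_positive_rate(nonnative_essays)

print(f"False-positive rate (native writers):     {fpr_native:.1%}")
print(f"False-positive rate (non-native writers): {fpr_nonnative:.1%}")
# A large gap between the two rates is the bias the paper reports:
# every sample here is human-written, so any flag is a false accusation.
```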