
15 May, 2024 - 10 min read

How to avoid getting caught cheating with AI

...Yes, sorry, this is a clickbait title 👆

Last week, I facilitated a couple of sessions for our students at academyEX on how to avoid getting caught cheating with AI—or, in words I’d much rather use, how to use AI thoughtfully and transparently to help you learn—with the learning part being the most important!

AI Clickbait Thumbnail

Very clickbait AI YouTube thumbnail

Encouraging generative AI use

We have been encouraging our students to use generative AI tools (while also understanding the risks) for over a year now. This has been supported by our AI guidelines, which are underpinned by our values of Ako and Pono. However, we are still having regular conversations with students who are afraid to use the tools or are relying on them a bit too heavily in assessments that need more of a human touch.

In the interest of an equitable approach to AI literacy, we decided it was important to support all of our learners in understanding how they can use these powerful tools to experiment and learn without the fear of being ‘caught cheating’. In addition, I wanted to test how reliable AI detection checkers were and make the case that you probably could use AI and not be detected, but I also wanted to discuss, ‘Is that what we are here for?’ Overall, I was angling for an approach of radical transparency to enable learning.

A catalyst for the session was this example of how ChatGPT has been used in peer-reviewed journal articles. See for yourself: just enter “as of my last knowledge update” or a similar AI-bot-specific turn of phrase, such as “I’m very sorry, but I don’t have access to real-time information,” into Google Scholar. It won’t take you long to find an article that has slipped through the net.

An example of an academic paper using AI without checking its output

I found this quite shocking and used the example to point out that the use of AI in academic work is rife, and even blatant misuse can go undetected in the first instance.
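
If you want a feel for how mechanical this check is, here is a minimal Python sketch of the same phrase-hunt, run over your own draft instead of Google Scholar. The phrase list is just the examples above plus one other common giveaway; it is nowhere near exhaustive:

```python
# A toy phrase-scan mirroring the Google Scholar trick above:
# look for chatbot boilerplate that was pasted in unedited.
# The phrase list is illustrative, not exhaustive.

TELLTALE_PHRASES = [
    "as of my last knowledge update",
    "i don't have access to real-time information",
    "as an ai language model",  # another common giveaway (my addition)
]

def find_telltales(text: str) -> list[str]:
    """Return the telltale phrases found in a draft."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

# Usage before submitting:
# leftovers = find_telltales(open("draft.txt", encoding="utf-8").read())
# if leftovers:
#     print("Unedited chatbot boilerplate found:", leftovers)
```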

So, how do you cheat?

It will only take a quick search on YouTube and a 3.5-minute investment in a 7-minute video watched at 2x speed to work out how to evade Turnitin and all other plagiarism detectors—hence the overly enthusiastic clickbait image for this post!

Bypass Turnitin YouTube thumbnails

Endless AI detection videos

Having learnt these new strategies - like app smashing, where you push your text through a variety of apps (e.g. language translators) to fool the AI detection software, and/or using a range of prompts to make the text more ‘human’ - I thought I’d test them out to see whether it was worth our students going through the process (a sketch of the app-smashing idea is below).
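
To make the mechanics concrete, here is a hedged Python sketch of the app-smashing pipeline. The translate() helper is purely hypothetical - a stand-in for whichever translation apps the videos suggest - and no real service or API is implied:

```python
# Sketch of 'app smashing': round-trip text through a chain of
# languages to disturb the statistical fingerprints that AI
# detectors look for. translate() is a hypothetical placeholder.

def translate(text: str, source: str, target: str) -> str:
    """Stand-in for a translation app; plug in a real service."""
    raise NotImplementedError

def app_smash(text: str, hops: list[str], home: str = "en") -> str:
    """Translate through each language in hops, then back home."""
    current, lang = text, home
    for hop in hops:
        current = translate(current, source=lang, target=hop)
        lang = hop
    return translate(current, source=lang, target=home)

# The chain I tried later in this post:
# rewritten = app_smash(essay, hops=["es", "ro"])  # Spanish, then Romanian
```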

In the presentation, I shared how I compared a piece of writing I did back in 2020, well before any chance of using generative AI, with the ‘adversarial’ approaches mentioned in the videos. I prompted a range of generative AI tools to create a piece of writing similar to my original (a short literature review on bug-in-ear coaching), compiled all the approaches in this document, and created a table indicating the variance in AI detection. I also ran the entire report through Turnitin here.

Variance in AI detection example table

The range of approaches is in the left column, and the detection tools are in the top row.
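
For anyone who wants to reproduce the mini-experiment, the structure is just that grid: every rewritten version of the text gets scored by every detector. Here is a Python sketch of the bookkeeping, where each detector is a hypothetical callable (in reality most of these tools are web forms, so you would fill the cells by hand):

```python
# Build a detection-variance grid: rows are rewriting approaches,
# columns are detection tools, cells are "likely AI" scores.
# The Detector callables are placeholders for manual checks.

from typing import Callable

Detector = Callable[[str], float]  # 0.0 (human) to 1.0 (AI)

def build_grid(
    versions: dict[str, str],        # approach name -> text
    detectors: dict[str, Detector],  # tool name -> scorer
) -> dict[str, dict[str, float]]:
    """Score every version of the text with every detector."""
    return {
        name: {tool: score(text) for tool, score in detectors.items()}
        for name, text in versions.items()
    }

# versions = {"original (2020)": ..., "raw ChatGPT": ..., "translation chain": ...}
# detectors = {"Turnitin": ..., "undetectable.ai": ...}
```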

Thankfully, the original version was not detected as AI; this was also the case when I asked Grammarly to ‘improve’ my original writing - which I was slightly surprised about. Unsurprisingly, though, the raw ChatGPT and ChatGPT+Grammarly versions of the prompted output indicated a high likelihood of AI being used. I couldn’t even beat Turnitin by using ChatGPT, then translating the text into Spanish, then Romanian, and then back into English. It also seemed, in my experience, that undetectable.ai was easily fooled by Bing’s output. However, undetectable.ai was the only tool that was able to fool Turnitin.

If students want to use a tool like this, they have to pay a subscription, and for me, this falls into another level of deceitfulness, similar to contract cheating. I’m not saying that if the tool is free then you should use it to cheat, but this pay-to-cheat approach broadens the digital divide, disadvantaging learners without the financial means to pay for a subscription. The same applies to more advanced AI language models that are pay-to-use and less likely to be detected.

Implications for inclusivity and equity

Perkins et al. (2024) have taken my mini-experiment to another level, testing not just one text input but 114 different samples across a range of tools to understand the approaches that could be used to outwit AI detection. The study highlights inconsistencies in the AI detection results and concludes that the tools are unreliable and shouldn’t be used as the sole judgement of originality in student work. However, I was interested to read that, in the results, Turnitin did not identify any false positives while also missing 88 AI-generated examples, which indicates that if something is identified as AI-generated in Turnitin, then it quite likely is. Turnitin concurs, claiming there is a less than 1% chance of a false positive.
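
In detection terms (my framing, not the paper’s), that result says Turnitin’s flags have high precision but low recall:

$$\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}$$

With zero false positives (FP = 0), precision is 1: every flag is genuine. The 88 missed examples are false negatives (FN), which is what drags recall down.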

However, most importantly, Perkins et al. (2024) highlight that generative AI text-detection software, designed with the noble intent of ensuring assessment fairness, appears to be a double-edged sword. On one hand, the tools promise to uphold integrity in the academic realm; on the other, they seem to inadvertently create barriers to inclusivity and equity. This is largely due to the detection tools’ susceptibility to the ‘adversarial’ techniques described above, which can be exploited by anyone with the resources (generally financial) and the willingness to use them. Perkins et al. (2024) also raise concerns about the detection software’s propensity to penalise students who exhibit a higher level of complexity in their writing, including non-native English speakers. As it stands, AI detection tools work to a degree, but generative AI detection software may inadvertently erect barriers to inclusive assessment rather than dismantle them. This calls for thoughtful and innovative integration of AI tools in education - and one way to do this is through radical transparency.

How to be transparent?

If you are going by the book, and I mean the APA 7th guidelines, it is not possible to cite ChatGPT directly as the interaction is non-retrievable - although I wonder if this is still the case now that OpenAI supports shareable links to chats. Regardless, the APA 7th guidelines still suggest a clear way to acknowledge generative AI use, as seen below.

Citing ChatGPT Example

Example of how you could cite generative AI using APA 7th

The reference list would include:

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

In this example, I’ve indicated that a chat transcript is also available as an appendix. We have encouraged our learners to do this to enable transparency, linking to a Google Doc where they have copied and pasted the chat with the generative AI. You will also notice that this reference describes a process of collaboration with generative AI rather than indicating that I am quoting something it made up. Quoting ‘knowledge’ from generative AI is one of the risks of the tool, as we know the output can be inaccurate.

The previously mentioned research article from Perkins et al. (2024) has an interesting declaration at the end of the document. Again, this supports a level of transparency from the authors. However, ‘develop draft text’ seems quite ambiguous; I’d like to see more detail if this were a piece of student work.

Generative AI Declaration

Generative AI declaration from a research article by Perkins et al. (2024)

Another approach, which my colleague Paula Gair has introduced, is a critical reflection guide: a table that can be included at the end of any piece of work and clearly shows how the student has engaged with generative AI tools. The key elements of this approach enable transparency and give insight into what the student has learnt by using the generative AI tool. Prompts in the table ask the student to reflect on how they used it; whether it worked; whether it was biased, safe, and/or ethical; and what they could do next time to improve the interaction.

AI Reflection Framework

Critical reflection framework for generative AI use.

Feedback from students is that this could be quite a bit of extra work, and some of our supervisors recollected the days when Google search first came out and it was expected that you recorded and shared your search terms when submitting. I agree: it is extra work, and it could in fact make using generative AI tools slower, rather than the ‘100x productivity’ boost they are often sold as. However, it is important at this stage to be clear about how we use the tools, as this contributes to our learning. Perhaps when generative AI tools are integrated into every digital tool, this approach won’t be needed; until then, I think a simple critical reflection framework like this really supports open and transparent use of generative AI.

An education or a certificate?

Students are going to use generative AI, and unless we create clear guidelines that support radical transparency and thoughtful, ethical use, the tools could be abused. Those who have the means to use the tools (financially or through their level of AI literacy) will rapidly pull away from those who don’t, and the gap will only widen.

To end the session, I shared an image that I created using Copilot. I wanted to indicate that, ultimately, it was the student’s choice whether to use generative AI to learn or to cheat. Did they want an education gained through learning, or a certificate gained by using AI to do the thinking?

The thing I love about this AI image is that the word certificate is spelt wrong (‘certiffate’), and that just sums it up nicely!

AI Art Education vs certiffate

Copilot. (2024). Education vs certiffate [Digital image]. Retrieved April 15, 2024, from direct access through Copilot.

Prompt: create a visual meme that would make the point about a student having to choose between an education (by learning the lessons and experiences along the way) or a certificate (where they just use AI to answer the questions).

You can watch the recording of the session here… and the slides I used can be seen here.

This post was entirely written by a human - me, Tim Gander.

This post was originally published at https://tgander.substack.com/p/how-to-avoid-getting-caught-cheating

References

Perkins, M., Roe, J., Vu, B. H., Postma, D., Hickerson, D., McGaughran, J., & Khuat, H. Q. (2024). GenAI detection tools, adversarial techniques and implications for inclusivity in higher education. arXiv. https://doi.org/10.48550/arXiv.2403.19148