EAL learners: the new victims of AI plagiarism detection

EAL learners are affected by AI plagiarism detectors more often than others. Find out more.

Guest Blog by: James Abela

It’s hard enough when you are learning English: you have a limited range of vocabulary and you are trying to express yourself effectively. English as an additional language (EAL) learners have always sounded a little robotic. They use formulaic language and trot out the same phrases time after time, whereas ‘native’ English speakers have access to a huge range of vocabulary and rich sources of material, and are able to play with the language far more subtly.

These advantages often come from living in an English-speaking country and having 15-20 years to learn the language without the complications of other languages getting in the way. An adult EAL learner, by contrast, might have spent the equivalent of 12-18 months learning the language, and their English is likely to come from a few textbooks designed as a shortcut to get them communicating.

James Abela teaching EAL

A study conducted by Stanford University found that AI detection tools are severely biased and inaccurate when it comes to non-native English speakers. The study found that over half of the writing samples from non-native English speakers were misclassified as AI-authored, while detection of native speakers’ samples remained nearly perfect. The main issue stems from what’s known as “text perplexity”, a measure of how surprising a text’s word choices are to a language model. AI programs like ChatGPT are designed to produce “low perplexity” text in order to mimic more generalised human speech patterns. This poses a problem for anyone who happens to rely on more standardised, common sentence structures and word choices: their writing scores as predictable, and predictable looks like AI.
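To make the mechanism concrete, here is a minimal sketch of how a perplexity-based check works, not the workings of any specific commercial detector or the exact tools the Stanford team evaluated. It assumes the Hugging Face transformers library and the small GPT-2 model, and the two example sentences are invented for illustration.

```python
# Minimal sketch of perplexity scoring, the signal many AI detectors lean on.
# Assumes: pip install torch transformers (downloads the small GPT-2 model on first run).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2: lower means the words were more predictable."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the average cross-entropy per token.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return float(torch.exp(loss))

# A formulaic, textbook-style sentence versus a more idiosyncratic one (both hypothetical).
formulaic = "In my opinion, this is a very important topic for many people in the world."
idiosyncratic = "Frankly, the whole debate smells faintly of burnt toast and wishful thinking."

print(perplexity(formulaic))      # typically the lower score, i.e. more 'AI-like'
print(perplexity(idiosyncratic))  # typically the higher score, i.e. more 'human-like'
```

A naive detector simply puts a threshold on that score, which is exactly where EAL writers lose out: textbook phrasing is highly predictable, so it sits on the wrong side of the line.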


The fact that teachers need these AI detectors in the first place shows the pressure on them to deliver as much as possible in very limited time. This means there is less time to do writing in class, and some assessments are set as homework. Ironically, the simplest solution might be the oldest: all high-stakes tests should be sat under exam conditions. This might seem draconian, but it is why examinations never go away; they are the only way to ensure students do not get tutors, parents, friends or AI to write for them.

If you are setting homework, that doesn’t mean you shouldn’t use AI detection at all. But rather than having a conversation about cheating, talk about how the piece of work resembles AI-generated text, the steps a student can take to make their own work more authentic, and how they can properly cite sources, while ensuring that higher-stakes testing is done under exam conditions.

This way you can help all your students rather than being yet another roadblock to their academic success.


About the Author

With a career dating back to 1998, James has gained valuable experience both in the computing industry and in the field of education. He has contributed significantly to the academic world through the publication of various academic papers. His expertise has also extended to writing articles for well-known magazines, including Hello World, Linux User & the NST. Additionally, he has authored a best-selling book titled “Gamified Classroom.”

You can find the book at: https://amzn.to/3MdCZrh
