When the ChatGPT mania kicked off last year, the first uproar came from academia. Teachers worried that students now had a potent tool for cheating on assignments, and, like clockwork, multiple AI plagiarism detectors popped up with varying degrees of accuracy. Students, in turn, worried that these detectors could get them in trouble even when the error rate was low. Experts, meanwhile, argued that spotting AI still comes down to intuition and natural language skills: looking for signatures such as repetitive phrases, out-of-character word choices, a uniformly monotonous flow, and more verbosity than a regular human conversation calls for.

No method is infallible, and the risk avenues keep multiplying while the underlying large language models grow ever more fluent. Among those avenues is the all-important essay required for college applications. According to a Forbes report, students are using AI tools to write their school and college essays, but academics and admissions committee members have developed a knack for spotting AI word signatures. One word that pops up suspiciously often in essays is “tapestry,” which is rarely used or heard in conversation, or even in text-based material, outside of poetry or works of English literature.

“I no longer believe there’s a way to innocently use the word ‘tapestry’ in an essay; if the word ‘tapestry’ appears, it was generated by ChatGPT,” one college essay editor told Forbes. He also warns that in the rare case where an applicant innocently, and in good faith, uses the word, they might still face rejection by the admissions committee over perceived plagiarism.

What to avoid?

The Forbes report compiles responses from over 20 educational institutions, including top-tier names like Harvard and Princeton, about how exactly they factor AI into the way they handle applications. While the institutions offered no concrete policy, the people handling applications hinted that spotting AI usage in essays is fairly easy, both from the word choice, which they described as “thin, hollow, and flat,” and from the tone. One independent editor has compiled an entire glossary of words and phrases she often sees in essays, which she tweaks to give the essays “human vibes.”

Some of the code-red AI signatures, which don’t even require AI detection tools to spot them, include:

  • “leadership prowess”
  • “stems from a deep-seated passion”
  • “aligns seamlessly with my aspirations”
  • “commitment to continuous improvement and innovation”
  • “entrepreneurial/educational journey”

These are just a few giveaways of AI involvement. They will also change, and may soon stop being relevant, as more sophisticated models with better natural language capabilities arrive on the scene. People outside academia have meanwhile established their own heuristics for detecting AI-generated text. “If you have enough text, a really easy cue is the word ‘the’ occurs too many times,” Google Brain scientist Daphne Ippolito told MIT Technology Review.
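The stock-phrase list and the “the”-frequency cue above amount to a simple heuristic scan. A minimal sketch of that idea in Python (the phrase list is taken from the signatures above, but the 4% frequency threshold is an arbitrary illustrative placeholder, not a validated cutoff, and this is nothing like a real detector):

```python
import re
from collections import Counter

# Stock phrases drawn from the list above; purely illustrative.
AI_PHRASES = [
    "leadership prowess",
    "stems from a deep-seated passion",
    "aligns seamlessly with my aspirations",
    "commitment to continuous improvement and innovation",
]

def heuristic_flags(text: str, the_threshold: float = 0.04) -> list[str]:
    """Return heuristic cues suggesting possible AI involvement."""
    flags = []
    lowered = text.lower()
    # Cue 1: verbatim stock phrases.
    for phrase in AI_PHRASES:
        if phrase in lowered:
            flags.append(f"stock phrase: {phrase!r}")
    # Cue 2: the word "the" occurring unusually often.
    words = re.findall(r"[a-z']+", lowered)
    if words:
        the_ratio = Counter(words)["the"] / len(words)
        if the_ratio > the_threshold:
            flags.append(f"'the' makes up {the_ratio:.1%} of words")
    return flags
```

For example, `heuristic_flags("My leadership prowess stems from a deep-seated passion for service.")` fires on two stock phrases, while an essay sentence free of those markers returns an empty list.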

Ippolito also pointed out that generative AI models rarely make typos, which turns the usual check on its head: “A typo in the text is actually a really good indicator that it was human written,” she notes. But it takes practice to get good at spotting the pattern, especially subtler traits like unerring fluency and a lack of spontaneity.
