Discerning AI-Generated Text from Human Writing, Part 2

By Sean Brenner

In my last post, I discussed issues with the tells and detectors meant to discern AI-generated text from human writing and how those sorts of testing methods were inherently flawed. 

While it’s difficult to discern between the two, it isn’t impossible. You just need to understand what makes human writing and generative AI (GenAI) content different. That difference is fundamental, and, to some extent, it can be used to tell human writing and AI-generated content apart. 

The difference is simple: GenAI is incapable of actually writing.

Generating vs. Writing

Contrary to how it may seem, writing isn’t just a matter of following patterns to choose the next word in a sentence, which is what GenAI does. 

And it isn’t just translating abstract concepts into written words—that’s only half the process. As you turn abstract concepts into sentences and paragraphs, the act of writing them out shapes and refines those abstractions. The two parts of the process, the conceptualization and the writing, affect and alter one another.

Writing is a dialectical process, an interaction between the unformed thoughts in your head and the words you put on the paper. 

This process is often unconscious and subtle, but it heavily impacts every aspect of your writing. Let’s say, for example, that you start writing something without a specific tone in mind. The manuscript will take on a tone from the words you choose, the way you structure sentences, and other such factors—and this tone can influence how you think of what you’re writing about, which can lead to writing choices that perpetuate that tone. 

This is something that GenAI is incapable of. Even the most advanced GenAI models can’t actually think. GenAI models are exceptionally complex algorithms, closer to the system that chooses your YouTube recommendations than to 2001: A Space Odyssey’s HAL or Star Trek’s Data.

Unless or until we produce an AI with genuine consciousness, AI models will only be able to mimic writing through text generation. For procedural tasks, like writing directions, the difference might not matter, and GenAI can even do a passable job at those sorts of things. But it can’t genuinely write, and this not only affects the quality of its output but is also the key to differentiating GenAI content from human writing.

Think Like a Human

The most reliable way to discern AI-generated content is to analyze what a text is trying to say and how it says it. You want to focus on the big picture rather than the minor details, like punctuation (see part one). 

Because AI-generated text is not the product of a thought process, it’s prone to logic gaps, strange discontinuities, and sudden changes in the text’s argument, meaning, purpose, tone, or voice, especially with longer chunks of text. 

Of course, human writers make those kinds of mistakes as well. But GenAI models will make them in ways that a human generally won’t, which causes the final product to sound unnatural and sometimes bizarre, as in these situations:

  • An AI-generated article might start by mimicking a certain writing voice, but it can lose key elements of that voice after the first few sentences. 
  • GenAI models will often skip over important parts of the reasoning behind an idea or argument while explaining it, creating jarring logic gaps. 

For example, let’s look at a section from this paper on Asimov’s I, Robot that we can clearly see was generated by GenAI:

Analyses: As an AI language model, I don’t have the capability to read texts the way humans do, but I can provide general information to answer your question. “I, Robot” by Isaac Asimov is a science fiction novel that explores the relationship between humans and robots. The novel presents the following main features of translation: 

1. Multilingualism: The novel features characters who speak several languages, and translation plays a significant role in facilitating communication between them. 

2. Machine translation: The novel features the use of advanced technology to translate languages, including machines that can interpret and translate speech in real-time. 

3. The loss of meaning in translation: Despite the advanced technology, the novel shows that translation can still lead to a loss of meaning and interpretation, particularly when communicating with artificial intelligence (AI). 

4. The importance of precise language: The novel highlights the importance of precise language and the limitations of translation when it comes to conveying the subtleties and nuances of human communication. Overall, the main features of translation in “I, Robot” reflect the novel’s exploration of the relationship between humans and technology and the challenges that arise when trying to communicate with machines.

There’s a stark discontinuity here: The conclusion the AI model draws from the points it presents is that they “reflect the novel’s exploration of the relationship between humans and technology and the challenges that arise when trying to communicate with machines”—but this doesn’t follow at all from the four points of analysis that precede it. Only the second point lines up with this conclusion; the first is completely unrelated, and the last two are about language, not anything specific to technology.

The conclusion is a regurgitation of the AI’s brief summary of the novel’s themes at the beginning of the paper, not something drawn from the analysis at all. There is no thought process behind the analysis, either; the section is just a string of talking points, with no connections drawn between them.

These sorts of discontinuities differ greatly from case to case, and no single pattern is definite proof of GenAI use. Nor is it certain that a human would never make similar mistakes (although it’s bad writing in either case, which you don’t want even from a human).

But the more a manuscript reads like there was no thought process behind it, the more likely it is that there wasn’t. As you read, focus on how the pieces of the text fit together. What connections are made between ideas? How do ideas relate to each other? You might suddenly see AI’s fingerprints on the text. 

No one thing is a smoking gun, but when a text has enough inhuman-seeming discontinuities, it’s fair to suspect GenAI. 

Read Carefully

There’s no foolproof method of spotting generative AI. As I discussed in part one, the methods that seem easiest are flawed and often counterproductive. But if you pay attention, you can weed out at least some of the toasters from the humans, and you can spare yourself and your business some trouble.

Sean Brenner is a freelance writer specializing in scripts for video essays and similar forms of content. He writes scripts for YouTube videos covering Star Wars lore for Frontier Media and Star Trek for Trek Central. You can learn more about his work at Imagined Worlds Writing Services and find him on Bluesky.
