
13 Ways To Detect AI-Written Content

You can detect AI-written content either with AI detection tools or by manually reviewing the text for the 12 common signs of AI authorship.

A combination of the two often works best.

While no method is completely accurate, this blended approach gives you the best chance of correctly identifying AI writing.

What you will learn

  • What AI content detection is and how it works
  • Which tools you can use to detect AI-written content
  • The 12 obvious signs of AI-generated text
  • Whether or not you should hide that a piece of content was written with the help of AI 

What is AI content detection?

AI content detection is the process of determining whether a text was generated by AI or written by a human. 

AI-generated articles, images and videos have become the internet’s new normal.

AI tools are being used extensively to speed up and improve creative processes like writing articles and making videos.

And while AI can significantly speed up the process of content creation, it needs human oversight to maintain quality.

Why is AI content detection important?

Identifying AI-generated content is important because some contexts depend on the accuracy and first-hand experience that human writing provides. This is particularly significant in sectors like healthcare and finance, where information accuracy is essential.

In academia, AI content detection helps uphold standards of originality and validity, making sure that contributions are genuinely reflective of human effort and intellect.

This matters for maintaining the value of academic credentials and professional content, which are expected to demonstrate knowledge, critical thought, and creativity.

Academic institutions are still formulating AI content policies.

In a study by Tyton Partners, an advisory firm focused on the education sector, most instructors said they encourage using AI to ideate but draw the line at AI text generation.

AI content recognition is also very important for fighting false information in the news. AI can be used to fabricate news stories, pictures, or videos that look real but aren't.

Artificial intelligence has enabled the automation of fake news, spreading false information about elections, wars, and natural disasters.

NewsGuard, an organization that tracks false information, says the number of websites hosting AI-made fake news stories has grown by more than 1,000%.

Detecting this kind of material helps protect the accuracy and reliability of information in the public sphere.

AI-produced content may also undermine the perceived quality and authenticity of a commissioned piece of content that you are paying for.

We generally expect more effort and personal attention when we pay for work.

Handmade goods command a premium across all walks of life. It may be even more important for you to detect AI content if you are paying human writers.

You are ultimately paying for humanized content that will appeal to and convert humans. You want to make sure your content writer is not merely churning out low-value text using AI.

Google's stance on AI content

Google has nothing against AI-generated content, as long as it is not used to manipulate their algorithms for clicks and rankings.

However, pure AI content is highly unlikely to meet Google's quality guidelines.

The misuse of AI content generators, without supplementing them with real value, insights, or experience, can lead to misinformation, which could have significant consequences.

In fact, the latest core algorithm updates have heavily penalized websites that misused AI tools to generate hundreds of pages overnight. 

Google has indicated that AI-generated content should demonstrate the attributes of E-E-A-T (experience, expertise, authoritativeness, and trustworthiness) to ensure it meets their standards.

Can AI tools detect AI-generated content?

AI content detection tools can detect AI-generated content, but they're not always reliable and can often mistake human-written content for AI. They use machine learning and natural language processing to analyze the style, grammar and tone of a text.

They are trained to uncover the patterns and structures used by AI writing tools. 

However, they are not 100% accurate and will often flag false positives, confusing human-written content for AI content. 

Here are a few reasons why AI detectors struggle to tell AI content from human writing:

  • Human writing can sometimes mimic the patterns that AI detectors are trained to spot, leading to false positives.
  • AI writers use repetitive language and ideas. If your content repeats itself in the same way, it may be flagged as AI text.
  • If you use unconventional phrasing and grammar, the AI detection tool might mistake that for a sign of AI output. 
  • The complexity of language and the variety of writing styles can confuse AI detectors, causing them to incorrectly flag content.

As AI writing tools evolve, detection tools will find it even harder to distinguish AI-generated text from human writing.

Even OpenAI admitted roadblocks in building its AI content classifier and has paused its availability.

Don’t rely solely on AI detectors.

While these tools can provide direction, use your own judgment to identify the true nature of the content.

13 ways to detect AI-written content

We suggest using a combination of manual checks and a paid or free AI detector. This dual approach will give you the best chance of spotting AI writing. 

You’ll often encounter text that is a combination of human and AI effort.

Pay attention to sudden shifts in writing quality or tone of voice. Changes in the complexity of vocabulary and grammar can indicate the section was generated with an AI tool. 

Here are 13 ways to detect AI-generated text.

1. Repetitive writing

AI writing is characterized by repetitive phrases and ideas.

AI models lack the human ability to recognize and avoid redundancy, so they won’t realize they are spinning the same tale over and over again. 

To spot AI-generated content, look for the repeated use of words and phrases. They are often variations on the text's main topic or keyword.
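If you want to make this check less tedious, a short script can surface the most repeated phrases for you. Below is a minimal sketch in Python using only the standard library; the draft.txt filename is just a placeholder for whatever text you are checking.

import re
from collections import Counter

def top_repeats(text, n=3, limit=10):
    """Count the most frequent n-word phrases in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    phrases = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return Counter(phrases).most_common(limit)

# Placeholder file name; swap in the draft you want to inspect.
with open("draft.txt", encoding="utf-8") as f:
    article = f.read()

for phrase, count in top_repeats(article):
    if count > 2:  # phrases repeated more than twice deserve a closer look
        print(f"{count}x  {phrase}")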

OpenAI admits to the verbose nature of large language models on its own blog.

The model is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI. These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization issues.

- OpenAI

Take the example of this snippet generated with ChatGPT 4. Although well written, it repeats itself several times in different ways.

A good human writer would be able to convey the same idea more concisely.

2. Formulaic sentence structures

AI language models rely heavily on common phrases and idioms. They use them more frequently than human writers, sometimes excessively and inappropriately.

A good way to identify content generated by AI is to look for a lack of variation in sentence structure.

  • Are phrases used to make the text sound natural but don’t add any value?
  • Is the tone overly formal and monotonous? 

The text was most likely written by AI if it is full of sentences that are technically correct but feel stiff or rigid.
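If you want a rough, quantitative first pass before reading closely, you can measure how much sentence length and sentence openers vary. This is only a heuristic sketch: the thresholds below are arbitrary assumptions, low variation is a hint rather than proof, and it assumes the text is already loaded into a string called article, as in the earlier snippet.

import re
import statistics

def structure_report(text):
    """Summarize variation in sentence length and sentence openers."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    if len(sentences) < 2:
        return {}
    lengths = [len(s.split()) for s in sentences]
    openers = [s.split()[0].lower() for s in sentences]
    return {
        "sentences": len(sentences),
        "length_stdev": statistics.stdev(lengths),
        "opener_variety": len(set(openers)) / len(openers),
    }

report = structure_report(article)
if report and (report["length_stdev"] < 5 or report["opener_variety"] < 0.6):
    print("Very uniform structure - worth a careful manual read:", report)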

Look for words and structures that native speakers would rarely use.

For example, AI has the tendency to use sentences that open with “By…” for their closing remarks. While they would pass a grammar check, a human would rarely phrase them quite like this.

And here's another example.

3. Excessive use of AI-typical words

One of the surest signs of AI writing is the frequent use of certain words.

They include, but are by no means limited to: 

  • Crucial
  • Delve
  • Dive
  • Tapestry 
  • Furthermore
  • Consequently
  • In today's [adjective] world of…
  • Not only… but also…

There is even a Reddit thread dedicated to the collection of overused AI words. You can use it to remove the overuse of AI words when editing AI writing. 
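You can also turn a list like that into a quick automated checklist. Here is a minimal sketch; the seed list simply reuses the words above and is meant to be extended with your own findings, and article is assumed to hold the text you are checking.

import re

# Seed list taken from the words above; extend it as you find more.
AI_TELLS = ["crucial", "delve", "dive", "tapestry", "furthermore",
            "consequently", "in today's", "not only"]

def flag_ai_tells(text):
    """Return how often each telltale word or phrase appears in the text."""
    hits = {}
    for tell in AI_TELLS:
        count = len(re.findall(r"\b" + re.escape(tell) + r"\b", text.lower()))
        if count:
            hits[tell] = count
    return hits

print(flag_ai_tells(article))  # hypothetical output, e.g. {'delve': 4, 'furthermore': 6}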

Check out this flowery and over-the-top description written by AI.

The whole paragraph is unusable, but take a look at this excerpt:

Held within are not only headphones but an invitation to step into a new realm of audio excellence. Crafted with precision, the aluminum ear cups gleam, offered in hues such as space gray and sky blue, while the robust stainless steel headband declares both resilience and elegance.

- Excessive AI phrasing

Can you imagine a human inviting you into a new realm of audio excellence? 

4. Inaccurate facts and claims

AI has the unfortunate and potentially dangerous tendency to confidently present information that may or may not be correct without providing an accurate source.

This can result in misleading and outright incorrect statements. 

If you spot a questionable claim that does not hold up to a simple search, the text was probably written by AI.

Cross-reference stated facts with reputable sources in order to detect potential AI authorship. This is especially important in the healthcare and finance niches, where a false claim can have serious or even irreparable consequences.

For example, this AI writer claims that the best room temperature for sleep is 65 to 68 degrees Fahrenheit. 

However, the recommended temperature is actually 60 to 68 degrees Fahrenheit, as this article from WebMD, a reputable medical resource, shows.

While this specific claim might not endanger anyone’s life, a similar hiccup could. 

AI has also been known to confuse the names of products and list inaccurate descriptions and attributes.

For example, when we tested AI writers, one of them called the Rabbit R1 a scooter.

While there are numerous scooter models with R1 in their names, none of them are called the Rabbit.

The Rabbit R1 is actually an AI-powered handheld gadget. 

5. Monotonous tone of voice

AI tools noticeably lack the traits you would expect from human content.

  • They don’t use informal language.
  • They don’t use colloquialisms or slang.
  • They don’t have a unique tone of voice. 

Look for a lack of personality and voice when identifying AI-generated content.

Human writers can’t help but show their humanity. We’ll make jokes and insert pop culture references. Decent human writers will not sound monotonous and dull. 

Check out this AI response to a simple, straightforward question.

A human writer wouldn’t present this same information as formally or as stiffly. They wouldn’t reach for terms like “emotionally resonant”, “adaptability” or “vocal prowess”. 

6. Generic explanations without details

AI writers tend to provide vague descriptions that lack specifics. Human writers add relevant details and actionable advice, especially if they have first-hand experience with the topic. 

As we saw earlier, AI writers use a lot of filler words that add little to no value.

They make the text sound complex and well-written, but it’s actually just a collection of weighty words, not a meaningful article. 

Here’s what a popular AI writer has to say about bedtime rituals. 

  • Did you learn anything significant?
  • Do you know what deep breathing exercises look like?
  • Do you know how and when to stretch before bed?

This kind of content places the burden of research on the reader and provides no valuable advice. 

A text that an AI wrote will lack concrete examples, relatable anecdotes and supporting arguments. If there is a lot of bloat but a noticeable lack of detail, you are not looking at a human-written text. 

Note how thin the content of this article about Apple’s car marketing strategy is.

A human writer would state which cities the product would roll out in.

Human content would list specific details and platforms for the campaign. In contrast, this AI writer provides a very simple overview.

But even more significantly, it doesn't recognize that the Apple car project has been paused.

7. Unmet search intent

Users searching the web are looking for specific answers to their questions and challenges. How closely your page matches their query determines how well it performs in search results.

But AI writers aren’t always able to detect the nuances of search queries.

If a piece of content does not completely match the search intent behind a topic, it may have been generated with AI. 

Check if a text aligns with what readers would realistically expect to see, based on a post’s title and their original search query.

  • Does it provide answers to all the most likely questions?
  • Does it address the most common and obvious pain points?
  • Does the reader need to go elsewhere to find more relevant information? 

Check out what an AI writer has come up with on the subject of “best vacuum cleaners.”

Users looking for a page on "best vacuum cleaners" are likely looking to compare models before moving on to a purchase.

And some of this information would indeed help you pick a vacuum cleaner.

However, there isn’t a single product review or recommendation, which is what your readers are actually looking for. 

AI tends to produce generic responses that sometimes skirt around the reader’s needs and queries.

Poor AI content generators may not be capable of understanding human search behaviors and the intentions behind searches. 

8. Lack of subject matter expertise

AI models are great at predicting what the next word in a text should be. However, they don’t understand what they are “writing” about.

They have no way to grasp meaning.

Therefore, even though they draw on vast training data sets, their subject matter expertise is limited to what that data contains.

Complex topics are covered superficially because AI text lacks the depth and detail that come from first-hand expertise.

AI is unable to display the depth and breadth of human understanding on a given subject.

Look at this example from a website using AI content.

You don’t get the feeling that the author has ever made a cup of coffee.

An experienced barista would tell you the right temperature for espresso vs. latte. They would tell you how much coffee goes into each drink. An AI model is not trained to do that. 

If you spot generic information that can easily be found via a simple web search and no evidence of deep research and understanding, the text might be AI-authored. 

9. Outdated content

AI content may rely on outdated sources and fail to take into account the most recent information and data available. 

Check the dates of all referenced studies, statistics and news.

AI tools do not usually prioritize the most recent sources, as their training data has a cutoff date and is not updated in real time.
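One quick way to triage a long draft for stale references is to pull out every year it mentions and flag the older ones for fact-checking. A rough sketch, with the three-year cutoff as an arbitrary assumption and article again standing in for the text under review:

import re
from datetime import date

def stale_years(text, max_age=3):
    """Find four-digit years mentioned in the text that may be outdated."""
    current = date.today().year
    years = {int(y) for y in re.findall(r"\b(?:19|20)\d{2}\b", text)}
    return sorted(y for y in years if y < current - max_age)

print(stale_years(article))  # years worth double-checking against newer sources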

Evaluate the content for relevance in today’s context.

Is the information still applicable? Have there been more recent discoveries? 

Here’s what a generic AI writer says about the release date of the Apple Car. 

The launch was originally pushed back to 2026 before Apple dropped the project altogether.

This false information shows that the text was written by an AI tool that didn't have access to the most recent developments.

10. Absence of personal experience

AI content lacks the ability to establish a personal connection and will rarely convey empathy or humor, like human writing can. 

Human writers, on the other hand, tend to pose reflective and insightful questions in their writing. They will share personal insights and stories. 

When trying to spot AI content, scan the text for personal experiences or anecdotes.

Look for emotions and personal opinions that AI will struggle to express convincingly. 

Think about it. AI writers rely on their training data, not first-hand experience, to provide information. Any generated content will either paraphrase someone else's experience or provide a generic overview.

You can easily spot the lack of experience in either case.

This article from a popular AI writer includes a section about unboxing the Apple Airpods Max.

You never get a sense that the author has laid their hands on them. There is a complete lack of excitement and curiosity about the product. 

Also, note the heavy use of AI phrases.

"You're hit with a rush of excitement as the elegant design and superior craftsmanship become apparent, promising an auditory adventure"

– Weak AI writer using extreme AI language

11. Unconvincing storytelling

AI content generators can struggle with creating a coherent narrative in a piece of content. Their storytelling efforts may lack logical progression, and the end result can be unconvincing and difficult to follow. 

Look for abrupt changes in topics or jumping from one subject to another.

  • Do conclusions naturally follow the preceding sentences?
  • Can you easily identify the source and flow of an argument? 

Examine the overall structure of a text for signs of disjointed thought organization. Human content tends to have a natural progression that is easy to understand. AI writing often lacks it. 

Check out this example from another popular AI writer. 

Notice the attempt at storytelling that ends up sounding forced and unnatural. This entire passage is one idea retold in different ways. 

While AI can generate content based on existing data, creativity often comes from thinking beyond expected narratives.

AI lacks this intrinsic human quality, making it difficult for AI writers to fully understand and convey the emotional depth and subtleties that make stories more interesting.

AI writers may also struggle with cultural nuances, social cues, or historical contexts that give stories their depth and meaning.

12. Difficulty with sarcasm

Sarcasm is a complex form of communication that relies on tone, context, and an understanding of social rules and expectations.

AI struggles to interpret and use humor and sarcasm correctly.

It will often use them incorrectly and write disjointed and odd sentences. 

Even though natural language processing has come a long way, AI is still not very good at detecting tone, especially when it differs from what the words actually mean.

AI writers have trouble distinguishing between sarcastic comments and sincere ones.

Examine the text for failed attempts at wit and banter.

  • Does the sarcasm fall flat?
  • Is the timing off?
  • Does it seem forced? 

Human writing is more subtle and humor tends to be context-appropriate.

We use our intuition to figure out how to use unspoken cues and hidden meanings in human interaction. AI, on the other hand, works with algorithms that might not fully understand sarcasm.

Understanding sarcasm requires a degree of emotional intelligence to appreciate the irony or humor the speaker intends. AI lacks this emotional intuition, which makes it hard for these tools to grasp why something might be said sarcastically.

Here's an example of an article that the Guardian published using ChatGPT. Notice the AI's self-awareness that it does not possess feelings.

13. Use AI detector tools

There are numerous AI writing detectors designed to help you identify AI-written content. Some of them will also have a built-in plagiarism checker that will help you identify plagiarized content. 

While these tools can be quite helpful, they are not infallible.

They can and often will falsely flag human content. They are best used as a supplementary verification method. 

As a reminder, even OpenAI’s own text classifier is not entirely reliable. 

Tools like GPTZero, Copyleaks, Turnitin, and Grammarly can be used to identify AI content.

They analyze writing style and tone, as well as inconsistencies in writing that can signal AI-authored content. 
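Under the hood, many of these detectors rely on statistical signals such as perplexity, a measure of how predictable the wording is to a language model (GPTZero, for instance, has described scoring text on perplexity and burstiness). The exact methods of the tools above are proprietary, so the sketch below only illustrates the general idea using the open-source transformers library and GPT-2; it assumes transformers and torch are installed, and the threshold is an arbitrary assumption rather than a calibrated cutoff.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text):
    """Lower perplexity means more predictable text - one weak hint of AI authorship."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

score = perplexity(article)  # 'article' holds the text you are checking
print(f"Perplexity: {score:.1f}")
if score < 30:  # arbitrary threshold; treat it as a hint, never as proof
    print("Unusually predictable text - run it through a dedicated detector too.")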

None of them are flawless but they can serve as a good starting point in discerning whether a piece of content was created by a human brain or an artificial intelligence tool. 

Should you hide AI-generated text?

No, you shouldn’t hide AI-generated writing because there is no penalty to incur as long as you don’t use AI-generated text to manipulate search engine rankings. 

Google has nothing against you using an AI writing tool. You can safely use AI content, including videos, images and text, as long as it meets their quality guidelines. 

As explained, however content is produced, those seeking success in Google Search should be looking to produce original, high-quality, people-first content demonstrating qualities E-E-A-T.

- Google

In practice, this means you need to humanize AI content.

Rewrite overused phrases, add personal examples and experiences, edit the tone of voice and add relevant proof of authority and credibility. 

You don't need to worry about anything as long as your content is interesting, practical, and beneficial to real people.

Note that Google doesn't require you to add a disclaimer for AI content except in some cases.

If you are relying on AI writers to generate news or content on money and healthcare, you may want to clearly highlight which articles and visuals have been created with the help of AI for the sake of transparency.

For informational topics that don't require obvious evidence of human experience, a competent AI writer will be able to pass its output off as human-written.

This won't happen 100% of the time, but a proficient AI writing tool can mimic human writing for most topics.

Consider testing Surfer AI if you want an AI writing tool that can pass AI detectors by avoiding repetitive phrases, using varied sentence structures, and simulating a natural tone to conceal its non-human origins.

You'll be able to customize articles based on a blog post format, tone of voice and organic competitors.

But if you're looking to pass AI content detectors specifically, we have an easy button that acts as an AI humanizer.

Here's an example of an article we ran through Originality's AI content detector.

Surfer AI also updates information and generates text that aligns with user intent, demonstrates subject matter expertise, and even includes plausible personal anecdotes, thereby passing as convincingly human-authored content.

Key takeaways

  • AI content detection determines if a text was written by an AI or a human. 
  • AI detection tools are not 100% accurate and can often confuse human text for AI content.
  • Understanding the nuances of AI-generated content can protect businesses and consumers from misinformation and poor-quality information.
  • When trying to detect AI-generated text, look for the repetitive and extensive use of certain words and phrases, formulaic sentence structures and a monotonous, formal tone of voice. 
  • AI writers will not mention first-hand experiences but rather provide generic answers and information. They won’t sound like experts on the subject. 
  • AI tools can also provide inaccurate and outdated information, so you will need to fact-check their work. 
  • As long as you don’t use them to game search engine algorithms, you can safely recruit AI tools to help you automate some of the writing process.  
  • Enhancing AI-generated content with human insight and expertise can result in a more engaging and trustworthy final product.

Conclusion

In order to identify AI-generated content, rely on a combination of your own judgment and an AI content detector. 

Read the sentences and passages AI content detectors have flagged, and use our article to decide who the likely author is. 

Don’t be afraid to use AI in writing; just don’t forget to add a human touch.
