ZeroGPT is an AI detection tool, aka an AI detector, that checks whether your content was written using an AI tool or by a human. It uses AI algorithms and NLP (natural language processing) to analyze your text and look for common AI patterns.
But ZeroGPT, like other AI detectors, isn’t always accurate. It often generates false positives, incorrectly labeling human-written content as AI-generated. And because many clients, webmasters, and institutions expect 100% AI-free content, writers and students end up having to humanize their text because of these false positives.
But bypassing ZeroGPT isn’t simple. It requires you to thoroughly humanize your content by finding specific AI patterns and rewriting the text to remove them.
But you need not worry! In this guide, we will cover practical ways to humanize content and bypass AI detection in ZeroGPT. If you follow these instructions properly, you will be able to reduce AI detection from 100% to 0%.
How To Bypass ZeroGPT Detection?
Before we jump into humanizing to bypass ZeroGPT, we first need to know the common AI patterns so that we actually know what to look for and how to humanize. Let’s take a look:
What are AI Patterns?
In writing, AI patterns refer to the predictable ways in which AI tools write content, ranging from word choice, phrases, and expressions to sentence structure and length. These patterns add up and give the writing a distinct, robotic style.
This robotic style is predictable: an AI detector can anticipate which words are likely to follow others, because AI tools generate text based on the statistically most probable words.
Here are some common and notable AI patterns we know very well:
Robotic Word Choice: AI tools are fond of certain words, like “underscore,” “landscape,” “realm,” and “innovation.” ZeroGPT and other AI detectors look for these words in your content, and the more of them there are, the higher your AI detection score climbs.
Robotic Phrases: Like words, some phrases can also get your content flagged. These are phrases and expressions that AI tools commonly use in their content, like “Whether… or…” and “It’s not just… but also…” etc. ZeroGPT will likely flag your text if these phrases are present in it.
Formal and Neutral Tone: A formal, neutral tone is another common AI pattern. Genuine human writing isn’t always formal and neutral; a lot of it is casual or even informal. AI content, by contrast, is by and large formal and neutral, which has become one of its distinguishing characteristics.
Uniform Sentence Structure: AI content is characterized by sentences of uniform structure. There’s little natural variation between the sentences’ structures. This subtle AI pattern is one of the reasons behind AI detection. If your content lacks variation in sentence structures, ZeroGPT will likely flag it as AI-written.
Uniform Text Length: Uniformity in text length is also a notable pattern in AI content. It involves sentences and paragraphs following a similar length, which makes the content feel less spontaneous, compared to human writing.
Humanizing Content to Bypass ZeroGPT AI Detection
Now that you know the common AI patterns, you can use humanizing techniques to target them. This process can take a lot of time, though, even if it’s just rewriting and editing.
Humanizing takes time because you have to spot specific AI patterns and rewrite the text from start to finish, checking the detection score after each change. Also, ZeroGPT is a stubborn tool: it can flag your content as AI-written even when only a small number of patterns are present. So you will have to be thorough if you want to reduce AI detection to 0%, or you can opt for an easier method and use Paraphraser.us’s Humanizer, which requires only copy-pasting (as discussed in the last technique).
Let’s take a look at some practical and effective humanizing techniques and a complete workflow (using a sample of AI text) in ZeroGPT to remove AI detection.
1. Edit Content in ZeroGPT
Although this isn’t a humanizing technique in itself, editing your content directly in ZeroGPT makes humanizing more practical. If ZeroGPT is the detector you’re trying to bypass, it’s best to edit the content within the tool’s interface instead of jumping between tabs.
Paste the Text: Make sure the tool you’re using is ZeroGPT’s AI detector and not another of its tools, like the plagiarism checker. Once you’ve selected the AI detector, paste the required text into it.
Detect AI: Click the “Detect Text” button to run the detection test.
Check the Detection: Scroll down to check the detection score and the parts of the text that are flagged. The tool highlights the flagged parts in yellow.
Humanize: Scroll up again and find the flagged parts in the input text and humanize them.
Repeat the Process: Repeat steps 2, 3, and 4 to:
Detect AI in the newly humanized input text,
Check AI detection in it, and
Humanize the remaining flagged parts of the text.
Repeat until the text is fully humanized and ZeroGPT’s detection score hits 0%. That said, you don’t have to reach exactly 0%, which can be rather difficult. Instead, aim for a low score of 10-15%, which is considered ideal in most cases; at that level, the text is generally regarded as human-written.
Tips:
Use the Find Bar: When editing, you will have to move between the input and output text. Pressing Ctrl/Cmd+F opens your browser’s find bar, which lets you locate exact text within the tool and jump between the input and output content instead of scrolling manually. For example, if you need to find a line that starts with “Today, there are…,” press Ctrl+F, type “today, there are,” and use the arrow buttons on the find bar to jump to the exact match.
Use Ctrl/Cmd+C/V: When using the find bar, an easier way to enter the text is to copy and paste it with Ctrl/Cmd+C and Ctrl/Cmd+V, where C copies and V pastes. Here’s how to do it on a Windows device: highlight the text you want to search for with the mouse/trackpad > press Ctrl+C to copy it > Ctrl+F to open the find bar > Ctrl+V to paste the text and find it.
Avoid Relying on Fluctuations: Sometimes, humanizing a sentence or two causes ZeroGPT’s score to jump or drop drastically. Don’t worry if this happens; these fluctuations aren’t reliable. For example, you might be tempted to stop when the score suddenly falls from 42% to 20% after a small tweak, but treat it as only a fluctuation: it will most likely climb back to 40-42% with the next change, and other AI detectors may still report the higher score. So keep humanizing and only stop when the detection score is consistently low.
2. Replace Common AI Words
Removing common AI words is the easiest humanizing technique, but it can be quite helpful in reducing AI detection. Common AI words, also called AI-isms, are words that AI chatbots tend to overuse in their content. Some examples:
Innovation
Landscape
Underscore
Leverage
Due to chatbots' overuse of these words, they have become strong characteristics of AI writing. A piece of content containing a bunch of common AI words will likely get flagged by ZeroGPT.
So, one of the first things you can do when humanizing content is removing words that are commonly used by AI tools and replacing them with simpler, more natural synonyms. In some cases, it will be better to just remove a word if it’s not strictly needed instead of replacing it with a synonym. Doing so will reduce ZeroGPT’s detection score.
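To see what this pass looks like mechanically, here’s a minimal Python sketch of the find-and-replace step. The word list and its replacements are illustrative assumptions, not an official ZeroGPT list, and a blind substitution can bend grammar, so a human read-through is still needed afterward.

```python
import re

# Illustrative AI-ism -> plainer-synonym map (an assumption, not an
# official list); extend it with words you notice getting flagged.
AI_WORD_REPLACEMENTS = {
    "innovation": "new ideas",
    "landscape": "field",
    "underscores": "highlights",
    "leverage": "use",
}

def replace_ai_words(text: str) -> str:
    """Swap common AI-isms for plainer synonyms, case-insensitively."""
    for word, plain in AI_WORD_REPLACEMENTS.items():
        text = re.sub(rf"\b{word}\b", plain, text, flags=re.IGNORECASE)
    return text

print(replace_ai_words("We leverage innovation across the landscape."))
# -> "We use new ideas across the field."
```

Note that the output above is grammatical only by luck; automated swaps often aren’t, which is exactly why the article recommends checking each change against the detector.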
How to fix it:
Simply go through the text you’ve pasted into ZeroGPT and find words that are commonly used by AI tools.
For example, the sample text I input into ZeroGPT has an AI detection score of 82.63%.
When getting rid of common AI terms, I removed the following four words from a part of the text:
excel
flow
consistent
clear
The detection score before removing these words is 82.63%, as shown:
After I removed the words and patched the sentences, the detection was reduced to 82.48%.
Does this mean that part of the content is now humanized? Only partly, and not yet fully. Part of the reduction happened simply because I cut a few words, so less of the text is there to be detected as AI-generated.
If we check the output text, those parts are still flagged as AI-written.
This is because removing those four words made very little difference on its own, not enough to clear the detection in those parts. However, these small changes matter for the bigger picture; remember, there are many AI patterns at play here. Removing a single pattern is not sufficient to make the content feel human-written. To fully reduce AI detection, you need to humanize the content thoroughly and remove the other patterns as well.
Removing common AI words throughout the text will bring the tool’s suspicion down, especially when you humanize it further, and will eventually remove detection.
3. Remove Common AI Phrases and Expressions
Perhaps the most important and essential humanizing technique is to remove common AI phrases and expressions.
AI tools are fond of certain expressions that give the content a robotic feel, because these expressions are overused in AI content. Below are some of these common AI phrases, along with some examples from the sample text.
The “Whether… or…” expression.
Example: “Whether you’re a blogger, marketer, student, or business owner, generating written content is now faster and more accessible than ever.”
The “While…,” expression.
Example: “While these tools can save time and offer impressive accuracy, the quality of their output depends on how they are used—and what expectations writers bring to the table.”
The “Can… but…” expression.
Example: “They can mimic creativity, but they cannot replicate the human ability to tell stories, create fresh perspectives, or express real feelings.”
The “Instead of…,” expression.
Example: “Instead of replacing writers, AI is becoming a powerful partner—one that helps produce better content faster, without sacrificing creativity or meaning.”
The “Despite…,” expression.
Example: “Despite their strengths, AI tools cannot fully replace the creativity and analytical thinking of a human writer.”
All these phrases add up throughout the content and lead to stronger AI detection. The tool grows confident that your content is AI-like, even though these are just expressions anybody might use. Still, we are left with no choice but to remove them.
If you look closely, many common AI expressions include commas, as in the “Despite…,” and “Instead of…,” patterns. This opens up another way to reduce AI detection: remove those commas and/or replace them with different punctuation when rewriting the AI phrases.
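Spotting these openers can be partially automated. Below is a rough Python sketch that flags sentences starting with a few of the expressions above; the regexes are assumptions drawn from this article’s examples, not an exhaustive list, so treat the hits as candidates to review, not verdicts.

```python
import re

# Regexes for a few AI-favorite openers (assumptions based on the
# examples in this article; far from exhaustive).
AI_PHRASE_PATTERNS = [
    r"^Whether you'?re\b.*\bor\b",  # "Whether you're X, Y, or Z, ..."
    r"^While\b[^,]+,",              # "While ..., ..."
    r"^Instead of\b[^,]+,",         # "Instead of ..., ..."
    r"^Despite\b[^,]+,",            # "Despite ..., ..."
]

def flag_ai_phrases(sentences):
    """Return the sentences that open with a common AI expression."""
    return [
        s for s in sentences
        if any(re.search(p, s.strip(), re.IGNORECASE) for p in AI_PHRASE_PATTERNS)
    ]

hits = flag_ai_phrases([
    "Despite their strengths, AI tools cannot replace human writers.",
    "Writers bring real experience to the page.",
])
# hits contains only the "Despite ..." sentence
```

Each flagged sentence still has to be rephrased by hand, as shown in the “While” example later in this section.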
How to fix it:
Go through the text and spot common AI phrases. Think of different ways to write the ideas those expressions convey and paraphrase the text. Here’s how I rewrote the “while” phrase I indicated above:
Original: “While these tools can save time and offer impressive accuracy, the quality of their output depends on how they are used—and what expectations writers bring to the table.”
Rephrased: “No doubt, these tools can save time and offer impressive accuracy. However, the quality of their output depends on how they are used—and what expectations writers bring to the table.”
Removing the AI phrase reduced AI detection. Here’s the detection before removing, which is 82.51%:
And here’s the detection score after removing (81.86%):
If we check the detection, ZeroGPT still flags the sentence. This is because the overall text is still robotic, due to the sheer number of patterns, so the tool remains confident that this part, although not very AI-like, must also be AI-written. It will stop flagging it once the surrounding text reads as human-written.
Likewise, continue to look for and rephrase these phrases and you’ll eventually start to see improvement.
4. Remove Unnecessary Modifiers
Modifiers are words—adverbs, adjectives—in grammar that modify the meaning of another word.
AI tools use a lot of modifiers. However, many of them are just filler words and aren’t strictly needed in the content. Not only do these tools use plenty of modifiers, they also have some favorites, including:
clear
incredibly
extremely
fully
truly
When these modifiers occur along with common AI words, phrases, and other patterns, they lead to a strong AI detection. That’s why removing these can lower the detection score in ZeroGPT.
How to fix it:
Removing modifiers is simple enough, just like removing common AI words: read through the text, find modifiers, and remove them. If you think a word is necessary, replace it with a synonym instead. Pay special attention to adverbs, words that modify verbs, like “quickly” and “truly” (often, words that end in “-ly”).
In my sample text, you can see two unnecessary modifiers in the first paragraph.
Completely, an adverb, modifying the verb “changed.” This is unnecessary, because it suffices to say that AI has changed how we produce content. You don’t necessarily have to add the word “completely.”
Impressive, an adjective, modifying the noun “accuracy.” It’s unnecessary because it suffices to say “AI is accurate” when comparing it to humans. You don’t necessarily have to say “AI is impressively accurate.”
Likewise, I can remove two more filler modifiers, “strongest” and “extremely,” from the next two paragraphs without compromising the meaning of the text.
Here’s the AI detection after removing those four terms. It fell from 81.86% to 81.73%.
This, again, is a measly decrease. Nothing to be happy about. But we haven’t really humanized the text properly, so the detection persists.
This, nonetheless, is one of the ways to reduce AI detection—by removing unnecessary modifiers.
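The adverb hunt described above can also be roughed out in code. The heuristic below is my own over-approximation (flag words from a small filler list, plus any longer word ending in “-ly”); it will surface false positives, so it generates a shortlist for human review rather than deleting anything automatically.

```python
import re

# Small, illustrative filler-modifier list (an assumption, not a list
# ZeroGPT publishes); combine it with an "-ly" heuristic for adverbs.
FILLER_MODIFIERS = {"completely", "incredibly", "extremely", "fully", "truly", "impressive"}

def find_filler_modifiers(text: str):
    """Return candidate filler modifiers worth reviewing by hand."""
    words = re.findall(r"[a-z]+", text.lower())
    return [w for w in words
            if w in FILLER_MODIFIERS or (w.endswith("ly") and len(w) > 4)]

print(find_filler_modifiers("AI has completely changed how quickly we write."))
# -> ['completely', 'quickly']
```

Whether each candidate actually is filler (like “completely” in the sample text) is a judgment call that stays with the writer.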
5. Avoid the Sets of Three
This one is a very interesting AI pattern. If you read a lot of AI text, you’ll soon notice that AI tools tend to list things in threes.
This AI pattern is quite obvious (yet not many people are talking about it).
In this pattern, AI chatbots list instances of something in sets of three, especially when naming qualities, attributes, and possibilities. It’s very common, too; you can see it in the second paragraph of my sample text.
“They are trained on massive datasets containing countless writing styles, formats, and examples.”
Notice how AI lists three possibilities: “... writing styles, formats, and examples.” If we’re talking about AI datasets containing different types of text material, the possibilities could well be more than just those three. But as I said, AI tools like to limit the list to three.
Here’s another example from the text.
“They can mimic creativity, but they cannot replicate the human ability to tell stories, create fresh perspectives, or express real feelings.”
Notice how AI again lists exactly three possibilities, “tell stories, create fresh perspectives, or express real feelings,” just like the previous example.
Now, one could argue that the argument simply required listing three possibilities, so all of them are on point. I agree that does happen sometimes, but it’s not always the case.
In fact, AI tools aren’t the only ones that do this; it’s a common phenomenon in human writing, which actually explains why AI tools are so fond of sets of three. AI tools are trained on massive datasets of human writing, so they learn to mimic its most prominent patterns, and this pattern is everywhere in that data. But why is it so common? Human writers (and now AI) love this pattern for various reasons:
Balanced List: Listing three items (qualities, attributes, possibilities, etc.) is neither too little nor too much. One item may not feel sufficient, two tends to feel incomplete, and four can feel like too much. For many writers, three is the sweet spot.
Rhythm: Lists of three have a natural rhythm, as in “past, present, future” and “life, liberty, happiness.”
Inclusion of “and” or “or”: The inclusion of an “and” or “or” in the sentence gives the text a smoother flow. Two items come in succession before the third one is separated with a conjunction—it gives a balanced and smooth feel when read.
Higher Probability: AI tools write the statistically most probable words. And because, in their training data, a human-made list is more likely to contain three items than two or four, the tool lists three items, without any real preference.
So even though this is common in human writing, AI tools use it mindlessly, a bit too much, sometimes when it’s not even needed. Either way, it’s a strong pattern to look out for.
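A quick heuristic can surface candidate triads for you to break up. The regex below is my own rough sketch: it looks for “X, Y, and/or Z” runs, will miss unusually punctuated lists, and will also fire on sub-lists of longer enumerations, so its hits are review candidates only.

```python
import re

# Heuristic for "X, Y, and/or Z" runs; an assumption of mine, not a
# rule ZeroGPT publishes.
TRIAD = re.compile(r"\b[\w ]+, [\w ]+, (?:and|or) [\w ]+")

def has_triad(sentence: str) -> bool:
    """True if the sentence contains a comma-list joined by and/or."""
    return bool(TRIAD.search(sentence))

print(has_triad("They learn writing styles, formats, and examples."))  # True
print(has_triad("They write quickly and accurately."))                 # False
```

Once a sentence is flagged, apply one of the rewrites in the “How to fix it” steps that follow, such as restructuring the list or trading it for a single concrete example.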
How to fix it:
Look out for lists with sets of three and try to break them. Add or remove items in the list or try to vary it in different ways. Here are three ways to break it:
Change the list structure.
For example, an AI list of three like this: “They can analyze data, generate ideas, and summarize text.” can be restructured like this: “They can analyze data and generate ideas. When needed, they can also help you condense lengthy documents.”
Switch from a list to an example.
For example, an AI list of three like this: “Writers rely on experience, intuition, and perspective.” can be turned into a specific example like this: “Writers rely on intuition, which develops after years of paying attention to people.” The example focuses on only one item.
Use cause-and-effect instead of lists.
Instead of a list like this: “The model learns from patterns, styles, and examples.” give readers a cause-and-effect explanation like this: “The model learns by absorbing patterns from millions of texts, which is why its writing often feels familiar.”
There are other ways to break these patterns. Tweak and experiment with the text to find out what works best for you.
Here’s how removing this pattern reduces AI detection. In the sample text, I altered the list in the following sentence by removing the item “or intuition”:
Original AI Sentence: “This happens because chatbots rely on patterns—not lived experience, personal insight, or intuition.”
Humanized Sentence: “This happens because chatbots rely on patterns—not lived experience, personal insight.”
The detection score before altering the list is 81.73%.
The detection score after altering the list becomes 80.96%.
6. Avoid Excessive Em Dashes
The em dash (—) is common in AI content. It’s a versatile punctuation mark that can make your writing clearer.
And although I, and many other writers who know their use, love using em dashes because of their versatility, they can increase AI detection scores.
If you’re wondering why AI tools like using em dashes, then the answer should be pretty clear by now; it is because of their training. These chatbots pick up on patterns most prominent in their training data. Like I said, writers love using em dashes for their versatility, and that’s perhaps the exact reason why AI tools do too.
Em dashes can be quite handy, and there’s nothing inherently wrong with using them. In fact, there’s nothing wrong with following any of the writing patterns AI tools follow; it’s just a style of writing, after all. But if you’re trying to avoid AI detection, you’ll have to avoid these patterns too.
How to fix it:
Find and replace most em dashes in the AI content with another suitable punctuation mark, such as a comma, semicolon, colon, or period, and/or a conjunction like “and” or “or” where possible. A straight swap won’t always work, so consider rephrasing the sentence to get rid of the dash.
But whatever you do, make sure the punctuation and grammar remain accurate, because punctuation is delicate: a simple tweak can break sentences and cause errors. Errors can also reduce detection, but we don’t want to bypass AI detection by introducing grammatical mistakes.
Plus, occasional dashes are okay, as long as they don’t cause detection. In fact, removing some dashes might do the opposite and increase detection, in which case they’re totally fine unremoved.
Let’s take a look at the sample text. I removed two em dashes from the following sentence and replaced them with commas and the phrase “such as”:
Original Sentence: “AI can handle the heavy lifting—drafting, organizing, rephrasing, or generating ideas—while you, the writer, bring originality, personal voice, and emotional connection.”
Humanized Sentence: “AI can handle the heavy lifting, such as drafting, organizing, rephrasing, or generating ideas, while you, the writer, bring originality, personal voice, and emotional connection.”
Before humanizing, the detection score was 80.96%. After humanizing, it became 80.88%.
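If you want a starting point before the manual pass, a naive dash-softening helper like the one below (my own sketch, not a tool the article uses) swaps each em dash for a comma. It handles parenthetical dashes like the example above, but it will mangle other uses, so every changed sentence still needs a read-through.

```python
def soften_em_dashes(text: str) -> str:
    """Replace em dashes with commas, tidying the surrounding spaces."""
    return text.replace(" — ", ", ").replace("—", ", ")

out = soften_em_dashes(
    "AI can handle the heavy lifting—drafting and organizing—while you bring the voice."
)
# out == "AI can handle the heavy lifting, drafting and organizing, while you bring the voice."
```

For rewrites like the “such as” example above, you would still edit the sentence by hand; this helper only covers the mechanical comma swap.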
7. Use Paraphraser.us
Paraphraser.us is an AI-powered paraphrasing tool. It features an advanced AI text humanizer that rewrites your content, removing AI patterns to humanize it and reduce its AI detection score. Not only is humanizing with this tool quick, it’s also effective against most AI detectors, including ZeroGPT.
Plus, Paraphraser.us humanizes text without distorting its meaning or context, so you don’t have to worry about the meaning being compromised along the way.
How to use it:
Follow these three steps to use Paraphraser.us:
Paste Your Text: On Paraphraser.us’s AI humanizer, paste your required text into the tool’s input field.
Click “Humanize AI”: Click the “Humanize AI” button at the bottom to start humanizing.
Copy the Output Text: Lastly, click the “Copy” button to copy the output text displayed in the output field on the right side of the screen. This is the humanized version of the text.
After humanizing using Paraphraser.us, the detection score was reduced to 0% from 80.88%, which is a significant drop compared to manual humanizing.
Conclusion
Bypassing ZeroGPT can be confusing and difficult. This article shared 7 practical techniques for humanizing AI-generated text to bypass ZeroGPT’s AI detection: editing content in ZeroGPT’s interface, replacing common AI words, removing common AI phrases, removing unnecessary modifiers, avoiding the sets-of-three pattern, avoiding excessive em dashes, and using Paraphraser.us for convenient and effective humanizing.