April 26, 2026 · Tips · 5 min read

Your work was flagged as AI. Here's what to do.

A playbook for handling a false positive without panicking.

You got tagged by a detector and now there's an email in your inbox asking if you have a few minutes to talk. Uh oh. What you do in the next 24 hours matters more than what the detector said.

Roughly 1 in 20 pieces of human writing will get flagged as AI. You're not the first person this has happened to this week.

Don't panic, and don't confess

The instinct, especially if you're more on the careful side, is to apologize. That's one of the worst things you could do. You didn't cheat, and the person who wrote the email knows the detector isn't the final word. Walking in pre-apologetic gives them less reason to look at the work and more reason to focus on you.

It's also not the time to argue the tech. You'll probably have that conversation eventually, but you should start by talking about your work, not about how language models work. The professors and TAs most receptive to the "detectors are flawed" argument are the ones already convinced. The ones least receptive will hear it as a deflection. Save it for after they've seen your evidence.

Your evidence is already there

Modern writing leaves a trail. You usually have more documentation than you realize, and most of it is timestamped.

  • Google Docs version history. File > Version history > See version history (or Cmd+Option+Shift+H). Every save is logged, down to the minute. If you wrote the paper in real time over multiple sessions, this view can settle the conversation. Revisions, time gaps, sentences rewritten three times; none of that exists in a paper one-shot by AI.
  • Microsoft Word recovery and tracked changes. Less granular than Docs, but if you have OneDrive sync turned on, version history works similarly. Locally saved Word files also leave AutoRecover artifacts and metadata.
  • Browser history and open tabs. Sources you pulled from, citations you tracked down, the articles you skimmed in the early morning hours. This helps show your research process.
  • Earlier drafts and notes. Things like your outlines and sticky notes. This is the messy version before you cleaned it up, and even if it's super rough, it shows process.

To be extra safe, screenshot what you have, when you have it, before the meeting. Some platforms quietly trim version history after a few weeks, and you do not want to learn that the hard way.

Run a second opinion

Different detectors disagree with each other constantly. If only one tool flagged you, that's worth knowing before you walk into a meeting. Run the same passage through two or three different detectors; GPTypo's free scan can be one of them. If most of them return "human," that's worth mentioning, mostly as context: a single flag isn't a unanimous result. Most professors don't realize how much variance there is between detection products, and showing them is more persuasive than telling them.

The kind of detector matters too. A simple heuristic scorer (the kind that checks for repeated phrases and rhythm patterns) and a trained neural model can disagree sharply on the same paragraph. If you can show a verified result from a detector with published accuracy figures on a public benchmark, that holds up better in a conversation than a screenshot from a free site with no clear methodology behind it. How to evaluate AI detector accuracy walks through what to look for there.

Have the conversation

When you sit down with the professor or TA, here's the structure that tends to land:

  1. Start with the work. Walk through your process: the assignment, what you read, where you started, how you revised. The more concrete you can be, the better. Mention the source you almost used and dropped. Mention the paragraph you rewrote three times. The texture of real writing is hard to fake on the spot.
  2. Then offer the evidence. Version history, drafts, sources.
  3. Acknowledge the detector without dismissing it. "I understand why the result looked concerning, and I want to walk through what I actually did" goes much further than "those tools are garbage," even if you privately believe the second one.
  4. Ask what they need. If they want a writing sample done in person, or a longer conversation about the topic, agree to it. The goal is to show, not to win an argument about AI detection.

If the conversation goes badly, ask whether the school has an appeals process or an academic integrity board. The official process usually requires the accuser to produce evidence beyond a single detector score, which is a much harder bar to clear than a TA acting on a hunch.

You don't have to defeat the detector. You have to show your work.

What to do next time

If your school uses an AI detector, run your work through one before you submit. This will tell you whether you have a problem before someone else does. The papers that get flagged usually share a few specific patterns that have nothing to do with whether they were generated:

  • Uniform sentence length. Detectors flag prose where every sentence runs about the same length, because AI tends to produce that. Varying rhythm fixes it.
  • Overused transition vocabulary. Words like "furthermore," "moreover," and "it is important to note" appear at much higher rates in AI output than in human writing. If your essay has six "furthermore"s in it, that signal is doing real work.
  • Flat formality. Real human writing has texture: a casual aside next to a precise definition, a sentence that runs on too long because you got excited about something.
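If you're curious what the first two signals look like in practice, they're simple enough to approximate yourself. Here's a rough Python sketch; the threshold and word list are illustrative assumptions, not any real detector's published cutoffs:

```python
import re
from statistics import mean, pstdev

# Illustrative list -- real detectors track far more phrases than this.
TRANSITIONS = ["furthermore", "moreover", "it is important to note"]

def rough_signals(text: str) -> dict:
    """Approximate two simple heuristics: sentence-length uniformity
    and overused transition vocabulary."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    # Coefficient of variation: a low value means every sentence runs
    # about the same length -- the "uniform rhythm" signal.
    cv = pstdev(lengths) / mean(lengths) if len(lengths) > 1 else 0.0
    lowered = text.lower()
    return {
        "sentences": len(sentences),
        "length_cv": round(cv, 2),
        "transition_hits": sum(lowered.count(t) for t in TRANSITIONS),
        "uniform_rhythm": cv < 0.3,  # illustrative threshold, not a standard
    }
```

Paste a paragraph of your own writing in and you'll usually see a length_cv well above anything AI-flavored prose produces. The point isn't to build your own detector; it's that these signals are shallow, which is exactly why they misfire on careful human writing.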

Knowing your score in advance gives you a chance to look at the parts that are tripping the detector and decide whether to adjust them. Same argument, same evidence, same voice, just clearer in the spots that need it.

Why this keeps happening

AI detectors are being deployed faster than they can be validated. Several major universities have quietly disabled their AI detection tools over the past year, citing reliability concerns. The technology is improving, but it's nowhere near accurate enough to be the sole basis for an academic integrity case, and most institutional policies haven't caught up.

If you've been flagged unfairly, you're caught in that gap. Document, verify, communicate, and when you can, check your own work. The detectors aren't going away, but a flag isn't a verdict, and you can respond to one rationally.

We built GPTypo because the false positive problem is real and the existing tools weren't giving writers a way to respond to it. Paste any passage into GPTypo and click Verify for the Pro-tier neural scan. You'll see a calibrated score alongside the specific sentences pulling it down, so the next time a detector flags you, you'll already know what it's reacting to.