Lightweight authenticity checklist (fast scan vs deep dive)

I’m trying to standardize a *lightweight* authenticity review for a small team. We keep getting content that “feels” like it came out of ChatGPT, but we don’t have time to treat every piece like a forensic investigation.

What I want is a two-speed checklist:

- **Fast pass (2–3 minutes):** quick flags + “safe to publish / needs human rewrite / escalate.”
- **Thorough pass (20–30 minutes):** only when the stakes are higher (legal, reputation, sensitive topics).

I’m not looking for “use a detector” as the whole answer. Detectors are part of it, but I’d rather combine: metadata checks, writing pattern checks (stylometry-ish, but practical), and process checks (e.g., “ask for drafts,” “ask for sources,” etc.).

Here’s the text that triggered this whole thing:

“Across industries, teams are discovering that authenticity isn’t just a buzzword—it’s the foundation of trust. By implementing consistent review frameworks, organizations can ensure that every message resonates with clarity and credibility.”

It’s not *wrong*. It’s just… frictionless and generic.

So: if you were building a fast vs thorough checklist for **how to check if something was written by AI**, what goes on each tier? And what’s your “escalation rule” when you’re unsure?

Fast pass for me is basically: *purpose + specificity + provenance*.

- If it’s making claims, I ask: “What’s the concrete example?” If none, it’s a rewrite.
- If it references facts, I ask: “Where did this come from?” If they can’t answer quickly, escalate.
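
To make that mechanical, here’s a minimal triage sketch in Python. The function name and inputs are mine, and all four booleans are just the reviewer’s 2-minute answers to the two questions above; nothing is automated:

```python
# Minimal fast-pass triage. Every input is a human judgment;
# nothing here inspects the text itself.
def fast_pass(makes_claims: bool, has_concrete_example: bool,
              cites_facts: bool, source_named_quickly: bool) -> str:
    if makes_claims and not has_concrete_example:
        return "rewrite"     # claim without a concrete example
    if cites_facts and not source_named_quickly:
        return "escalate"    # fact without quick provenance
    return "safe to publish"

# A claim-heavy piece with no concrete example:
print(fast_pass(True, False, False, True))  # -> rewrite
```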

Deep dive: request drafting history (even screenshots of revisions). Humans leave messes.

That snippet screams “pleasant fog.” Not a crime, but it’s a smell.

My fast pass is a vibe check + one hard test: ask the writer to add a weird constraint in 2 minutes (“include one surprising detail from your own experience” or “make the second sentence disagree with the first”).
If the revision comes back still smooth/neutral, I escalate.

Thorough pass: look for repetition of rhetorical moves (setup → broad claim → safe advice). It’s a pattern.
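
A crude, stylometry-ish proxy for that pattern, if you want numbers next to the vibe. The opener list below is my own illustration (seeded from the snippet above), not a vetted lexicon, and uniform sentence lengths are a smell, not a verdict:

```python
import re
from statistics import pstdev

# Stock "setup" openers -- purely illustrative; grow this from your own corpus.
STOCK_OPENERS = ("across industries", "in today's", "by implementing",
                 "organizations can", "it's important to")

def rhetorical_smell(text: str) -> dict:
    text = text.replace("\u2019", "'")  # normalize curly apostrophes
    sentences = [s.strip() for s in re.split(r"[.!?]+\s+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return {
        "sentences": len(sentences),
        "stock_openers": sum(s.lower().startswith(STOCK_OPENERS)
                             for s in sentences),
        # Smooth/neutral prose tends toward uniform sentence lengths.
        "length_spread": round(pstdev(lengths), 1) if lengths else 0.0,
    }
```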

I like making the fast pass **binary**:

  1. Is there any *verifiable anchor*? (date, place, named method, actual numbers)
  2. Is there any *idiosyncrasy*? (odd phrasing, non-optimal sentence, specific preference)

If both are “no,” treat it as “unknown” and require edits.
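
Question 1 can be half-automated; question 2 really can’t, so in this sketch it stays a human input. The regex probes are deliberately rough illustrations, a screen rather than proof:

```python
import re

# Rough probes for a "verifiable anchor": numbers/dates, or a
# capitalized two-word phrase as a crude named-thing proxy.
ANCHOR_PROBES = [
    re.compile(r"\b\d[\d.,]*"),                # actual numbers, incl. years
    re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+"),  # "named method / place" proxy
]

def binary_fast_pass(text: str, reviewer_saw_idiosyncrasy: bool) -> str:
    has_anchor = any(p.search(text) for p in ANCHOR_PROBES)
    if not has_anchor and not reviewer_saw_idiosyncrasy:
        return "unknown -> require edits"
    return "passes the fast screen"
```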

Deep pass: metadata + revision trail + prompt transparency (if they used ChatGPT, fine—just disclose internally).
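
For the metadata leg, if drafts arrive as .docx, a quick peek is cheap. This assumes python-docx is installed (`pip install python-docx`); clean metadata proves nothing by itself, it’s one signal among several:

```python
# Pull authorship/revision metadata from a .docx draft.
from docx import Document

def docx_provenance(path: str) -> dict:
    props = Document(path).core_properties
    return {
        "author": props.author,
        "last_modified_by": props.last_modified_by,
        "created": props.created,
        "modified": props.modified,
        "revision": props.revision,  # Word bumps this on save; 1 is suspiciously tidy
    }
```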

Hot take: “fast vs thorough” should depend on **distribution**, not just topic. A low-stakes blog post that ranks well can still become high-stakes when it spreads.

My escalation rule: if it’s “is this AI-generated text” territory *and* it’s going to be quoted/copied, I do the deep dive.
Checklist item I’d add: “Can we defend this sentence in public?” If not, rewrite.
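
That rule is simple enough to write down so nobody relitigates it per piece. Both inputs are human judgments; the “distribution, not just topic” point lives in the second flag:

```python
def needs_deep_dive(authorship_unclear: bool, quoted_or_copied: bool) -> bool:
    # Escalate only when both hold; either one alone stays in the fast lane.
    return authorship_unclear and quoted_or_copied
```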

Also: keep a tiny internal label like “AI-assisted” vs “human-authored” even if you don’t publish it.
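
If you keep that label, here’s a sketch of a minimal internal record. Field names are illustrative, not a proposed standard:

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    HUMAN_AUTHORED = "human-authored"
    AI_ASSISTED = "ai-assisted"
    UNKNOWN = "unknown"            # both binary fast-pass answers were "no"

@dataclass
class ReviewRecord:
    slug: str                      # internal identifier for the piece
    origin: Origin
    fast_pass_result: str          # "safe to publish" / "rewrite" / "escalate"
    deep_dive_done: bool = False
```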