Grammarly edits flagged as AI - how to handle detector scores?

I’m a teacher and I’m running into something that’s getting messy.

A student turned in an essay that reads “mostly like them,” but it’s cleaner than usual. A couple of sentences feel slightly… polished in a way I can’t describe. They admitted using Grammarly for grammar fixes. Nothing else (they say).

Here’s the problem: two different AI detector tools I tried flagged it high, and now I’m stuck in the “does Grammarly get flagged as AI” spiral. I don’t want to accuse someone on vibes. But I also can’t ignore the flag.

“While social media offers unprecedented connectivity, it also reshapes attention into a scarce resource. The constant flow of notifications encourages shallow engagement, which can erode deep work and reflective thinking. However, when used deliberately, these platforms can foster community, amplify marginalized voices, and support collaborative learning.”

If you saw something like that in a normal high school paper, would you treat the AI score as meaningful? Or is this just about AI detection accuracy being shaky once a tool has “smoothed” the writing?

I’m looking for practical ways to handle it without turning class into a courtroom. How are people verifying “is this AI-written” when the student might just be using editing assistance?

I’d treat the detector score as a signal, not proof.
Especially if the student used a tool that rewrites phrasing instead of just fixing commas.

If you can, ask for process evidence: outlines, drafts, notes, or even a short in-class paragraph on the same prompt. The gap is usually clearer than any percentage.

The snippet you posted is “generic polished,” which is exactly what heavy editing creates.

One thing: some students run a paragraph through grammar help repeatedly. It converges toward the same safe sentence structures. That can look AI-ish even when it’s not.

I’d talk policy first. “Editing is fine, generating is not.” Then ask them to explain 2–3 claims from the essay out loud.

Student perspective: detectors scare everyone, even honest people.

If Grammarly changes the wording, it can basically “standardize” the tone. That’s what my teachers react to.
Maybe let them redo one section live, same topic, no tools, 20 minutes. Compare it. Simple.

If you want something a bit more objective, look for consistency across the doc.

AI-ish flags often show up when: vocab level jumps, transitions are too uniformly tidy, and examples feel “floaty” (no lived detail). But editing can also remove the messy human parts, so… yeah.
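If you want to make “consistency” slightly more concrete, here’s a rough sketch in Python (standard library only) that prints average sentence length and vocabulary variety per paragraph so you can eyeball big jumps. The file name and the two metrics are my own assumptions for illustration; this is not a detector and won’t prove anything on its own.

import re

def paragraph_stats(text):
    # Split on blank lines; treat each chunk as one paragraph.
    stats = []
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    for i, para in enumerate(paragraphs, start=1):
        sentences = [s for s in re.split(r"[.!?]+", para) if s.strip()]
        words = re.findall(r"[A-Za-z']+", para.lower())
        if not sentences or not words:
            continue
        avg_sentence_len = len(words) / len(sentences)      # words per sentence
        vocab_variety = len(set(words)) / len(words)        # unique words / total words
        stats.append((i, avg_sentence_len, vocab_variety))
    return stats

if __name__ == "__main__":
    # "essay.txt" is a hypothetical file name; paste the essay there first.
    essay = open("essay.txt", encoding="utf-8").read()
    for num, avg_len, variety in paragraph_stats(essay):
        print(f"para {num}: ~{avg_len:.1f} words/sentence, vocab variety {variety:.2f}")

A sudden jump in both numbers for one paragraph is the kind of thing worth asking about in person, nothing more.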

I’d avoid “gotcha” questions. Ask them to annotate their own essay: why each paragraph is there, what they would cut, what they’d expand. Someone who actually wrote it can usually do that quickly.

From a publishing angle: we see the same issue with copy that’s been over-edited.

If the student is allowed to use Grammarly, your best leverage is process requirements going forward: checkpoints, versioned drafts, short reflections (“what did you change and why”), maybe even a “tool usage note.”

Also: don’t let the detectors dictate consequences. They’re noisy. One false positive and trust is gone, and it’s hard to get back.