The mode shift
When the AI returns an output, most people read it the way they'd read a text from a friend — skim for the vibe, accept if it feels right. That's fine for casual tasks. It's dangerous for anything with stakes.
Discernment means reading an AI output the way a code reviewer reads a pull request. Patient, skeptical, asking one question at a time.
Three discernment checks
Check 1 — Does it answer the actual question?
AI outputs drift. You asked for "three counter-arguments against this policy." You got three well-written paragraphs about the policy, only one of which is actually a counter-argument.
Read once to check: is this answering what I asked?
Check 2 — Are the facts real?
AI outputs fabricate, confidently. Names of people who don't exist. Statutes that don't apply. Citations that look right and aren't.
Spot-check one fact per paragraph. Not all of them: pick the one the reader will rely on most. If that one checks out, you're probably fine. If it fails, don't patch it in place; assume the rest is unreliable and redo the whole output.
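Picking the load-bearing fact is a judgment call, but a first pass can be mechanized. Here is a minimal sketch, assuming a simple heuristic (sentences with numbers and proper names are the most checkable); the scoring is an illustrative assumption, not a vetted method:

```python
import re

def spot_check_candidates(text):
    """For each paragraph, surface the sentence most likely to carry
    a checkable claim. Heuristic: digits and mid-sentence capitalized
    words (names, titles) suggest verifiable specifics."""
    candidates = []
    for para in text.split("\n\n"):
        sentences = re.split(r"(?<=[.!?])\s+", para.strip())

        def score(s):
            # Count digits once, capitalized words (past the first
            # character, to skip sentence-initial capitals) twice.
            return (len(re.findall(r"\d", s))
                    + 2 * len(re.findall(r"\b[A-Z][a-z]+\b", s[1:])))

        best = max(sentences, key=score, default="")
        if best:
            candidates.append(best)
    return candidates
```

The output is a to-verify list, one sentence per paragraph; the verification itself stays manual.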
Check 3 — Does it match your taste?
This one takes honesty. Read the output out loud. Listen for:
- Words you wouldn't use.
- Vague hedges ("many experts agree").
- Transitional filler ("It's important to note that…").
- A voice that isn't yours.
If it doesn't sound like you, ship it anyway only when taste genuinely doesn't matter for the task. Most of the time, taste matters more than you think.
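The read-aloud check is irreplaceable, but the mechanical part (filler and vague hedges) can run as a first-pass filter. A minimal sketch; the phrase lists are illustrative assumptions you should replace with your own tics:

```python
# Example phrases only; extend with patterns from your own editing.
FILLER = ["it's important to note", "it is important to note",
          "in today's fast-paced world", "delve into"]
VAGUE_HEDGES = ["many experts agree", "studies show", "some say"]

def flag_voice_issues(text):
    """Return (phrase, kind) pairs found in the text, as a first pass
    before the human read-aloud check."""
    hits = []
    lowered = text.lower()
    for phrase in FILLER:
        if phrase in lowered:
            hits.append((phrase, "filler"))
    for phrase in VAGUE_HEDGES:
        if phrase in lowered:
            hits.append((phrase, "vague hedge"))
    return hits
```

An empty result doesn't mean the voice is yours; it only means the obvious tells are absent.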
The hardest case
Discernment is hardest when you can't judge the technical content. This is the cruel asymmetry we named in Delegation: AI is most useful on tasks you can least evaluate.
Two moves that help:
- Ask the AI to critique itself. "What's the weakest claim in this response? What would a sharp reviewer challenge?" This surfaces a surprising amount.
- Find a second opinion. Another AI, another human, or literally Googling one citation. Outsourcing a sanity check is cheap.
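The self-critique move is just a prompt template. A minimal sketch of the first move; the wording is one illustrative phrasing, not a canonical prompt:

```python
def critique_prompt(ai_output):
    """Wrap an AI output in a self-critique request, ready to send
    back to the same model or to a second one."""
    return (
        "Here is a response produced by an AI assistant:\n\n"
        f"{ai_output}\n\n"
        "What is the weakest claim in this response? "
        "What would a sharp reviewer challenge first?"
    )
```

Sending the result to a *different* model than the one that wrote the output combines both moves in one step.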
Homework
Take one AI output from the last week — a real one. Apply the three checks in order. Write down what you found. Post one sentence in the lesson discussion: "My biggest discernment miss was ___."
Next lesson: the ethics of a good push-back.
Inspired by Anthropic's "AI Fluency: Framework & Foundations".