The problem with most prompts
A typical beginner prompt looks like this:
Write me a blog post about AI in education.
Then the user is frustrated that the output is generic. Of course it's generic — the request was generic.
A good description has four parts.
Part 1 — The goal, in outcome terms
Not "write a blog post." What happens after someone reads it?
Write a 600-word blog post so a skeptical parent of a middle-schooler understands what changed this year in how their kid uses AI at home.
Notice: audience (skeptical parent), format (600 words), outcome (they understand what changed).
Part 2 — The constraints
What must be true about the output?
- Written in plain language, no jargon.
- First-person (I, me) — the author has a kid too.
- Cites 2 sources, one of which is a teacher or parent.
Constraints are where most AI collaborations go wrong. Without them, you'll get generic competence. With them, you'll get your output.
Part 3 — The non-constraints
This is the part most people skip. Say out loud what's not required:
- You don't have to use the word "AI" in the headline.
- You don't have to defend AI — the piece can be skeptical.
- You can be funny if that fits.
Without explicit permission, the AI will play it safe and produce the blandest plausible version.
Part 4 — The evidence of success
How will you know it's good? Describe the check, not the criterion.
- A skeptical parent reading this should not feel talked down to.
- I should be able to read it to my partner without cringing.
- A teacher should recognize the scenarios.
This sounds abstract. It works anyway — the AI uses these cues to steer the draft.
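If you make lots of requests, it can help to treat the four parts as a fill-in template. Here is a minimal sketch in Python; the function and field names are illustrative, not part of any tool or API from this course:

```python
# Assemble a request from the four parts described in this lesson:
# goal, constraints, non-constraints, and evidence of success.
# The structure and section labels are one possible layout, not a standard.

def build_prompt(goal, constraints, non_constraints, success_checks):
    """Join the four parts into a single prompt string."""
    lines = [goal, "", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines += ["", "Not required (explicit permission):"]
    lines += [f"- {n}" for n in non_constraints]
    lines += ["", "I'll judge the draft by these checks:"]
    lines += [f"- {s}" for s in success_checks]
    return "\n".join(lines)

prompt = build_prompt(
    goal=("Write a 600-word blog post so a skeptical parent of a "
          "middle-schooler understands what changed this year in how "
          "their kid uses AI at home."),
    constraints=[
        "Written in plain language, no jargon.",
        "First-person (I, me); the author has a kid too.",
        "Cites 2 sources, one of which is a teacher or parent.",
    ],
    non_constraints=[
        "You don't have to use the word \"AI\" in the headline.",
        "You don't have to defend AI; the piece can be skeptical.",
        "You can be funny if that fits.",
    ],
    success_checks=[
        "A skeptical parent should not feel talked down to.",
        "I can read it to my partner without cringing.",
        "A teacher should recognize the scenarios.",
    ],
)
print(prompt)
```

The template is a checklist as much as a generator: if a field is empty when you call it, you've found the ambiguity before the AI does.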
Homework
Take a real request you've made of AI recently. Rewrite it with all four parts. Paste the rewritten version into the AI tutor on this lesson's sidebar and ask: "Is there any ambiguity left?"
The tutor will find something. Rewrite again.
Next lesson: handing over context.
Inspired by Anthropic's "AI Fluency: Framework & Foundations".