Free tool · no signup

AI prompt clarity scorer.
Eight dimensions, no AI roundtrip.

Paste any prompt for ChatGPT, Claude, Gemini, or anything else. Get a transparent breakdown — does it have a role, a task verb, a format spec, constraints, examples? Is it focused on one thing or jamming five together? No LLM call. No data sent anywhere.

Try a sample
77 words · 529 chars
Score
88 / 100
Strong
7 of 8 dimensions passed. See the breakdown below for what to add.
Eight-dimension breakdown
  • Length · 12 pts

    Right ballpark (77 words). Specific enough to be useful, focused enough to stay coherent.

  • Role / persona · 12 pts

    Has a role instruction — anchors the model in a specific perspective.

  • Task verb · 14 pts

    Clear task verb ("write") — the model knows what to do.

  • Output format · 12 pts

    Specifies the output format — fewer surprises in the response.

  • Constraints · 0 pts

    No constraints (length cap, tone, audience, what NOT to do). Most prompts benefit from at least one.

  • Examples / few-shot · 10 pts

    Includes example(s) — strongest single lever for shaping the answer's structure.

  • Specificity · 14 pts

    Concrete language — minimal vague filler (stuff / things / good / nice, etc.).

  • Single goal · 14 pts

    Looks focused on a single ask.
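Checks like these can run entirely locally. Here is a minimal sketch of how keyword-based dimension scoring might look, using the point weights from the sample breakdown above; every regex and threshold is an illustrative assumption, not the tool's published rule set:

```python
import re

# Point weights mirror the sample breakdown above. The regexes are
# illustrative assumptions, not the actual heuristics behind this tool.
CHECKS = {
    "length":      (12, lambda p: 30 <= len(p.split()) <= 300),
    "role":        (12, lambda p: bool(re.search(r"\byou are\b|\bact as\b", p, re.I))),
    "task_verb":   (14, lambda p: bool(re.search(r"\b(write|summarize|list|explain|draft)\b", p, re.I))),
    "format":      (12, lambda p: bool(re.search(r"\b(bullet|json|table|markdown|paragraph)", p, re.I))),
    "constraints": (12, lambda p: bool(re.search(r"\b(under|at most|no more than|avoid|don't|tone)\b", p, re.I))),
    "examples":    (10, lambda p: bool(re.search(r"for example|e\.g\.|example:", p, re.I))),
    "specificity": (14, lambda p: len(re.findall(r"\b(stuff|things|good|nice)\b", p, re.I)) < 2),
    "single_goal": (14, lambda p: " and also " not in p.lower()),
}

def score(prompt: str):
    """Return (total points, per-dimension pass/fail) with no LLM call."""
    passed = {name: bool(check(prompt)) for name, (_, check) in CHECKS.items()}
    total = sum(pts for name, (pts, _) in CHECKS.items() if passed[name])
    return total, passed
```

A prompt with a role, task verb, format spec, and example but no constraints would pass 7 of 8 checks under these assumed rules, landing at 88 of 100 as in the sample above.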

Vyrable's editorial brain ships pre-tuned prompt templates per persona, content type, and platform — so every brief that goes to a frontier LLM already passes all eight checks. Not "prompt engineering" — prompt hygiene baked into the workflow.