Everything runs locally. Two runtime dependencies. No API keys needed for the optimizer itself.
## Seven pillars of deterministic prompt optimization
Scores prompts 0–100 across five dimensions: clarity, specificity, completeness, constraints, and efficiency.
Deterministic rules catch scope explosion, missing constraints, hallucination risk, and more.
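Because the rules are deterministic string checks rather than LLM calls, the same prompt always gets the same verdict. A minimal sketch of what such a rule could look like (illustrative only, not the library's actual rule set or types):

```typescript
// Hypothetical sketch of one deterministic prompt rule.
// Not the library's actual implementation.
interface RuleResult {
  rule: string;
  passed: boolean;
  message: string;
}

// Flags vague verbs that tend to produce unpredictable LLM output.
function checkVagueVerbs(prompt: string): RuleResult {
  const vague = ["improve", "enhance", "optimize", "handle"];
  const hits = vague.filter((v) => new RegExp(`\\b${v}\\b`, "i").test(prompt));
  return {
    rule: "vague-verbs",
    passed: hits.length === 0,
    message:
      hits.length > 0
        ? `Replace vague verbs (${hits.join(", ")}) with measurable actions`
        : "ok",
  };
}
```

Rules like this compose into a pass/fail report plus a score, with no network access and fully reproducible results.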
Outputs prompts with role, goal, and constraints — targeting Claude XML, OpenAI system/user, or Markdown.
Multi-stage compression pipeline strips irrelevant content and reports token savings. Zone-aware, preserves structure.
Token and cost estimates across 11 models from 4 providers: Anthropic, OpenAI, Google, and Perplexity.
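The general shape of such an estimator can be sketched as follows. Both the characters-per-token heuristic and the price table below are illustrative placeholders, not the library's tokenizer or current provider pricing:

```typescript
// Rough token/cost estimator sketch — illustrative only. Real
// tokenizers and per-model prices differ; these are placeholders.
const PRICE_PER_MTOK_USD: Record<string, number> = {
  // Placeholder input-token prices, NOT actual provider pricing.
  "model-a": 3.0,
  "model-b": 0.25,
};

function estimateTokens(text: string): number {
  // Common heuristic: roughly 4 characters per token for English prose.
  return Math.ceil(text.length / 4);
}

function estimateCostUSD(text: string, model: string): number {
  const tokens = estimateTokens(text);
  return (tokens / 1_000_000) * PRICE_PER_MTOK_USD[model];
}
```

Running the same estimate against every entry in the price table is what produces a multi-provider comparison.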
Ed25519-signed keys verified locally. No accounts, no network calls, no tracking. Your prompts stay private.
`import { optimize }` — pure functions, zero side effects. Use as a library, not just a server.
Fully offline. Zero LLM calls. Extensively tested. Run the suite yourself to verify.
## Every tool available in the MCP server — 16 free, 3 metered
| # | Tool | Tier | Purpose |
|---|---|---|---|
| 1 | optimize_prompt | Metered | Analyze, score, compile, estimate cost → PreviewPack |
| 2 | refine_prompt | Metered | Answer questions, add edits → updated PreviewPack |
| 3 | approve_prompt | Free | Sign-off gate → final compiled prompt |
| 4 | estimate_cost | Free | Multi-provider token + cost estimator (incl. Perplexity) |
| 5 | compress_context | Free | Smart multi-stage compression pipeline |
| 6 | check_prompt | Free | Quick pass/fail + score + top issues |
| 7 | configure_optimizer | Free | Set mode, threshold, strictness, target |
| 8 | get_usage | Free | Usage count, limits, remaining quota |
| 9 | prompt_stats | Free | Aggregated stats: avg score, top tasks, savings |
| 10 | set_license | Free | Activate Pro/Power/Enterprise license key |
| 11 | license_status | Free | Check license, tier, expiry |
| 12 | classify_task | Free | Classify by task type, complexity, risk, profile |
| 13 | route_model | Free | Route to optimal model with decision_path audit |
| 14 | pre_flight | Metered | Full pipeline: classify → risk → route → score |
| 15 | prune_tools | Free | Score/rank tools by relevance, optionally prune |
| 16 | list_sessions | Free | List session history (metadata only, no raw prompts) |
| 17 | export_session | Free | Full session export with rule-set hash + policy hash |
| 18 | delete_session | Free | Delete a single session by ID |
| 19 | purge_sessions | Free | Bulk purge by age policy, with dry-run + keep_last |
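Registering the server with an MCP client is one config entry. A sketch for Claude Desktop's `claude_desktop_config.json`, assuming the package can be launched via `npx` (check the package README for the exact command and arguments):

```json
{
  "mcpServers": {
    "prompt-optimizer": {
      "command": "npx",
      "args": ["-y", "claude-prompt-optimizer-mcp"]
    }
  }
}
```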
## Who benefits from deterministic prompt optimization
Score and compile prompts before they reach production. Catch ambiguity that leads to unpredictable LLM output.
Multi-provider cost estimates across 11 models tell you what each prompt will cost before you send it.
Run `prompt-lint` as a quality gate. Fail builds when prompt quality drops below your threshold.
Integrates natively with Claude Desktop, Claude Code, and any MCP-compatible client; setup is a single config entry.
Context compression strips irrelevant tokens. Tool pruning removes unnecessary tools from the context window.
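A multi-stage compression pipeline is easiest to picture as a chain of small deterministic passes, each of which shrinks the text and keeps the result measurable. A toy sketch (not the library's actual pipeline or stages):

```typescript
// Toy two-stage compression sketch — illustrative only, not the
// library's pipeline. Each stage is deterministic; order matters.

// Stage 1: collapse runs of three or more newlines into one blank line.
function stripBlankRuns(text: string): string {
  return text.replace(/\n{3,}/g, "\n\n");
}

// Stage 2: drop exact duplicate non-empty lines, keeping first occurrence.
function dedupeLines(text: string): string {
  const seen = new Set<string>();
  return text
    .split("\n")
    .filter((line) => {
      const key = line.trim();
      if (key !== "" && seen.has(key)) return false;
      seen.add(key);
      return true;
    })
    .join("\n");
}

// Run the stages in order and report the savings.
function compress(text: string): { output: string; savedChars: number } {
  const output = dedupeLines(stripBlankRuns(text));
  return { output, savedChars: text.length - output.length };
}
```

Reporting savings per run is what makes the "reports token savings" claim auditable: the number comes from a before/after measurement, not an estimate.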
## Alternatives and their trade-offs
| Method | Pros | Cons |
|---|---|---|
| Manual prompt rewriting | Full control, no tooling needed | Inconsistent quality, not scalable, no scoring |
| Fine-tuning models | Optimized for your domain | Expensive, slow iteration, requires ML expertise |
| Trial-and-error | Quick to start | No structured feedback, wastes tokens, not reproducible |
| Prompt Control Plane | Deterministic scoring, 19 tools, multi-provider cost, offline, CI/CD ready | Deterministic rules only — no semantic understanding of domain context |
## Use as a library — no MCP server required
```shell
npm install claude-prompt-optimizer-mcp
```
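Once installed, the same scoring is available as pure function calls. The snippet below is a usage sketch only: the package exports `optimize`, but the exact signature and return shape should be verified against the package's own type definitions.

```typescript
// Hypothetical usage sketch — check the package's type definitions
// for the real signature before relying on it.
import { optimize } from "claude-prompt-optimizer-mcp";

const result = optimize("Summarize this article in three bullet points.");
console.log(result); // shape depends on the library's PreviewPack type
```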