Features

Built for people who take prompts seriously

Everything runs locally. Two runtime dependencies. No API keys needed for the optimizer itself.

Core Capabilities

Seven pillars of deterministic prompt optimization

🔍

Quality Scoring

0–100 score across five dimensions: clarity, specificity, completeness, constraints, and efficiency.
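Deterministic scoring of this kind can be pictured as a weighted sum of per-dimension heuristics. A minimal sketch, assuming invented weights and regex heuristics (the product's actual rules are not published here; only the five dimension names come from the text):

```javascript
// Illustrative only: the weights and heuristics below are assumptions,
// not the product's real scoring rules.
const WEIGHTS = { clarity: 0.25, specificity: 0.25, completeness: 0.2, constraints: 0.2, efficiency: 0.1 };

function scorePrompt(prompt) {
  const words = prompt.trim().split(/\s+/).filter(Boolean);
  const dims = {
    clarity: /\b(explain|summarize|list|analyze)\b/i.test(prompt) ? 90 : 60,
    specificity: Math.min(100, words.length * 5),      // longer prompts tend to be more specific
    completeness: /\b(format|output|return)\b/i.test(prompt) ? 85 : 50,
    constraints: /\b(must|only|exactly|at most)\b/i.test(prompt) ? 90 : 40,
    efficiency: words.length <= 200 ? 95 : 70,         // penalize very long prompts
  };
  // Weighted sum maps the five dimensions onto a single 0–100 score.
  const total = Object.entries(WEIGHTS).reduce((sum, [k, w]) => sum + dims[k] * w, 0);
  return { dimensions: dims, total: Math.round(total) }; // pure function: same input, same score
}
```

Because every rule is a pure function of the prompt text, the same input always yields the same score.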

🛡️

Ambiguity Detection

Deterministic rules catch scope explosion, missing constraints, hallucination risk, and more.
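A rule engine like this can be a list of pure predicates. This sketch uses invented regex rules named after the issue categories in the text; the product's real detectors are presumably far more extensive:

```javascript
// Illustrative rule set: each rule is a pure predicate over the prompt text.
const RULES = [
  { id: 'scope_explosion', test: p => /\b(everything|all possible|comprehensive)\b/i.test(p) },
  { id: 'missing_constraints', test: p => !/\b(must|only|exactly|limit|within|format)\b/i.test(p) },
  { id: 'hallucination_risk', test: p => /\b(statistics|cite|sources|facts)\b/i.test(p)
                                         && !/\b(provided|attached|below|above)\b/i.test(p) },
];

function detectIssues(prompt) {
  // Deterministic: filtering by pure regex rules, no model calls involved.
  return RULES.filter(r => r.test(prompt)).map(r => r.id);
}
```

A vague prompt like "List everything about AI" would trip both the scope and constraint rules under these example predicates.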

🎯

Structured Compilation

Outputs prompts with role, goal, and constraints — targeting Claude XML, OpenAI system/user, or Markdown.
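The same role/goal/constraints spec can be rendered differently per target. A hedged sketch (the real output shapes are not documented here; this only shows the idea of one spec, three renderings):

```javascript
// Illustrative compiler: one structured spec, three target formats.
function compile(spec, target) {
  const { role, goal, constraints } = spec;
  if (target === 'claude') {
    // XML-style tags, the format Claude models handle well
    return `<role>${role}</role>\n<goal>${goal}</goal>\n<constraints>\n${constraints.map(c => `- ${c}`).join('\n')}\n</constraints>`;
  }
  if (target === 'openai') {
    // system/user message pair for chat-completion APIs
    return [
      { role: 'system', content: `${role}. Constraints: ${constraints.join('; ')}` },
      { role: 'user', content: goal },
    ];
  }
  // Markdown fallback for any other target
  return `## Role\n${role}\n\n## Goal\n${goal}\n\n## Constraints\n${constraints.map(c => `- ${c}`).join('\n')}`;
}
```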

🗜️

Context Compression

Multi-stage compression pipeline strips irrelevant content and reports token savings. Zone-aware, preserves structure.
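A multi-stage pipeline with a "never grows" guarantee can be sketched as a chain of pure stages, where each stage's output is only accepted if it is shorter. The stages below are invented examples (character counts stand in for tokens):

```javascript
// Illustrative compression stages; the product's real pipeline is not shown here.
const stages = [
  t => t.replace(/[ \t]+/g, ' '),                                   // collapse runs of spaces/tabs
  t => t.replace(/\b(basically|actually|really|very)\b\s*/gi, ''),  // drop filler words
  t => t.split('\n').filter(l => l.trim() !== '').join('\n'),       // drop blank lines
];

function compress(text) {
  let out = text;
  for (const stage of stages) {
    const next = stage(out);
    if (next.length < out.length) out = next;  // guard: a stage is skipped if it would grow the text
  }
  const saved = text.length - out.length;
  return { compressed: out, savedChars: saved, savedPct: Math.round((100 * saved) / text.length) };
}
```

The per-stage guard is what makes the output guaranteed never longer than the input.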

💰

Multi-Provider Cost

Token and cost estimates across 11 models from 4 providers: Anthropic, OpenAI, Google, and Perplexity.
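The cost arithmetic itself is simple: token counts times per-million-token rates. A sketch with placeholder prices (the numbers and model names below are illustrative assumptions, not verified current pricing):

```javascript
// Placeholder USD prices per million tokens; real provider rates change over time.
const PRICES = {
  'claude-sonnet': { in: 3.0, out: 15.0 },
  'gpt-4o':        { in: 2.5, out: 10.0 },
  'gemini-pro':    { in: 1.25, out: 5.0 },
};

function estimateCost(model, inputTokens, outputTokens) {
  const p = PRICES[model];
  if (!p) throw new Error(`unknown model: ${model}`);
  // Rates are quoted per 1M tokens, so divide at the end.
  return (inputTokens * p.in + outputTokens * p.out) / 1_000_000;
}
```

For example, 1,000 input tokens and 500 output tokens at the placeholder claude-sonnet rates comes to $0.0105.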

🔒

Offline License

Ed25519-signed keys verified locally. No accounts, no network calls, no tracking. Your prompts stay private.

📦

Programmatic API

import { optimize } — pure functions, zero side effects. Use as a library, not just a server.

Why You Can Trust the Output

Reproducible: Same prompt always produces the same score, routing, and cost estimate. No randomness.
Accurate Pricing: Cost estimates verified against live provider rates across Anthropic, OpenAI, Google, and Perplexity.
Risk Detection: Catches scope explosion, missing constraints, hallucination risk, and underspecified prompts before they reach an LLM.
Safe Compression: Guaranteed to never increase token count. Structured content (code, tables, lists) is always protected.
Multi-Provider: Routes to the right model at the right price, with a full decision trail explaining why.
Auditable: Every decision is logged with tamper-evident integrity. Enterprise teams get cryptographic proof of compliance.
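The "decision trail" idea above can be sketched as a pure routing function that records why each choice was made. The rules, model names, and trail format here are invented for illustration:

```javascript
// Illustrative router: every branch appends a human-readable reason.
function routeModel(task) {
  const trail = [];
  let model = 'claude-haiku';                         // cheap default
  trail.push(`default → ${model}`);
  if (/\b(code|refactor|debug)\b/i.test(task)) {
    model = 'claude-sonnet';
    trail.push('code-related task → claude-sonnet');
  }
  if (task.length > 2000) {
    model = 'gemini-pro';
    trail.push('very long context → gemini-pro');
  }
  // Pure function of the input: the same task always yields the same trail.
  return { model, decision_path: trail };
}
```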

Fully offline. Zero LLM calls. Extensively tested. Run the suite yourself to verify.

All 19 Tools

Every tool available in the MCP server — 16 free, 3 metered

# Tool Tier Purpose
1 optimize_prompt Metered Analyze, score, compile, estimate cost → PreviewPack
2 refine_prompt Metered Answer questions, add edits → updated PreviewPack
3 approve_prompt Free Sign-off gate → final compiled prompt
4 estimate_cost Free Multi-provider token + cost estimator (incl. Perplexity)
5 compress_context Free Smart multi-stage compression pipeline
6 check_prompt Free Quick pass/fail + score + top issues
7 configure_optimizer Free Set mode, threshold, strictness, target
8 get_usage Free Usage count, limits, remaining quota
9 prompt_stats Free Aggregated stats: avg score, top tasks, savings
10 set_license Free Activate Pro/Power/Enterprise license key
11 license_status Free Check license, tier, expiry
12 classify_task Free Classify by task type, complexity, risk, profile
13 route_model Free Route to optimal model with decision_path audit
14 pre_flight Metered Full pipeline: classify → risk → route → score
15 prune_tools Free Score/rank tools by relevance, optionally prune
16 list_sessions Free List session history (metadata only, no raw prompts)
17 export_session Free Full session export with rule-set hash + policy hash
18 delete_session Free Delete a single session by ID
19 purge_sessions Free Bulk purge by age policy, with dry-run + keep_last

Use Cases

Who benefits from deterministic prompt optimization

Developers building AI features

Score and compile prompts before they reach production. Catch ambiguity that leads to unpredictable LLM output.

Engineering teams managing LLM spend

Multi-provider cost estimates across 11 models. Know exactly what each prompt will cost before you send it.

CI/CD pipelines

Run prompt-lint as a quality gate. Fail builds when prompt quality drops below your threshold.
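A minimal gate script might look like this; the `check()` call stands in for whatever scoring entry point you wire up, and the threshold and `{ score }` result shape are assumptions for illustration:

```javascript
// Sketch of a CI quality gate: fail the build if any prompt scores below threshold.
function gate(prompts, check, threshold = 70) {
  const failures = prompts
    .map(p => ({ prompt: p, score: check(p).score }))
    .filter(r => r.score < threshold);
  if (failures.length > 0) {
    for (const f of failures) console.error(`FAIL (${f.score}): ${f.prompt}`);
    process.exitCode = 1;  // non-zero exit fails the CI job
    return false;
  }
  return true;
}
```

Run it over your repo's prompt files in a pre-merge step, the same way you would run a linter.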

MCP-powered workflows

Integrates natively with Claude Desktop, Claude Code, and any MCP-compatible client. No configuration needed.

Reducing LLM costs

Context compression strips irrelevant tokens. Tool pruning removes unnecessary tools from the context window.
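Tool pruning can be sketched as relevance ranking: score each tool against the task and keep only the top few. The keyword-overlap scoring below is an invented stand-in for the product's actual ranking:

```javascript
// Illustrative pruning: rank tools by keyword overlap with the task, keep top N.
function pruneTools(task, tools, keep = 3) {
  const taskWords = new Set(task.toLowerCase().split(/\W+/).filter(Boolean));
  return tools
    .map(t => ({ ...t, relevance: t.keywords.filter(k => taskWords.has(k)).length }))
    .sort((a, b) => b.relevance - a.relevance)
    .slice(0, keep);  // everything below the cut stays out of the context window
}
```

Fewer tool definitions in the context window means fewer tokens spent before the model even sees the task.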

How It Compares

Alternatives and their trade-offs

Method | Pros | Cons
Manual prompt rewriting | Full control, no tooling needed | Inconsistent quality, not scalable, no scoring
Fine-tuning models | Optimized for your domain | Expensive, slow iteration, requires ML expertise
Trial-and-error | Quick to start | No structured feedback, wastes tokens, not reproducible
Prompt Control Plane | Deterministic scoring, 19 tools, multi-provider cost, offline, CI/CD ready | Deterministic rules only — no semantic understanding of domain context

Programmatic API

Use as a library — no MCP server required

Install
npm install claude-prompt-optimizer-mcp

import { optimize } from 'claude-prompt-optimizer-mcp';

const result = await optimize('Summarize this document');
console.log(result.quality_score, result.compiled);

// Target any LLM
const forOpenAI = await optimize('Analyze sales data', { target: 'openai' });
const forClaude = await optimize('Analyze sales data', { target: 'claude' });
const generic = await optimize('Analyze sales data', { target: 'generic' });

Start optimizing prompts today

Get Started Free →