# AI
`mandu ai` is a terminal playground for streaming chat with Claude, OpenAI, Gemini, or a local Ollama / LM-Studio instance, plus the prompt template system, four new MCP tools, the loop-closure framework, and the auto skills generator.
Mandu ships first-class terminal-based AI tooling: a streaming chat playground, a non-interactive eval harness, a prompt template system, four new MCP tools for agents, a pure loop-closure framework, and an auto skills generator, all under the `mandu ai` namespace or wired into `@mandujs/mcp` and `@mandujs/skills`.
# Streaming chat with a local Ollama instance (no API key required)
mandu ai chat --provider=local
# One-shot eval across multiple providers
mandu ai eval --prompt="Summarize this repo" \
--providers=local,claude,openai
## Providers at a glance

| Provider | Env var | When to use |
|---|---|---|
| `claude` | `MANDU_CLAUDE_API_KEY` | Long-context reasoning, tool use |
| `openai` | `MANDU_OPENAI_API_KEY` | Diverse model catalog, structured output |
| `gemini` | `MANDU_GEMINI_API_KEY` | Multimodal, free tier available |
| `local` | `MANDU_LOCAL_BASE_URL` (optional) | Ollama / LM-Studio / deterministic echo |
The `local` provider ships a deterministic echo responder that works with no API key, which makes it ideal for CI, smoke tests, and demos. Set `MANDU_LOCAL_BASE_URL=http://127.0.0.1:11434` to hit a real Ollama or LM-Studio instance.
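The deterministic echo path can be sketched roughly as below. The type and function names (`ChatMessage`, `echoRespond`) are illustrative assumptions for this page, not Mandu's actual API; the point is that the reply is a pure function of the conversation, so CI runs and snapshot tests always see identical output.

```typescript
// Illustrative sketch of a deterministic echo responder like the one the
// `local` provider can fall back to when MANDU_LOCAL_BASE_URL is unset.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

function echoRespond(history: ChatMessage[]): ChatMessage {
  // Find the most recent user message; the reply depends only on it,
  // so identical inputs always yield identical outputs (no network, no key).
  const last = [...history].reverse().find((m) => m.role === "user");
  return {
    role: "assistant",
    content: `echo: ${last?.content ?? ""}`,
  };
}
```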
## Pages

- Chat — `mandu ai chat` REPL, slash commands, history schema.
- Eval — `mandu ai eval` non-interactive harness, cross-provider comparison.
- Prompts — prompt template system, adapters architecture, presets.
- MCP tools — 4 new tools: `run.tests`, `deploy.preview`, `ai.brief`, `loop.close`.
- Loop closure — pure detector/emitter framework for recognizing stall patterns.
- Skills generator — `@mandujs/skills` auto skill-manifest generator.
## Security posture

- API keys never logged. All error messages mask tokens as `sk-***`. The adapter layer never surfaces the raw key.
- Bun.secrets fallback. If the env var is absent, `mandu ai` reads from the OS keychain (macOS Keychain, Windows Credential Manager, Linux libsecret).
- Slash-command arguments are escaped. `/preset ../etc` is rejected by a strict alphanumeric allow-list before any file open.
- Non-UTF8 / NUL-containing input is rejected with CLI_E308 so it never hits the adapter's HTTP body.
- `mandu.loop.close` is pure. No `fs`, no `spawn`, no `fetch`. Output is advisory text; an orchestrator decides whether to feed it back into the agent.
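The allow-list check described above can be sketched like this. `validatePresetName` is a hypothetical helper, not Mandu's real export; it simply shows how a strict alphanumeric pattern blocks path traversal before any filesystem access.

```typescript
// Sketch of a strict alphanumeric allow-list for slash-command arguments.
// Anything that is not purely [A-Za-z0-9] is rejected, so traversal
// payloads like "../etc" never reach a file open.
function validatePresetName(name: string): boolean {
  return /^[A-Za-z0-9]+$/.test(name);
}
```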
## Troubleshooting

| Code | Meaning | Fix |
|---|---|---|
| CLI_E300 | API key missing | Export the right `MANDU_*_API_KEY` |
| CLI_E301 | Stream failed | Check network / provider status |
| CLI_E302 | Malformed history | Regenerate with `/save` |
| CLI_E303 | Preset not found | Check `docs/prompts/<name>.md` |
| CLI_E307 | Timeout | Raise `MANDU_AI_TIMEOUT_MS` |
| CLI_E308 | Non-UTF8 / NUL input | Sanitize before re-prompting |
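A pre-flight check mirroring the CLI_E308 rule can be sketched as follows. `isSafePrompt` is an illustrative name, not part of Mandu's CLI; it shows one way to detect NUL bytes and malformed UTF-8 before input reaches an adapter's HTTP body.

```typescript
// Reject input containing NUL bytes or invalid UTF-8 sequences,
// matching the behavior the docs describe for CLI_E308.
function isSafePrompt(input: Uint8Array): boolean {
  if (input.includes(0)) return false; // embedded NUL byte
  try {
    // With fatal: true, TextDecoder throws on malformed UTF-8.
    new TextDecoder("utf-8", { fatal: true }).decode(input);
    return true;
  } catch {
    return false;
  }
}
```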
## 🤖 Agent Prompt
Apply the guidance from the Mandu docs page at https://mandujs.com/docs/ai/index to my project.
Summary of the page:
AI category hub. Four providers: claude, openai, gemini, local (echo + Ollama/LM-Studio). Two CLI commands: `mandu ai chat` (REPL) + `mandu ai eval` (non-interactive). Phase 14 ships: 4 new MCP tools (run.tests, deploy.preview, ai.brief, loop.close), prompt templates, loop closure framework.
Required invariants — must hold after your changes:
- Four providers: `claude`, `openai`, `gemini`, `local` (Ollama/LM-Studio compatible)
- API keys NEVER logged — errors mask them as `sk-***`
- Local provider echo responder works offline with no API key — deterministic, CI-safe
- History JSON schema v1 — `/save` produces portable exports
- MCP tool `mandu.loop.close` is pure — no I/O, no spawn, advisory text only
Then:
1. Make the change in my codebase consistent with the page.
2. Run `bun run guard` and `bun run check` to verify nothing in src/ or app/ breaks Mandu's invariants.
3. Show me the diff and any guard violations.
## Related

- MCP tools — how the 4 new tools plug into `@mandujs/mcp`.
- Build with Agents — MCP setup — wire Claude Code / Cursor / Continue / Aider.
- CLI — `mcp register` — automated merge-safe registration.
## For Agents
{
"schema": "mandu.ai/v0.25",
"commands": {
"chat": "mandu ai chat",
"eval": "mandu ai eval"
},
"providers": ["claude", "openai", "gemini", "local"],
"env_vars": {
"claude": "MANDU_CLAUDE_API_KEY",
"openai": "MANDU_OPENAI_API_KEY",
"gemini": "MANDU_GEMINI_API_KEY",
"local": "MANDU_LOCAL_BASE_URL (optional)"
},
"mcp_tools": [
"mandu.run.tests",
"mandu.deploy.preview",
"mandu.ai.brief",
"mandu.loop.close"
],
"rules": [
"API keys are never logged — adapter layer masks as sk-***",
"Local provider works offline with deterministic echo — CI-safe",
"loop.close is pure: no I/O, advisory text only"
]
}