Works with: Claude Code, Codex CLI, OpenCode, Gemini CLI, pi-agent, and more.
Getting Started | Usage Guide | Handbook - Skills, Agents, Templates

ace-llm gives developers and coding agents one command surface for querying any LLM provider. Address models by alias (gflash, sonnet), by explicit provider:model notation, with a thinking-level suffix (codex:gpt-5:high), or with an execution preset (cc@ro). Pass prompts and system instructions inline or as file paths. Fallback routing and retry behavior keep prompt workflows resilient.
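The addressing forms above might look like this in practice. These invocations are illustrative sketches, not documented commands: the `ace-llm <model> <prompt>` argument shape is an assumption.

```shell
# Query by alias (resolved through versioned YAML in .ace-defaults/)
ace-llm gflash "Summarize this diff"

# Explicit provider:model notation
ace-llm codex:gpt-5 "Explain the failing test"

# Thinking-level suffix to raise the reasoning budget
ace-llm codex:gpt-5:high "Plan a refactor of the router"

# Execution preset appended with @ (read-only profile)
ace-llm cc@ro "Audit this repository for TODOs"
```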
## How It Works
- Select a model by alias, by provider:model, with a thinking-level suffix (:low/:medium/:high), or with an @preset, and submit a prompt.
- The provider router resolves the target through ace-llm-providers-cli adapters, applying fallback and retry rules from the config cascade.
- The response is returned as text, markdown, or JSON with optional token usage metadata.
## Use Cases
Switch providers with aliases - use short names like gflash, sonnet, opus instead of full provider:model notation. Aliases resolve through versioned YAML in .ace-defaults/.
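A sketch of what a versioned alias file under .ace-defaults/ could contain. The file name, key layout, and the specific model targets shown are assumptions, not documented by this project:

```yaml
# .ace-defaults/aliases.yaml (hypothetical layout)
aliases:
  gflash: gemini:flash       # short name -> provider:model
  sonnet: claude:sonnet
  opus: claude:opus
```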
Control reasoning depth - append a thinking level (codex:gpt-5:high, claude:sonnet:low) to tune reasoning budgets. Supported CLI providers: claude, codex (levels: low, medium, high, xhigh).
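For instance, the same task could be run at different reasoning budgets. The command shape is illustrative; only the model:level notation comes from this document:

```shell
ace-llm claude:sonnet:low "Rename this variable"       # shallow, cheap pass
ace-llm codex:gpt-5:high "Prove this invariant holds"  # larger reasoning budget
```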
Run preset-driven prompts - apply execution profiles with @preset or --preset. Built-in presets for CLI providers: @ro (read-only), @rw (read-write), @yolo (full autonomy). Supported by: claude, codex, gemini, opencode, pi.
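A preset could be applied either inline or via the flag form. The @preset and --preset spellings appear above; the surrounding argument order is an assumption:

```shell
ace-llm cc@ro "List files that import the router"        # inline preset
ace-llm codex:gpt-5 --preset ro "Fix the lint errors"    # flag form
```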
Build resilient prompt workflows - configure fallback chains and retry behavior through the config cascade so transient provider issues do not block work.
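One way fallback chains and retry behavior might be expressed in the config cascade. This is purely a sketch; the keys and file location are assumptions:

```yaml
# hypothetical config-cascade entry
routing:
  fallback:            # tried in order when the primary target fails
    - codex:gpt-5
    - claude:sonnet
    - gflash
  retry:
    attempts: 3
    backoff_seconds: 2
```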
Power LLM-enhanced flows in sibling packages - serve as the execution backend for ace-git-commit, ace-idea, ace-review, ace-sim, ace-prompt-prep, and more.
Part of ACE