CLI Runners
CLI Runners extend GenieBuilder's capabilities by integrating external command-line AI agents. These CLI tools often have their own MCP (Model Context Protocol) support, allowing them to use the same tool ecosystem as built-in providers.
Supported CLI Runners
GenieBuilder supports the following CLI-based AI agents:
| CLI Runner | Provider | MCP Support | Best For |
|---|---|---|---|
| Claude Code | Anthropic | ✅ Yes | Complex implementation tasks |
| Kimi CLI | Moonshot AI | ✅ Yes | Fast, efficient coding |
| Codex CLI | OpenAI | ✅ Yes | Code review and analysis |
| Gemini CLI | Google | ✅ Yes | Multi-modal capabilities |
| GitHub Copilot CLI | GitHub | ✅ Yes | IDE-integrated workflows |
| Custom CLI | Any | Configurable | Bring your own CLI tool |
Why Use CLI Runners?
CLI Runners offer several advantages:
- Extended capabilities - CLI tools may have features not available via API
- Local processing - Some CLI tools run entirely locally
- MCP integration - Access to file system, git, and custom tools
- Cost control - Some CLI tools offer different pricing models
- Specialized agents - Different CLI tools excel at different tasks
Configuration
Adding a CLI Runner
- Open Settings → CLI Runners
- Click Add CLI Runner
- Select the CLI type or choose "Custom" for unsupported tools
- Configure the executable path and optional arguments
- Test the connection
- Enable the runner for use in chat and workflows
Configuration Options
Each CLI runner supports these settings:
| Setting | Description | Example |
|---|---|---|
| Name | Display name in the UI | "Claude (Local)" |
| Executable | Path to the CLI binary | /usr/local/bin/claude |
| Arguments | Additional CLI arguments | ["--verbose"] |
| Environment Variables | Custom env vars | {"ANTHROPIC_API_KEY": "..."} |
| MCP Config Flag | Flag for MCP config path | --mcp-config |
| Enabled | Whether the runner is active | ✅ |
MCP Configuration Generation
When a CLI runner is used, GenieBuilder automatically:
- Generates a temporary MCP config JSON file
- Includes all enabled MCP servers (Backlog, custom servers)
- Passes the config path to the CLI via the configured flag
- Cleans up the temporary file after execution
Example generated MCP config:

```json
{
  "mcpServers": {
    "backlog": {
      "command": "backlog",
      "args": ["mcp", "start"]
    },
    "custom-server": {
      "command": "custom-mcp",
      "args": ["--port", "8080"]
    }
  }
}
```
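The lifecycle above can be sketched in a few lines of Python. This is an illustrative model, not GenieBuilder's actual implementation; the helper names (`write_mcp_config`, `build_cli_command`) are hypothetical.

```python
import json
import os
import tempfile

def write_mcp_config(servers: dict) -> str:
    """Write a temporary MCP config file and return its path.

    `servers` maps server names to {"command": ..., "args": [...]} entries,
    mirroring the generated config shown above.
    """
    fd, path = tempfile.mkstemp(suffix=".json", prefix="mcp-config-")
    with os.fdopen(fd, "w") as f:
        json.dump({"mcpServers": servers}, f, indent=2)
    return path

def build_cli_command(executable: str, mcp_flag: str, config_path: str, extra_args=None):
    """Assemble the argv list: executable, MCP flag + config path, then extras."""
    return [executable, mcp_flag, config_path, *(extra_args or [])]

# Full lifecycle: generate the config, build the command, clean up afterwards.
path = write_mcp_config({"backlog": {"command": "backlog", "args": ["mcp", "start"]}})
try:
    cmd = build_cli_command("claude", "--mcp-config", path,
                            ["--output-format", "stream-json"])
finally:
    os.remove(path)  # the temp file is removed after execution
```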
CLI Runner Details
Claude Code
Anthropic's Claude Code CLI provides agentic coding capabilities.
Installation: `npm install -g @anthropic-ai/claude-code`
Default configuration:
- Executable: `claude`
- Prompt delivery: `stdin` (piped)
- Output format: `stream-json`
- MCP flag: `--mcp-config`
Example usage:
```bash
echo "prompt" | claude -p --mcp-config config.json --output-format stream-json
```
Features:
- Rich streaming JSON output
- Built-in tool use capabilities
- File editing and shell command execution
- Automatic context management
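Stdin prompt delivery, as in the `echo "prompt" | claude ...` example above, can be sketched with `subprocess`. This is a hedged sketch rather than GenieBuilder's actual code; the demonstration uses `cat` as a stand-in for the real `claude` binary.

```python
import subprocess

def run_cli_with_stdin(argv, prompt: str, timeout: float = 60.0) -> str:
    """Pipe the prompt to the CLI over stdin and capture its stdout."""
    result = subprocess.run(
        argv,
        input=prompt,          # delivered over stdin, not as an argument
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    result.check_returncode()  # raises if the CLI exited non-zero
    return result.stdout

# Demonstration with `cat` standing in for the real CLI binary:
output = run_cli_with_stdin(["cat"], "Refactor the login handler")
```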
Kimi CLI
Moonshot AI's Kimi CLI offers efficient code generation and editing.
Installation: Follow the instructions in the kimi-cli repository
Default configuration:
- Executable: `kimi`
- Prompt delivery: args (`--prompt` flag)
- Output format: `stream-json`
- MCP flag: `--mcp-config`
Example usage:
```bash
kimi --yolo --print --output-format stream-json --prompt "hello"
```
Features:
- `--yolo` mode for automatic execution
- Clean JSONL output
- Fast response times
Codex CLI
OpenAI's Codex CLI is designed for code review and implementation.
Installation: `npm install -g @openai/codex`
Default configuration:
- Executable: `codex`
- Prompt delivery: args (trailing positional)
- Output format: `json`
- MCP flag: `--mcp-config`
Example usage:
```bash
codex exec --model gpt-5.3-codex --full-auto --json "prompt"
```
Features:
- `--full-auto` mode for automatic execution
- JSONL event stream
- Acceptable exit codes: `[1]` (may exit 1 after successful completion)
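Because Codex may exit 1 after a successful run, a naive "non-zero means failure" check would misreport success. A minimal sketch of exit-code handling (the function name is illustrative, and `sh -c "exit 1"` stands in for a Codex run):

```python
import subprocess

def run_with_acceptable_exit_codes(argv, acceptable=(1,), **kwargs):
    """Run a CLI, treating the listed non-zero exit codes as success.

    A plain check_returncode() would raise for Codex even when the
    run completed successfully.
    """
    result = subprocess.run(argv, capture_output=True, text=True, **kwargs)
    if result.returncode != 0 and result.returncode not in acceptable:
        raise RuntimeError(f"CLI failed with exit code {result.returncode}")
    return result

# Stand-in for a Codex run that completes its work, then exits 1:
result = run_with_acceptable_exit_codes(["sh", "-c", "echo done; exit 1"])
```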
Gemini CLI
Google's Gemini CLI provides access to Gemini models.
Default configuration:
- Executable: `gemini`
- Prompt delivery: `stdin` (piped)
- Output format: plain text
- MCP flag: `--mcp-config`
Example usage:
```bash
echo "prompt" | gemini --mcp-config config.json
```
GitHub Copilot CLI
GitHub Copilot's CLI for command-line AI assistance.
Default configuration:
- Executable: `copilot`
- Prompt delivery: `stdin` (piped)
- Output format: plain text
- MCP flag: `--additional-mcp-config`
Example usage:
```bash
echo "prompt" | copilot --allow-all --additional-mcp-config @config.json
```
Note: The MCP config path is prefixed with `@` for Copilot CLI.
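The Copilot-specific `@` prefixing can be captured in a tiny helper. A sketch; the function name is hypothetical:

```python
def copilot_mcp_args(config_path: str) -> list:
    """Build Copilot's MCP arguments, prefixing the config path with '@'
    as the CLI expects (unlike the other runners, which take a bare path)."""
    return ["--additional-mcp-config", f"@{config_path}"]

args = copilot_mcp_args("/tmp/mcp-config.json")
```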
Custom CLI Runner
For CLI tools not natively supported, use the Custom CLI Runner type.
Configuration options:
- Prompt Delivery: `stdin` or `args`
- Output Parsing: `json` or `plain`
- Line Delimiter: Character for splitting output lines
- Exit Code Handling: Which exit codes indicate success
Example custom configuration:

```yaml
Name: My Custom AI
Executable: /path/to/my-ai-cli
Arguments: ["--mode", "agent"]
Environment Variables:
  MY_AI_KEY: "sk-..."
Prompt Delivery: stdin
MCP Config Flag: --mcp-config
Output Format: plain
```
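The custom runner settings above could be modeled as a small config object that assembles the final command line. This is an assumption-laden sketch, not GenieBuilder's internal representation; all field and method names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class CustomRunnerConfig:
    """Mirror of the custom runner settings shown above."""
    name: str
    executable: str
    arguments: list = field(default_factory=list)
    env: dict = field(default_factory=dict)
    prompt_delivery: str = "stdin"    # "stdin" or "args"
    mcp_config_flag: str = "--mcp-config"
    output_format: str = "plain"      # "json" or "plain"

    def build_argv(self, mcp_config_path: str, prompt: str) -> list:
        argv = [self.executable, *self.arguments,
                self.mcp_config_flag, mcp_config_path]
        if self.prompt_delivery == "args":
            argv.append(prompt)       # trailing positional prompt
        return argv                   # stdin delivery pipes the prompt instead

cfg = CustomRunnerConfig(
    name="My Custom AI",
    executable="/path/to/my-ai-cli",
    arguments=["--mode", "agent"],
    env={"MY_AI_KEY": "sk-..."},
)
argv = cfg.build_argv("/tmp/mcp.json", "hello")
```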
Using CLI Runners
In Chat
CLI runners appear in the provider dropdown alongside cloud providers:
- Open a chat tab
- Click the provider selector
- Choose a CLI runner (e.g., "Claude")
- Send messages as normal
The CLI runner executes the prompt and streams back the response.
In Workflows
CLI runners can be assigned to workflow steps:
- Create or edit a workflow
- Select a step
- Set executor type to "CLI Runner"
- Choose the runner from the dropdown
- Optionally specify a model (if the CLI supports it)
Example workflow step configuration:
```json
{
  "id": "step-implement",
  "type": "step",
  "label": "Implement",
  "executor": {
    "type": "cli-runner",
    "id": "claude"
  },
  "toolSet": {
    "git": true,
    "backlog": true,
    "shell": true
  },
  "promptTemplate": "Implement the following task..."
}
```
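At execution time, a step's `executor` block has to be resolved to a registered runner. A minimal sketch of that resolution, assuming a simple dict registry (the function name and registry shape are hypothetical):

```python
def resolve_cli_runner_step(step: dict, registered_runners: dict):
    """Resolve a workflow step's executor to a registered CLI runner.

    `registered_runners` maps runner ids (e.g. "claude") to their configs,
    matching the step JSON shape shown above.
    """
    executor = step.get("executor", {})
    if executor.get("type") != "cli-runner":
        raise ValueError("step is not assigned to a CLI runner")
    runner_id = executor.get("id")
    if runner_id not in registered_runners:
        raise KeyError(f"unknown CLI runner: {runner_id!r}")
    return registered_runners[runner_id]

step = {"id": "step-implement", "executor": {"type": "cli-runner", "id": "claude"}}
runner = resolve_cli_runner_step(step, {"claude": {"executable": "claude"}})
```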
Multi-Agent Workflows
A powerful pattern is using different CLI runners for different steps:
```
[Implement Step] → Uses Claude (implementation focus)
        ↓
[Review Step]    → Uses Codex (review focus)
        ↓
[Fix Step]       → Uses Claude (implementation focus)
```
This leverages each tool's strengths while maintaining a cohesive workflow.
Output Parsing
Each CLI runner type has specialized output parsing:
JSON Stream Parsing
Claude, Kimi, and Codex output structured JSON events:
```json
{"type": "assistant", "message": {"content": [{"type": "text", "text": "Hello"}]}}
```
GenieBuilder extracts text content from these events for display.
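That extraction can be sketched as a small JSONL filter. A simplified model, not the actual parser; it assumes events shaped like the example above and silently skips anything else.

```python
import json

def extract_text(raw_lines):
    """Pull assistant text out of a stream of JSON events."""
    parts = []
    for line in raw_lines:
        line = line.strip()
        if not line:
            continue
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON noise in the stream
        if event.get("type") != "assistant":
            continue  # ignore tool_use, system, and other event types
        for block in event.get("message", {}).get("content", []):
            if block.get("type") == "text":
                parts.append(block["text"])
    return "".join(parts)

stream = [
    '{"type": "assistant", "message": {"content": [{"type": "text", "text": "Hello"}]}}',
    '{"type": "tool_use", "name": "shell"}',
]
text = extract_text(stream)
```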
Plain Text Parsing
Gemini and Copilot output plain text that is displayed directly.
ANSI Handling
All CLI runners strip ANSI escape codes from output for clean display.
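ANSI stripping is typically a single regex pass. A sketch of one common approach (the exact pattern GenieBuilder uses is not specified here):

```python
import re

# Matches CSI sequences (e.g. colors: "\x1b[31m") and other two-byte escapes.
ANSI_ESCAPE = re.compile(r"\x1b(?:\[[0-9;?]*[ -/]*[@-~]|[@-Z\\-_])")

def strip_ansi(text: str) -> str:
    """Remove ANSI escape codes so CLI output displays cleanly."""
    return ANSI_ESCAPE.sub("", text)

clean = strip_ansi("\x1b[32mdone\x1b[0m")
```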
Troubleshooting
Connection Test Failed
If the connection test fails:
- Verify executable path - Run `which <cli-name>` in a terminal
- Check permissions - Ensure the binary is executable
- Test manually - Try running the CLI directly in a terminal
- Check environment - Verify required env vars are set
No MCP Tools Available
If the CLI runner doesn't have access to MCP tools:
- Verify the MCP config flag is correct for your CLI version
- Check that MCP servers are enabled in Settings
- Review the generated MCP config file (enable debug logging)
Empty or Garbled Output
If output appears empty or garbled:
- Check output format - Some CLIs need explicit `--json` or `--output-format` flags
- Verify prompt delivery - Ensure stdin vs. args is correct for the CLI
- Review exit codes - Some CLIs exit non-zero on success (e.g., Codex)
Timeout Issues
CLI runners have a 30-second timeout for connection tests. For long-running operations:
- Use the chat interface rather than quick tests
- Ensure the CLI is warmed up (first run may be slower)
- Check network connectivity for cloud-based CLIs
Security Considerations
- CLI runners execute with the same permissions as the GenieBuilder process
- Environment variables may contain sensitive API keys
- Temporary MCP config files are cleaned up after execution
- Tool approvals still apply for mutating MCP operations
Best Practices
- Use absolute paths for executables to avoid PATH issues
- Test each runner individually before using in workflows
- Start with simple prompts to verify basic connectivity
- Leverage tool sets - Only grant necessary permissions
- Monitor conversation logs for debugging complex workflows