CLI Runners

CLI Runners extend GenieBuilder's capabilities by integrating external command-line AI agents. These CLI tools often have their own MCP (Model Context Protocol) support, allowing them to use the same tool ecosystem as built-in providers.

Supported CLI Runners

GenieBuilder supports the following CLI-based AI agents:

CLI Runner         | Provider     | MCP Support  | Best For
-------------------|--------------|--------------|------------------------------
Claude Code        | Anthropic    | ✅ Yes       | Complex implementation tasks
Kimi CLI           | Moonshot AI  | ✅ Yes       | Fast, efficient coding
Codex CLI          | OpenAI       | ✅ Yes       | Code review and analysis
Gemini CLI         | Google       | ✅ Yes       | Multi-modal capabilities
GitHub Copilot CLI | GitHub       | ✅ Yes       | IDE-integrated workflows
Custom CLI         | Any          | Configurable | Bring your own CLI tool

Why Use CLI Runners?

CLI Runners offer several advantages:

  • Extended capabilities - CLI tools may have features not available via API
  • Local processing - Some CLI tools run entirely locally
  • MCP integration - Access to file system, git, and custom tools
  • Cost control - Some CLI tools offer different pricing models
  • Specialized agents - Different CLI tools excel at different tasks

Configuration

Adding a CLI Runner

  1. Open Settings → CLI Runners
  2. Click Add CLI Runner
  3. Select the CLI type or choose "Custom" for unsupported tools
  4. Configure the executable path and optional arguments
  5. Test the connection
  6. Enable the runner for use in chat and workflows

Configuration Options

Each CLI runner supports these settings:

Setting               | Description                  | Example
----------------------|------------------------------|------------------------------
Name                  | Display name in the UI       | "Claude (Local)"
Executable            | Path to the CLI binary       | /usr/local/bin/claude
Arguments             | Additional CLI arguments     | ["--verbose"]
Environment Variables | Custom env vars              | {"ANTHROPIC_API_KEY": "..."}
MCP Config Flag       | Flag for MCP config path     | --mcp-config
Enabled               | Whether the runner is active |
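
Assembled into a single entry, a runner configuration might look like the following (field names here are illustrative, not the exact settings schema):

```json
{
  "name": "Claude (Local)",
  "executable": "/usr/local/bin/claude",
  "arguments": ["--verbose"],
  "environmentVariables": { "ANTHROPIC_API_KEY": "..." },
  "mcpConfigFlag": "--mcp-config",
  "enabled": true
}
```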

MCP Configuration Generation

When a CLI runner is used, GenieBuilder automatically:

  1. Generates a temporary MCP config JSON file
  2. Includes all enabled MCP servers (Backlog, custom servers)
  3. Passes the config path to the CLI via the configured flag
  4. Cleans up the temporary file after execution

Example generated MCP config:

{
  "mcpServers": {
    "backlog": {
      "command": "backlog",
      "args": ["mcp", "start"]
    },
    "custom-server": {
      "command": "custom-mcp",
      "args": ["--port", "8080"]
    }
  }
}
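
The four generation steps above can be sketched as follows. This is a minimal illustration, not GenieBuilder's internal code; the function name and the shape of the server list are assumptions:

```python
import json
import os
import subprocess
import tempfile

def run_with_mcp_config(executable, args, mcp_servers, mcp_flag="--mcp-config"):
    """Write a temporary MCP config, pass its path to the CLI, then clean up."""
    config = {"mcpServers": mcp_servers}
    fd, path = tempfile.mkstemp(suffix=".json")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(config, f)
        # Pass the config path to the CLI via the runner's configured flag.
        result = subprocess.run(
            [executable, *args, mcp_flag, path],
            capture_output=True, text=True,
        )
        return result.stdout
    finally:
        os.remove(path)  # Clean up the temporary file even on failure.
```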

CLI Runner Details

Claude Code

Anthropic's Claude Code CLI provides agentic coding capabilities.

Installation: npm install -g @anthropic-ai/claude-code

Default configuration:

  • Executable: claude
  • Prompt delivery: stdin (piped)
  • Output format: stream-json
  • MCP flag: --mcp-config

Example usage:

echo "prompt" | claude -p --mcp-config config.json --output-format stream-json

Features:

  • Rich streaming JSON output
  • Built-in tool use capabilities
  • File editing and shell command execution
  • Automatic context management

Kimi CLI

Moonshot AI's Kimi CLI offers efficient code generation and editing.

Installation: Follow the instructions in the kimi-cli repository

Default configuration:

  • Executable: kimi
  • Prompt delivery: args (--prompt flag)
  • Output format: stream-json
  • MCP flag: --mcp-config

Example usage:

kimi --yolo --print --output-format stream-json --prompt "hello"

Features:

  • --yolo mode for automatic execution
  • Clean JSONL output
  • Fast response times

Codex CLI

OpenAI's Codex CLI is designed for code review and implementation.

Installation: npm install -g @openai/codex

Default configuration:

  • Executable: codex
  • Prompt delivery: args (trailing positional)
  • Output format: json
  • MCP flag: --mcp-config

Example usage:

codex exec --model gpt-5.3-codex --full-auto --json "prompt"

Features:

  • --full-auto mode for automatic execution
  • JSONL event stream
  • Acceptable exit codes: [1] (may exit 1 after successful completion)

Gemini CLI

Google's Gemini CLI provides access to Gemini models.

Default configuration:

  • Executable: gemini
  • Prompt delivery: stdin (piped)
  • Output format: Plain text
  • MCP flag: --mcp-config

Example usage:

echo "prompt" | gemini --mcp-config config.json

GitHub Copilot CLI

GitHub Copilot's CLI for command-line AI assistance.

Default configuration:

  • Executable: copilot
  • Prompt delivery: stdin (piped)
  • Output format: Plain text
  • MCP flag: --additional-mcp-config

Example usage:

echo "prompt" | copilot --allow-all --additional-mcp-config @config.json

Note: MCP config path is prefixed with @ for Copilot CLI.

Custom CLI Runner

For CLI tools not natively supported, use the Custom CLI Runner type.

Configuration options:

  • Prompt Delivery: stdin or args
  • Output Parsing: json or plain
  • Line Delimiter: Character for splitting output lines
  • Exit Code Handling: Which exit codes indicate success

Example custom configuration:

Name: My Custom AI
Executable: /path/to/my-ai-cli
Arguments: ["--mode", "agent"]
Environment Variables:
  MY_AI_KEY: "sk-..."
Prompt Delivery: stdin
MCP Config Flag: --mcp-config
Output Format: plain

Using CLI Runners

In Chat

CLI runners appear in the provider dropdown alongside cloud providers:

  1. Open a chat tab
  2. Click the provider selector
  3. Choose a CLI runner (e.g., "Claude")
  4. Send messages as normal

The CLI runner executes the prompt and streams back the response.
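
Under the hood, the two prompt-delivery modes work roughly like this. The sketch below assumes a simple blocking invocation; the actual streaming implementation is not shown in this document:

```python
import subprocess

def invoke_cli(executable, args, prompt, delivery="stdin"):
    """Deliver the prompt via stdin (piped) or as a trailing positional argument."""
    if delivery == "stdin":
        # Equivalent to: echo "prompt" | <executable> <args...>
        proc = subprocess.run([executable, *args], input=prompt,
                              capture_output=True, text=True)
    else:  # "args"
        # Equivalent to: <executable> <args...> "prompt"
        proc = subprocess.run([executable, *args, prompt],
                              capture_output=True, text=True)
    return proc.stdout
```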

In Workflows

CLI runners can be assigned to workflow steps:

  1. Create or edit a workflow
  2. Select a step
  3. Set executor type to "CLI Runner"
  4. Choose the runner from the dropdown
  5. Optionally specify a model (if the CLI supports it)

Example workflow step configuration:

{
  "id": "step-implement",
  "type": "step",
  "label": "Implement",
  "executor": {
    "type": "cli-runner",
    "id": "claude"
  },
  "toolSet": {
    "git": true,
    "backlog": true,
    "shell": true
  },
  "promptTemplate": "Implement the following task..."
}

Multi-Agent Workflows

A powerful pattern is using different CLI runners for different steps:

[Implement Step] → Uses Claude (implementation focus)
[Review Step]    → Uses Codex (review focus)
[Fix Step]       → Uses Claude (implementation focus)

This leverages each tool's strengths while maintaining a cohesive workflow.
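
In workflow JSON, this pattern amounts to giving each step a different executor id (the step and runner ids below are illustrative):

```json
[
  {
    "id": "step-implement",
    "type": "step",
    "label": "Implement",
    "executor": { "type": "cli-runner", "id": "claude" }
  },
  {
    "id": "step-review",
    "type": "step",
    "label": "Review",
    "executor": { "type": "cli-runner", "id": "codex" }
  }
]
```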

Output Parsing

Each CLI runner type has specialized output parsing:

JSON Stream Parsing

Claude, Kimi, and Codex output structured JSON events:

{"type": "assistant", "message": {"content": [{"type": "text", "text": "Hello"}]}}

GenieBuilder extracts text content from these events for display.
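
Extracting display text from such an event can be sketched as follows. The event shape follows the example above; the real parser handles more event types and is an assumption here:

```python
import json

def extract_text(line):
    """Pull displayable text out of a single stream-json event line."""
    event = json.loads(line)
    if event.get("type") != "assistant":
        return ""  # Ignore non-assistant events (system, tool results, etc.)
    parts = event.get("message", {}).get("content", [])
    return "".join(p["text"] for p in parts if p.get("type") == "text")
```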

Plain Text Parsing

Gemini and Copilot output plain text that is displayed directly.

ANSI Handling

All CLI runners strip ANSI escape codes from output for clean display.
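
Stripping is typically a regular expression over the escape sequences. The pattern below covers the common CSI sequences (colors, cursor movement); it is a common approach, not necessarily GenieBuilder's exact implementation:

```python
import re

# Matches CSI escape sequences such as "\x1b[31m" or "\x1b[2K".
ANSI_RE = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")

def strip_ansi(text):
    """Remove ANSI escape codes so terminal output displays cleanly."""
    return ANSI_RE.sub("", text)
```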

Troubleshooting

Connection Test Failed

If the connection test fails:

  1. Verify executable path - Run which <cli-name> in terminal
  2. Check permissions - Ensure the binary is executable
  3. Test manually - Try running the CLI directly in terminal
  4. Check environment - Verify required env vars are set

No MCP Tools Available

If the CLI runner doesn't have access to MCP tools:

  1. Verify the MCP config flag is correct for your CLI version
  2. Check that MCP servers are enabled in Settings
  3. Review the generated MCP config file (enable debug logging)

Empty or Garbled Output

If output appears empty or garbled:

  1. Check output format - Some CLIs need explicit --json or --output-format flags
  2. Verify prompt delivery - Ensure stdin vs args is correct for the CLI
  3. Review exit codes - Some CLIs exit non-zero on success (e.g., Codex)

Timeout Issues

CLI runners have a 30-second timeout for connection tests. For long-running operations:

  1. Use the chat interface rather than quick tests
  2. Ensure the CLI is warmed up (first run may be slower)
  3. Check network connectivity for cloud-based CLIs

Security Considerations

  • CLI runners execute with the same permissions as the GenieBuilder process
  • Environment variables may contain sensitive API keys
  • Temporary MCP config files are cleaned up after execution
  • Tool approvals still apply for mutating MCP operations

Best Practices

  1. Use absolute paths for executables to avoid PATH issues
  2. Test each runner individually before using in workflows
  3. Start with simple prompts to verify basic connectivity
  4. Leverage tool sets - Only grant necessary permissions
  5. Monitor conversation logs for debugging complex workflows