Geoffrussy User Manual

The official guide to operating the Ussyverse's AI development orchestrator.

Full documentation on GitHub

01. Installation

Geoffrussy is written in Go and distributed as a single binary with no external dependencies beyond an internet connection for API access. You can install via the Go toolchain or download a pre-built binary from GitHub Releases.

# Install via Go toolchain (requires Go 1.21+)
$ go install github.com/mojomast/geoffrussy@latest

# Or build from source
$ git clone https://github.com/mojomast/geoffrussy.git
$ cd geoffrussy && go build -o geoffrussy .

# Verify installation
$ geoffrussy version

02. Initialization

Before your first run, initialize the configuration. This sets up your model routing table — which AI models to use for the Planner (architecture and reasoning) and the Executor (code generation).

Strategy Guide: Use a high-reasoning model for the Planner — Claude, GPT-4, or Gemini Pro work well. For the Executor, configure a fast, cheap model like Llama or GLM via Ollama to keep costs low while maintaining velocity. Geoffrussy supports OpenAI, Anthropic, GLM, and Ollama providers.

$ geoffrussy init

This creates a ~/.geoffrussy/config.yaml file where you define your model routing, API keys, and preferences.
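To give a feel for the routing-table idea, here is a hypothetical sketch of what such a config might look like. The field names, provider keys, and model IDs below are illustrative assumptions, not Geoffrussy's confirmed schema; inspect the file that `geoffrussy init` actually generates for the real keys.

```yaml
# Hypothetical config.yaml sketch -- field names are illustrative,
# not Geoffrussy's confirmed schema.
models:
  planner:
    provider: anthropic        # high-reasoning model for architecture and planning
    model: claude-sonnet       # assumed model ID
  executor:
    provider: ollama           # fast, cheap local model for code generation
    model: llama3              # assumed model ID
api_keys:
  anthropic: ${ANTHROPIC_API_KEY}   # read from the environment, not hard-coded
```

The split mirrors the strategy guide above: an expensive model only where reasoning quality matters, a cheap one for the high-volume code-generation path.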

03. The Interview (DevUssy Protocol)

This is where the magic happens. Instead of writing a prompt, you enter a dialogue. Geoffrussy uses the integrated DevUssy engine to run a five-phase interactive interview about your project.

mojo@linux:~$ geoffrussy interview
[GEOFF] Initializing Interview Protocol...
[GEOFF] Hello! I'm ready to architect your next system. What are we building today?
> I want to build a CLI tool that tracks my coffee consumption.
[GEOFF] Interesting. A few questions to clarify scope:
  1. Should this data be stored locally (SQLite/JSON) or in the cloud?
  2. Do you need visualization (charts/graphs) in the terminal?
  3. What specific metrics matter? (Caffeine content, brand, time of day?)

After the interview, Geoffrussy generates a devplan.md — a structured development plan with phases, tasks, acceptance criteria, and architecture decisions. This is the core artifact that drives everything downstream.
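The shape of a devplan.md will vary by project, but the outline below sketches the kind of structure described above (phases, tasks, acceptance criteria, architecture decisions). It is an illustrative mock-up, not verbatim Geoffrussy output.

```markdown
# DevPlan: coffee-tracker CLI   (illustrative outline, not verbatim output)

## Phase 1: Core data model
- [ ] Task 1.1: Define the consumption record (timestamp, drink, caffeine mg)
  - Acceptance: records round-trip through local storage without loss

## Phase 2: CLI commands
- [ ] Task 2.1: `log` and `stats` subcommands
  - Acceptance: `stats` reports daily caffeine totals

## Architecture Decisions
- Storage: local SQLite (no cloud dependency)
```

Because the plan is a plain file, you can review and edit it by hand before approving execution.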

04. Execution

Once you approve the plan, Geoffrussy begins the orchestration loop. The full pipeline is: interview -> design -> plan -> review -> develop.

The Planner

Reads the DevPlan and uses your configured high-reasoning model (Claude, GPT-4, Gemini) to reason about dependencies, architectural integrity, and task ordering.

The Executor

Dispatches implementation tasks to fast, cost-effective models (Llama, GLM, or any Ollama-compatible model). Built-in cost tracking shows exactly what each step costs.
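Per-step cost tracking is, at bottom, token counts multiplied by per-token prices. The sketch below shows that arithmetic with assumed per-million-token prices; the numbers are placeholders for illustration, not real provider rates or Geoffrussy's internal accounting.

```shell
# Back-of-the-envelope step cost: tokens times per-million-token price.
# Prices below are assumptions for illustration, not real provider rates.
in_tokens=12000; out_tokens=3000
in_price=3.00    # $ per 1M input tokens (assumed)
out_price=15.00  # $ per 1M output tokens (assumed)
awk -v it="$in_tokens" -v ot="$out_tokens" -v ip="$in_price" -v op="$out_price" \
  'BEGIN { printf "step cost: $%.4f\n", it/1e6*ip + ot/1e6*op }'
# prints: step cost: $0.0810
```

Routing bulk code generation to a local Ollama model drives the executor's term in this sum toward zero, which is why the planner/executor split keeps runs cheap.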

Additional Features

  • Checkpoint/Rollback: SQLite-backed state persistence lets you save and restore progress.
  • MCP Integration: Model Context Protocol support for autonomous AI agent workflows.
  • Cost Tracking: Real-time monitoring of token usage and API costs across all configured models.
  • Rate Limiting: Built-in rate limit monitoring and backoff to avoid API throttling.
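"Backoff" in the rate-limiting bullet usually means exponential backoff: doubling the wait after each throttled attempt, up to a cap. The sketch below illustrates that general technique; it is not Geoffrussy's actual retry policy.

```shell
# Generic exponential-backoff sketch: delay doubles per attempt, capped.
# Illustrates the technique, not Geoffrussy's actual retry policy.
backoff_delay() {
  attempt=$1
  cap=${2:-60}                      # default cap: 60 seconds
  delay=$(( 1 << attempt ))         # 1, 2, 4, 8, ... seconds
  [ "$delay" -gt "$cap" ] && delay=$cap
  echo "$delay"
}
backoff_delay 0    # -> 1
backoff_delay 3    # -> 8
backoff_delay 10   # -> 60 (capped)
```

Capping the delay keeps a long-running orchestration loop responsive once the provider's rate-limit window resets.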