# Automate-E Platform
Automate-E is the shared agent runtime. Same Docker image, different character.json per agent. All rig agents run on Automate-E.
Repo: Stig-Johnny/automate-e (existing, to be evolved)
## Three-Layer Architecture
```text
┌──────────────────────────────────────────────────────────────┐
│ Layer 1: Automate-E Runtime (shared image, Node.js)          │
│                                                              │
│ Messaging Adapters      Agent Loop       LLM Gateway         │
│ ┌──────────┐           ┌─────────┐     ┌──────────────┐      │
│ │ Discord  │◀───┐      │ Process │     │ OpenClaw     │      │
│ │ Adapter  │    │      │ message │──▶  │ (all models  │      │
│ └──────────┘    │      │ → tools │     │  Claude,     │      │
│ ┌──────────┐    ├───▶  │ → reply │     │  GPT, etc.)  │      │
│ │ Slack    │    │      └─────────┘     └──────────────┘      │
│ │ Adapter  │◀───┘                                            │
│ └──────────┘           ┌─────────┐     ┌──────────────┐      │
│ ┌──────────┐           │ Memory  │     │ Dashboard    │      │
│ │ Future   │           │(Postgres│     │ (WebSocket)  │      │
│ │ (Teams)  │           │ + Redis)│     │              │      │
│ └──────────┘           └─────────┘     └──────────────┘      │
│                                                              │
│ Webhooks    CronJobs    Health    Tool Executor              │
└──────────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────────┐
│ Layer 2: Agent Config (per agent, no code)                   │
│                                                              │
│ character.json     values.yaml        tools config           │
│ (personality,      (k8s resources,    (HTTP endpoints,       │
│  platform,          secrets,           MCP servers)          │
│  model,             replicas)                                │
│  channels)                                                   │
└──────────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────────┐
│ Layer 3: Backend APIs (per agent, when needed)               │
│                                                              │
│ Conductor-E API (.NET 10)        Book-E API (existing)       │
│ - Marten event store             - Accounting integration    │
│ - Assignment engine              - Receipt processing        │
│ - Escalation logic               - Fiken/Folio API           │
│ - Human gate checker                                         │
│ - Priority queue                                             │
└──────────────────────────────────────────────────────────────┘
```
## Current State → Target State
| Feature | Current (v0.1) | Target |
|---|---|---|
| Messaging | Discord only (discord.js) | Adapter interface: Discord, Slack |
| LLM | @anthropic-ai/sdk direct | OpenClaw gateway (all models) |
| Config | Discord-specific character.json | Platform-agnostic config |
| Production | Book-E on GCP k3s | Book-E + all rig agents |
## Platform Evolution
### Messaging Adapter Interface
The current Automate-E codebase has Discord (discord.js) calls hardcoded throughout. Target: abstract messaging behind an adapter interface.
```typescript
interface MessagingAdapter {
  connect(): Promise<void>
  sendMessage(channel: string, content: string): Promise<void>
  onMessage(handler: (msg: Message) => void): void
  createThread(channel: string, name: string): Promise<string>
  addReaction(channel: string, messageId: string, emoji: string): Promise<void>
}

class DiscordAdapter implements MessagingAdapter { ... }
class SlackAdapter implements MessagingAdapter { ... }
```
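As a sanity check of the interface shape, here is a minimal sketch showing that the agent loop can be written against `MessagingAdapter` without knowing the platform. The `MemoryAdapter` and `wireEcho` names are hypothetical (not part of the repo), and the interface is trimmed to three methods for brevity:

```typescript
// Trimmed copy of the adapter interface (threads/reactions omitted).
type Message = { channel: string; content: string };

interface MessagingAdapter {
  connect(): Promise<void>;
  sendMessage(channel: string, content: string): Promise<void>;
  onMessage(handler: (msg: Message) => void): void;
}

// Hypothetical in-memory adapter: lets the agent loop be unit-tested
// without Discord or Slack credentials.
class MemoryAdapter implements MessagingAdapter {
  sent: Message[] = [];
  private handlers: ((msg: Message) => void)[] = [];

  async connect(): Promise<void> {}

  async sendMessage(channel: string, content: string): Promise<void> {
    this.sent.push({ channel, content });
  }

  onMessage(handler: (msg: Message) => void): void {
    this.handlers.push(handler);
  }

  // Test hook: simulate an inbound platform message.
  inject(msg: Message): void {
    for (const h of this.handlers) h(msg);
  }
}

// The loop only sees the interface, never discord.js or Slack APIs.
function wireEcho(adapter: MessagingAdapter): void {
  adapter.onMessage((msg) => {
    void adapter.sendMessage(msg.channel, `ack: ${msg.content}`);
  });
}
```

Swapping `MemoryAdapter` for `DiscordAdapter` or `SlackAdapter` requires no change to the wiring code.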
Configured in character.json:
```json
{
  "messaging": {
    "platform": "discord",
    "config": {
      "guildId": "1477260986854498477",
      "channels": {
        "tasks": "1477261755125075969",
        "admin": "1477263870845517879"
      }
    }
  }
}
```
Swap to Slack:
```json
{
  "messaging": {
    "platform": "slack",
    "config": {
      "workspace": "dashecorp",
      "channels": {
        "tasks": "C0123TASKS",
        "admin": "C0123ADMIN"
      }
    }
  }
}
```
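At startup, the runtime has to turn the `messaging.platform` string into a concrete adapter. One way to sketch that (names like `createAdapter` and the stub classes are illustrative, not the repo's actual code) is a registry keyed by platform, so adding Teams later is one registry entry rather than a change to the agent loop:

```typescript
interface MessagingAdapter {
  connect(): Promise<void>;
}

// Stub adapters: real implementations would wrap discord.js / Slack APIs.
class DiscordAdapter implements MessagingAdapter {
  constructor(readonly config: Record<string, unknown>) {}
  async connect(): Promise<void> { /* discord.js login would go here */ }
}

class SlackAdapter implements MessagingAdapter {
  constructor(readonly config: Record<string, unknown>) {}
  async connect(): Promise<void> { /* Slack client login would go here */ }
}

// Hypothetical registry: platform string → adapter constructor.
const registry: Record<
  string,
  new (config: Record<string, unknown>) => MessagingAdapter
> = {
  discord: DiscordAdapter,
  slack: SlackAdapter,
};

// Resolve the adapter from character.json's "messaging" section.
function createAdapter(messaging: {
  platform: string;
  config: Record<string, unknown>;
}): MessagingAdapter {
  const Ctor = registry[messaging.platform];
  if (!Ctor) throw new Error(`unsupported platform: ${messaging.platform}`);
  return new Ctor(messaging.config);
}
```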
### LLM via OpenClaw
Current: direct `@anthropic-ai/sdk` calls. Target: route all completions through OpenClaw.
```json
{
  "llm": {
    "provider": "openclaw",
    "endpoint": "http://openclaw.gateway:8080/v1",
    "model": "claude-opus-4-6",
    "fallbackModel": "claude-sonnet-4-6",
    "temperature": 0.3,
    "maxTokens": 8192
  }
}
```
Benefits:
- Model flexibility — switch models per agent without code changes
- Cost tracking — OpenClaw tracks usage per agent
- Routing — OpenClaw can route to cheapest/fastest model per task
- Fallback — automatic fallback on model unavailability
- All providers — Claude, GPT, Gemini, local models — whatever OpenClaw supports
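A minimal sketch of how the runtime could honor `fallbackModel` on the client side (whether OpenClaw also falls back server-side is not specified here; this is an assumption). The actual HTTP call is injected as a function, which keeps the fallback logic testable offline. `completeWithFallback` and `CallModel` are illustrative names:

```typescript
type LlmConfig = { model: string; fallbackModel?: string };

// Injected transport: in production this would POST to the OpenClaw
// endpoint; in tests it can be a stub.
type CallModel = (model: string, prompt: string) => Promise<string>;

async function completeWithFallback(
  cfg: LlmConfig,
  call: CallModel,
  prompt: string,
): Promise<string> {
  try {
    return await call(cfg.model, prompt);
  } catch (err) {
    // Primary model unavailable: retry once on the fallback model,
    // or rethrow if no fallback is configured.
    if (!cfg.fallbackModel) throw err;
    return await call(cfg.fallbackModel, prompt);
  }
}
```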
### character.json Full Schema
```json
{
  "name": "Conductor-E",
  "description": "Engineering rig coordinator",
  "personality": "Precise, efficient coordinator. Speaks in short, factual messages.",
  "messaging": {
    "platform": "discord | slack",
    "config": { }
  },
  "llm": {
    "provider": "openclaw | anthropic | openai",
    "endpoint": "http://...",
    "model": "claude-opus-4-6",
    "temperature": 0.3,
    "maxTokens": 8192
  },
  "memory": {
    "conversationRetention": "30d",
    "patternRetention": "indefinite",
    "historyRetention": "5y"
  },
  "tools": [
    {
      "type": "http",
      "name": "event-store",
      "description": "Submit and query events from the engineering rig event store",
      "baseUrl": "http://conductor-e-api:8080",
      "endpoints": [
        { "method": "POST", "path": "/api/events", "description": "Submit an event" },
        { "method": "GET", "path": "/api/queue", "description": "Get priority queue" },
        { "method": "GET", "path": "/api/agents", "description": "Get agent status" }
      ]
    },
    {
      "type": "mcp",
      "name": "github",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  ],
  "cron": [
    {
      "schedule": "*/1 * * * *",
      "prompt": "Run the coordination loop: check milestones, agent health, stuck agents, human gates, assign work."
    }
  ],
  "webhooks": [
    {
      "path": "/github",
      "events": ["issues", "pull_request", "check_run"],
      "prompt": "Process this GitHub event: {{payload}}"
    }
  ]
}
```
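Since a bad character.json otherwise only surfaces at first message or first cron tick, a fail-fast startup check is worth sketching. `validateCharacter` is a hypothetical helper (the repo's actual validation, if any, may differ); it checks only the top-level fields the schema above marks as essential, treating `tools`, `cron`, and `webhooks` as optional:

```typescript
// Shape of the fields we validate; everything else passes through.
type Character = {
  name?: string;
  messaging?: { platform?: string };
  llm?: { provider?: string; model?: string };
  [key: string]: unknown;
};

// Returns a list of problems; empty list means the config is usable.
function validateCharacter(c: Character): string[] {
  const errors: string[] = [];
  if (!c.name) errors.push("name is required");
  if (!c.messaging?.platform) errors.push("messaging.platform is required");
  if (!c.llm?.provider) errors.push("llm.provider is required");
  if (!c.llm?.model) errors.push("llm.model is required");
  return errors;
}
```

At boot the runtime would call this once and refuse to start (with all errors listed) rather than crash mid-conversation.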
## How Each Agent Uses Automate-E
| Agent | character.json | Backend API | Notes |
|---|---|---|---|
| Conductor-E | Coordinator personality, cron every 60s, GitHub webhook, event store tools | .NET API (Marten event store, assignment, escalation) | The brain |
| Dev-E | Developer personality, Claude Code CLI tool, GitHub tools | None — runs Claude Code directly | The hands |
| Book-E | Accountant personality, accounting tools | Existing accounting API | Already running |
| Review-E | — | — | Stays on agent-runner (Pi4-02) for now |
| Monitor-E | Monitor personality, health check tools, alerting | Health check API (Phase 3) | Phase 3 |
| Architect-E | Architect personality, analytics tools | Analytics API (Phase 4) | Phase 4 |
## Dashboard
Automate-E's built-in dashboard provides per-agent:
- Live log stream (WebSocket)
- Active sessions
- Tool call history
- Token usage + cost tracking
- Conversation threads
Conductor-E's backend API adds rig-specific views:
- Priority queue (kanban)
- Agent assignment status
- Escalation panel
- Human gate queue
- Milestone progress
- Event store browser
Both dashboards accessible at rig.dashecorp.com (Cloudflare Access).
## Deployment
All agents deploy as Helm releases from the same Automate-E chart:
```yaml
# rig-gitops/apps/conductor-e/helmrelease.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: conductor-e
  namespace: rig
spec:
  chart:
    spec:
      chart: charts/automate-e
      sourceRef:
        kind: GitRepository
        name: automate-e
  values:
    character: conductor-e
    image:
      repository: ghcr.io/stig-johnny/automate-e
      tag: latest
    resources:
      requests: { cpu: 100m, memory: 256Mi }
      limits: { cpu: 500m, memory: 512Mi }
    persistence:
      enabled: true
      size: 1Gi
```
```yaml
# rig-gitops/apps/dev-e/helmrelease.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: dev-e
  namespace: rig
spec:
  chart:
    spec:
      chart: charts/automate-e
      sourceRef:
        kind: GitRepository
        name: automate-e
  values:
    character: dev-e
    image:
      repository: ghcr.io/stig-johnny/automate-e
      tag: latest
    resources:
      requests: { cpu: 500m, memory: 512Mi }
      limits: { cpu: "2", memory: 4Gi }
```