v0.1.0 — phase 1 prototype ✓

Ship deploys by talking
to your agent.

An AI-agent-driven deployment orchestrator for multi-host bare metal. One YAML, one Python control plane, one Go daemon per host. That's it.

version: 1
hosts:
  - name: web-01
    addr: 10.0.0.12
components:
  - name: api
    runner: docker
    image: ghcr.io/acme/api:v2
    env:
      DATABASE_URL: ${secrets.pg}
  - name: caddy
    runner: systemd
    unit: caddy.service
$ curl -fsSL playmaestro.cloud/install.sh \
    | sudo bash
  ✓ control plane installed
  ✓ listening on :7070
  → token: mst_a91f…  (keep this)

$ maestro apply deployment.yaml
  validate  ok
  diff      2 changes
  apply     ✓ web-01
agent (claude code · cursor · copilot)
   │ MCP
control plane (python · sqlite · ws hub)
   │ WS
daemons (host₁ · host₂ · host₃)

Three reasons, no more.

01

Fewer tokens, more deploys.

Agents don't shell out for every command. They speak MCP verbs — validate, diff, apply, rollback — and get back structured results. Simpler than Ansible for the common cases, less chatty than a free-range LLM.
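For illustration, a structured diff result might look like this. The field names are assumptions for the sketch, not the published schema:

```python
# Hypothetical shape of a structured diff result returned over MCP.
# Field names are illustrative; the real schema may differ.
diff_result = {
    "changes": [
        {"component": "api", "host": "web-01", "action": "update",
         "from": "ghcr.io/acme/api:v1", "to": "ghcr.io/acme/api:v2"},
        {"component": "caddy", "host": "web-01", "action": "create"},
    ],
    "summary": "2 changes",
}

# An agent can reason over the plan without parsing terminal output.
for change in diff_result["changes"]:
    print(change["action"], change["component"], "on", change["host"])
```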

02

One YAML. No DSL.

A readable schema, git-diffable, no Jinja where it isn't earned. If you can read a Docker Compose file, you can read a Maestro deployment.

03

Bare-metal-native.

A static Go daemon per host, wired up via systemd or launchd. No cluster. No agent proliferation. The daemon is 12 MB and has one job.
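The daemon installer handles this wiring for you; purely as a sketch, the systemd side could look like the unit below. The unit name, binary path, and exact fields are assumptions, not the installer's actual output:

```ini
; /etc/systemd/system/maestro-daemon.service — illustrative sketch only
[Unit]
Description=Maestro daemon
After=network-online.target

[Service]
ExecStart=/usr/local/bin/maestro-daemon
Environment=MAESTRO_TOKEN=mst_…
Restart=always

[Install]
WantedBy=multi-user.target
```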

Three commands.

  1. Install the control plane

    On a host you can reach from your laptop. SQLite-backed, listens on :7070.

    $ curl -fsSL playmaestro.cloud/install.sh | sudo bash
  2. Install the daemon on each host

    Copy the token from the control plane logs. Today this is a manual step — Layer 2 enrollment arrives in Phase 2.

    $ curl -fsSL playmaestro.cloud/daemon.sh | sudo MAESTRO_TOKEN=mst_… bash
  3. Apply your first deployment

Write a YAML file, then apply it. validate and diff run automatically before apply.

    $ maestro apply deployment.yaml

validate → diff → apply → observe

Every change goes through the same four-step cycle. Validate type-checks the YAML against the schema. Diff asks each daemon what would change on its host and returns a structured plan. Apply executes the plan. Observe streams runner output back to the control plane.

Agents can stop at any stage. Most conversations end after diff — the agent describes the plan, the human says "go", then apply runs. Rollback uses the same pipeline with the previous spec as target.
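In pseudocode terms, an agent-driven run of that cycle looks roughly like this. The `call` helper and the return shapes are assumptions for the sketch; only the verb names come from the document:

```python
# Sketch of the validate → diff → apply cycle as an agent drives it.
# `call` stands in for an MCP tool invocation and returns canned
# results; real return shapes are assumptions, not the published API.

def call(verb: str, **args) -> dict:
    """Placeholder for a JSON-RPC tools/call round trip."""
    canned = {
        "maestro_validate": {"ok": True},
        "maestro_diff": {"ok": True, "changes": ["~ api v1.4 -> v1.5", "+ caddy"]},
        "maestro_apply": {"ok": True, "hosts": ["web-01", "web-02"]},
    }
    return canned[verb]

def deploy(env: str, confirm) -> dict:
    assert call("maestro_validate", env=env)["ok"]   # type-check the spec
    plan = call("maestro_diff", env=env)             # structured plan per host
    if not plan["changes"] or not confirm(plan):     # most conversations stop here
        return {"applied": False}
    return {"applied": True, **call("maestro_apply", env=env)}

# The human's "go" becomes the confirm callback.
result = deploy("staging", confirm=lambda plan: True)
```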

deployment.yaml (source of truth · git)
   │
control plane (python · sqlite · mcp)
   │ WebSocket · validate / diff / apply / observe
   ├─ daemon (host₁ · go)
   ├─ daemon (host₂ · go)
   └─ daemon (host₃ · go)

What's actually running.

Control plane
python 3.11 · fastapi · sqlite
Stateful. Owns the spec, the plan, and the event log. PostgreSQL in Phase 3.
WebSocket hub
ws · mTLS in phase 3
Long-lived bidirectional channel between CP and each daemon. Today: bearer-token auth.
MCP server
model-context-protocol
Exposes the verb surface to LLM agents. Speaks JSON-RPC over stdio or SSE.
Daemon
go · static binary · ~12 MB
Runs on each managed host under systemd or launchd. No external dependencies.
Runners
docker · systemd · kubernetes phase 3
Per-component execution backends. Pick the right tool for the component, not the whole cluster.
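Conceptually, runner selection is a per-component dispatch. A minimal sketch (the real daemon is Go; the function and command strings here are illustrative, not its actual code):

```python
# Conceptual per-component runner dispatch, mirroring the YAML schema:
# each component names its runner, and the daemon picks the backend.

def start(component: dict) -> str:
    runner = component["runner"]
    if runner == "docker":
        return f"docker run -d {component['image']}"
    if runner == "systemd":
        return f"systemctl restart {component['unit']}"
    raise ValueError(f"unknown runner: {runner}")

print(start({"name": "api", "runner": "docker", "image": "ghcr.io/acme/api:v2"}))
# → docker run -d ghcr.io/acme/api:v2
```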

Built to be driven by an LLM.

Maestro exposes a finite verb surface over MCP. Agents don't shell out for every command — they call the verbs, get structured results, and decide what to do next. The bundled SKILL.md teaches them the conventions.
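Under the hood, each verb is a standard MCP tool call. A sketch of the JSON-RPC message an agent's client frames over stdio; the envelope is standard MCP, while the argument names are assumptions:

```python
import json

# A JSON-RPC 2.0 request for an MCP tools/call, one JSON object per
# line over stdio. The tool name matches the verbs above; the
# argument names are assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "maestro_diff",
        "arguments": {"env": "staging"},
    },
}

line = json.dumps(request)   # newline-delimited on the wire
print(line)
```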

chat agent ⟶ maestro-mcp
you
Deploy the api to staging.
agent
Reading deployment.yaml. I'll validate, then show you the diff before applying.
maestro_validate env="staging" → ok
maestro_diff env="staging" → 2 changes
Two changes: bump api from v1.4 to v1.5, and add the new caddy component on both hosts. Shall I apply?
you
Go.
agent
maestro_apply env="staging" → running
Streaming from maestro watch
$ maestro watch --env staging live
14:02:11 validate env=staging
14:02:11   → ok (2 hosts, 3 components)
14:02:12 diff env=staging
14:02:12   ~ api   v1.4 → v1.5
14:02:12   + caddy (new)
14:02:18 apply web-01
14:02:21   ✓ api    (3.1s)
14:02:23   ✓ caddy  (1.4s)
14:02:23 apply web-02
14:02:26   ✓ api    (2.9s)
14:02:28   ✓ caddy  (1.1s)
14:02:28 done (17s) · 2 hosts · 0 errors

Works with any MCP-capable client. Tested with Claude Code and Cursor. JSON-RPC over stdio; SSE for streaming.
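For Claude Code, that means registering the server in a project's .mcp.json, along these lines. The command and args are assumptions; check your install for the real entry point:

```json
{
  "mcpServers": {
    "maestro": {
      "command": "maestro",
      "args": ["mcp", "serve"]
    }
  }
}
```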

When it's ready.

No dates. Phase 1 shipped as v0.1.0. Phase 2 is in progress on main.

✓ phase 1

Prototype

v0.1.0 · shipped
  • Control plane + daemon + MCP server
  • YAML schema v1
  • Docker and systemd runners
  • One-command install
  • validate / diff / apply / rollback
◐ phase 2

Beta

in progress
  • Token-based enrollment (Layer 2)
  • Auth + RBAC for CP
  • Structured observability
  • Secret references
  • CLI ergonomics pass
○ phase 3

Production

planned
  • PostgreSQL backend
  • Kubernetes runner
  • mTLS CP ↔ daemon
  • HA control plane
  • Dedicated CLI binary

Honest caveat: today's enrollment flow is "copy the token from the CP logs." Phase 2 fixes that. We won't pretend otherwise.