Gazebo Simulation Stack

The Gazebo demo stack is a three-service Docker Compose configuration that runs a headless simulation without GPU or a real Gazebo installation. The robot-state-sim service posts synthetic tool calls to the AutonomyOps runtime; the runtime evaluates each call against the gazebo-demo bundle policy and logs allow / deny decisions to stdout and the WAL.

CE-safe by design

The stack runs end-to-end on a single laptop with the Community Edition binary surface and Docker:

  • No GPU. No --gpus, no device mappings, no nvidia-runtime. The compose file has GAZEBO_HEADLESS=true and ships zero CUDA dependencies.

  • No real Gazebo install. The robot-state-sim service is a synthetic tool-call emitter; nothing in the stack links against libgazebo.

  • Deterministic ticks. SIM_TICK_MS (default 500 ms) and SIM_MAX_TICKS drive a fixed cadence; identical runs produce identical decision streams.

  • No separate orchestrator install. The orchestrator service in the stack is a Docker container running the autonomy-orchestrator binary in-place; you do not install or operate a fleet orchestrator to run this demo. Everything starts with docker compose up.
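The fixed-cadence behaviour can be sketched as a small loop. This is a sketch only, assuming POSIX sh and a `sleep` that accepts fractional seconds; the shipped sim-loop.sh also posts the six tool calls on each tick, which is omitted here:

```shell
# Minimal sketch of a fixed-cadence tick loop (not the shipped sim-loop.sh).
run_ticks() {
  max="${1:-0}"       # 0 = run forever, mirroring SIM_MAX_TICKS
  tick_ms="${2:-500}" # mirroring SIM_TICK_MS (milliseconds)
  tick=0
  while [ "$max" -eq 0 ] || [ "$tick" -lt "$max" ]; do
    tick=$((tick + 1))
    printf 'tick %s\n' "$tick"
    sleep "$(awk "BEGIN { printf \"%.3f\", $tick_ms / 1000 }")"
  done
}

run_ticks 3 100   # three ticks, 100 ms apart: prints tick 1..3
```

Because the cadence and the per-tick call order are fixed, two runs with the same settings emit the same sequence of decisions.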

What this demo proves

A first run produces three observable proof signals — the same signals production governance relies on:

| Proof axis | What the demo shows |
| --- | --- |
| Governed tool execution | Every tool.gazebo.* / tool.ros2.* / lifecycle.* / telemetry.emit call is evaluated against the gazebo-demo Rego bundle before the runtime acts on it. |
| Allow / deny behaviour | Allowed kinds flow through with allow; tool.shell is denied on every tick (the fail-closed signal) regardless of context. |
| WAL evidence | Every decision is appended to the runtime’s WAL and surfaced via the in-stack orchestrator’s REST API on localhost:8889, queryable by curl or autonomy logs. |

Services

| Service | Port (host) | Purpose |
| --- | --- | --- |
| orchestrator | 8889 | Event store (SQLite, REST API) — runs as a Docker container, not a separate install |
| runtime | 7780 | Policy-gated tool execution; loads gazebo.tar on startup |
| robot-state-sim | — | Synthetic joint/sim-step event emitter |

Ports are offset from the main demo stack (8888/7777) to allow concurrent use.

Prerequisites

  • Docker and Docker Compose V2

  • autonomy CLI on PATH (for bundle inspection)

Quick start

docker compose -f demo/docker-compose.gazebo.yml up --build

Expected output (first 30 seconds; the addresses in these logs are container-internal, mapped to host ports 8889 and 7780):

orchestrator  | {"level":"info","msg":"listening","addr":"0.0.0.0:8888"}
runtime       | policy loaded: gazebo-demo v0.1.0
runtime       | {"level":"info","msg":"listening","addr":"0.0.0.0:7777"}
robot-state-sim | tick 1: tool.gazebo.step → allow
robot-state-sim | tick 1: tool.ros2.topic.publish /joint_states → allow
robot-state-sim | tick 1: telemetry.emit → allow
robot-state-sim | tick 1: tool.gazebo.get_model_state → allow
robot-state-sim | tick 1: tool.ros2.topic.publish /cmd_vel → allow
robot-state-sim | tick 1: tool.shell → deny
robot-state-sim | tick 2: …

The tool.shell deny on every tick demonstrates the fail-closed policy — shell execution is always rejected even in the sim context.

Stopping

docker compose -f demo/docker-compose.gazebo.yml down -v

The -v flag removes named volumes (gazebo-orchestrator-data, gazebo-wal-data) so the next run starts from a clean state.

Gazebo bundle policy

The gazebo-demo bundle (demo/bundles/gazebo.tar) carries demo/bundles/gazebo/policies/gazebo_safety.rego:

package autonomy
import rego.v1

default allow := false

allow if { startswith(input.kind, "tool.ros2.") }
allow if { startswith(input.kind, "tool.gazebo.") }
allow if { startswith(input.kind, "lifecycle.") }
allow if { input.kind == "telemetry.emit" }
allow if { input.kind == "tool.echo" }

Allowed tool kinds:

| Kind prefix | Example actions |
| --- | --- |
| tool.ros2.* | topic publish/subscribe, service call, launch |
| tool.gazebo.* | step, pause, reset, get_model_state, spawn, delete |
| lifecycle.* | start, stop, health, ready |
| telemetry.emit | WAL drain, OTLP export |
| tool.echo | health probes, heartbeats |

tool.shell and all other kinds are denied by the default allow := false.
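The same decision table can be mirrored in shell for intuition. This is an illustrative sketch only; the real evaluation happens inside the runtime's Rego engine, not in shell:

```shell
# Shell mirror of the gazebo_safety.rego allow rules: prefix matches first,
# then exact kinds, then the fail-closed default.
allow_kind() {
  case "$1" in
    tool.ros2.*|tool.gazebo.*|lifecycle.*) echo allow ;;
    telemetry.emit|tool.echo)              echo allow ;;
    *)                                     echo deny ;;  # default allow := false
  esac
}

allow_kind tool.gazebo.step   # → allow
allow_kind tool.shell         # → deny
```

Note that the default branch is what makes the policy fail-closed: any kind the rules do not name is denied, not allowed.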

Sim event cycle

The robot-state-sim service runs demo/robot-state-sim/sim-loop.sh. Each tick (controlled by SIM_TICK_MS, default 500 ms) posts six tool calls in order:

tool.gazebo.step             → allow
tool.ros2.topic.publish      → allow   (/joint_states)
telemetry.emit               → allow
tool.gazebo.get_model_state  → allow
tool.ros2.topic.publish      → allow   (/cmd_vel)
tool.shell                   → deny    (demonstrates fail-closed)

The loop runs until SIM_MAX_TICKS ticks complete (0 = run forever).

Environment variables

| Variable | Default | Description |
| --- | --- | --- |
| AUTONOMY_RUNTIME_URL | http://runtime:7777 | Runtime endpoint |
| SIM_WORLD | demo_world | World name sent in gazebo.step params |
| SIM_TICK_MS | 500 | Milliseconds between ticks |
| SIM_MAX_TICKS | 0 | Max ticks before exit (0 = infinite) |

Inspecting the bundle

# Inspect without a registry
autonomy bundle inspect demo/bundles/gazebo.tar --local

# Show the full Rego policy
autonomy bundle inspect demo/bundles/gazebo.tar --local --show-policy

Inspecting decisions

The in-stack orchestrator at localhost:8889 is the host-side window into the runtime’s WAL — it surfaces every allow/deny decision the runtime recorded. No paid-tier components, no external orchestrator install: the service is a Docker container started by docker compose up.

With the stack running:

# Fetch recent log entries
curl http://localhost:8889/v1/logs | python3 -m json.tool | head -40

# Stream new entries (SSE)
curl -H "Accept: text/event-stream" "http://localhost:8889/v1/logs?follow=true"

Using the CLI (the in-stack orchestrator URL is the same localhost:8889):

autonomy logs --orchestrator-url http://localhost:8889 --limit 20
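To pull just the denied decisions out of a log fetch, a small filter helps. The entry shape below ({"kind": …, "decision": …}) is an assumption for illustration; with the stack running you would pipe the curl output shown above instead of the inline sample:

```shell
# Print the kind of every denied decision in a JSON array of log entries.
denied_kinds() {
  python3 -c '
import json, sys
for entry in json.load(sys.stdin):
    if entry.get("decision") == "deny":
        print(entry["kind"])
'
}

sample='[{"kind":"tool.gazebo.step","decision":"allow"},{"kind":"tool.shell","decision":"deny"}]'
printf '%s' "$sample" | denied_kinds   # → tool.shell
```

On a healthy run the only denied kind should be tool.shell, once per tick.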

Customising the simulation

Changing tick rate

SIM_TICK_MS=100 SIM_MAX_TICKS=200 \
  docker compose -f demo/docker-compose.gazebo.yml up
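With a non-zero SIM_MAX_TICKS the run is bounded, and its minimum wall-clock time is just ticks times tick period (ignoring startup and per-call latency):

```shell
# Worked arithmetic for the bounded run above: 200 ticks at 100 ms each.
SIM_TICK_MS=100
SIM_MAX_TICKS=200
echo "minimum wall-clock: $(( SIM_MAX_TICKS * SIM_TICK_MS / 1000 ))s"   # → minimum wall-clock: 20s
```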

Adding your own tool calls

Edit demo/robot-state-sim/sim-loop.sh and add wget POST calls to the runtime endpoint:

# Example: add a custom gazebo action
wget -qO- -T 3 \
  --post-data '{"kind":"tool.gazebo.spawn","params":{"model":"my_robot"}}' \
  --header "Content-Type: application/json" \
  "${AUTONOMY_RUNTIME_URL}/v1/tool" || true

Then rebuild the image:

docker compose -f demo/docker-compose.gazebo.yml up --build

Note that tool.gazebo.spawn matches the tool.gazebo.* prefix rule, so this call is allowed. A kind outside the allow rules would instead get a deny decision from the runtime; either way the event is recorded in the WAL and the loop continues.

Runtime startup: policy loading

The runtime service runs demo/runtime/runtime-start.sh which:

  1. Runs autonomy policy build to compile the Rego sources under /data/policies into the bundle /tmp/gazebo-policy.tar.gz

  2. Runs autonomy policy load --bundle /tmp/gazebo-policy.tar.gz to install the bundle under /data/policy/current/

  3. Execs autonomy runtime start --policy-dir /data/policy

This means the Rego policy file is compiled and loaded before the first tool call arrives. Changes to gazebo_safety.rego are picked up on the next docker compose up --build.

Share snippet

A compact, copy-pasteable summary of this demo. Suitable for an email, issue, sales note, or proof artifact.

Prerequisites

  • Docker Engine and Docker Compose V2 on PATH

  • No GPU, no real Gazebo install required (this stack is headless and synthetic)

Run it

docker compose -f demo/docker-compose.gazebo.yml up --build

Expected proof markers

  • runtime       | policy loaded: gazebo-demo v0.1.0

  • Per-tick lines from robot-state-sim showing tool.gazebo.step allow, tool.ros2.topic.publish ... allow, telemetry.emit allow

  • A tool.shell deny line on every tick (the fail-closed signal)

What this proves

A policy-gated headless simulation runs without a GPU or a real Gazebo install. Every tool call is evaluated against the gazebo-demo Rego bundle before the runtime acts on it; allowed kinds (tool.gazebo.*, tool.ros2.*, lifecycle.*, telemetry.emit, tool.echo) flow through, and tool.shell is denied on every tick — the same fail-closed semantics that govern production runs.

See also