autonomy demo nvidia¶
Demo NVIDIA SONIC inference governance with real container self-test
Synopsis¶
Demonstrates AutonomyOps policy governance over an NVIDIA SONIC inference node using a real container stub built for Jetson Orin (aarch64).
Container path (default):
Validates demo/bundles/nvidia/manifest.json, parses the image from the
manifest entrypoint (no hardcoded fallback), then runs:
docker run --rm --runtime=nvidia --device nvidia.com/gpu=all
The image is published to GHCR and pulled automatically:
docker pull ghcr.io/autonomyops/nvidia-demo:latest
Or build locally on the target device:
docker build -t ghcr.io/autonomyops/nvidia-demo:local demo/nvidia-demo/
Run from the repository root.
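The manifest-parsing step can be sketched roughly as below. The manifest shape, field name, and extraction logic here are illustrative assumptions, not the real demo/bundles/nvidia/manifest.json schema:

```shell
# Hypothetical manifest shape for illustration only; the real
# demo/bundles/nvidia/manifest.json schema may differ.
cat > /tmp/manifest.json <<'EOF'
{
  "entrypoint": "docker run --rm --runtime=nvidia --device nvidia.com/gpu=all ghcr.io/autonomyops/nvidia-demo:latest"
}
EOF

# Pull the entrypoint string out of the manifest, then take its last
# whitespace-separated token as the image reference (no hardcoded fallback).
entrypoint=$(sed -n 's/.*"entrypoint": *"\(.*\)".*/\1/p' /tmp/manifest.json)
image=${entrypoint##* }
echo "$image"   # ghcr.io/autonomyops/nvidia-demo:latest
```

If the entrypoint field is missing, `image` is simply empty, matching the no-fallback behavior described above.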
Local path (--local):
Builds a policy bundle from demo/bundles/nvidia/policies/ in-process, starts a temporary ToolServer, and runs a 5-call SONIC inference scenario:
lifecycle.start → ALLOW (lifecycle events always permitted)
tool.infer.run (model_id) → ALLOW (attributed inference)
tool.infer.run (no id) → DENY (unattributed inference blocked)
telemetry.emit → ALLOW (WAL drain telemetry)
tool.shell → DENY (shell never permitted in inference containers)
Markers emitted (context=sim): nvidia-demo-start / nvidia-allow-demonstrated / nvidia-inference-gated / nvidia-unattributed-blocked / nvidia-deny-demonstrated / nvidia-policy-block / nvidia-demo-complete
Run from the repository root so the policy source directory is reachable.
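The allow/deny outcomes in the 5-call scenario follow a small set of gating rules, distilled here as a plain shell function for illustration. The real rules are Rego policies under demo/bundles/nvidia/policies/, and the model id used below is made up:

```shell
# Hypothetical distillation of the gating rules exercised by the scenario;
# the actual policies are Rego files under demo/bundles/nvidia/policies/.
decide() {
  call="$1"; model_id="${2-}"
  case "$call" in
    lifecycle.*)    echo ALLOW ;;   # lifecycle events always permitted
    tool.infer.run)                 # inference must be attributed to a model
      if [ -n "$model_id" ]; then echo ALLOW; else echo DENY; fi ;;
    telemetry.emit) echo ALLOW ;;   # WAL drain telemetry
    *)              echo DENY ;;    # everything else, including tool.shell
  esac
}

decide lifecycle.start          # ALLOW
decide tool.infer.run sonic-v1  # ALLOW ("sonic-v1" is a made-up model_id)
decide tool.infer.run           # DENY  (unattributed)
decide telemetry.emit           # ALLOW
decide tool.shell               # DENY
```

Note the default-deny final branch: any call not explicitly permitted is blocked, which is why tool.shell never reaches the container.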
Usage¶
autonomy demo nvidia [flags]
Options¶
--local in-process simulation without Docker or GPU (loads policy from demo/bundles/nvidia/policies/)
--policy-path string policy source for --local: Rego directory or pre-built .tar bundle (default: demo/bundles/nvidia/policies/)
See also¶
autonomy demo - Run self-contained demos that require no Docker or control plane