Hardware Adaptation Guide

The arm demo ships with a simulated proximity sensor (distance_sensor.py) that uses a sine-wave model to produce periodic obstacle approaches. This guide explains how to replace the simulation with a real hardware driver and how to prepare for deployment on production hardware targets.

Prerequisites

  • Robotics Quickstart completed and passing

  • Target hardware with a real sensor attached: Jetson Orin, Raspberry Pi 4, or an industrial PC

Step 1 — Switch to hardware mode

The arm_demo.launch.py launch file accepts a robot_mode argument: in sim mode (the default) the distance_sensor node is started; in hardware mode it is not.

Container path

autonomy ros2 run \
    --image ghcr.io/autonomyops/adk-ros2-runtime:local \
    launch demo_robot arm_demo.launch.py \
    robot_mode:=hardware \
    max_velocity:=0.5 \
    safety_stop_distance:=0.5

Native path

autonomy ros2 run launch demo_robot arm_demo.launch.py \
    robot_mode:=hardware

In hardware mode the distance_sensor node is absent; arm_controller waits for an external std_msgs/Float64 message on /arm/distance.
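
Before connecting real hardware, you can stand in for the driver from another shell and confirm the controller reacts. The distance value below is illustrative:

# Publish a fake 1.2 m reading at 10 Hz on the expected topic
ros2 topic pub --rate 10 /arm/distance std_msgs/msg/Float64 "{data: 1.2}"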

Step 2 — Connect your sensor driver

Add your real sensor driver to the launch file with condition=UnlessCondition(is_sim) so it starts only in hardware mode. Your driver must publish std_msgs/Float64 on /<namespace>/distance at 5 Hz or faster.

Example using a URG-04LX lidar minimum-distance filter:

# demo_robot/launch/arm_demo.launch.py  (hardware mode addition)
from launch.conditions import UnlessCondition
from launch_ros.actions import Node as RosNode
from launch.substitutions import LaunchConfiguration, PythonExpression

# Comparing a LaunchConfiguration with == yields a plain bool at parse time,
# not a launch condition; wrap the comparison in a PythonExpression so it is
# evaluated when the launch file actually runs.
is_sim = PythonExpression(["'", LaunchConfiguration('robot_mode'), "' == 'sim'"])

urg_node = RosNode(
    package='urg_node2',
    executable='urg_node2_node',
    name='lidar',
    condition=UnlessCondition(is_sim),
    parameters=[{'serial_port': '/dev/ttyACM0'}],
    remappings=[('/scan', '/raw_scan')],
)

# Relay node: converts the LaserScan minimum valid range to Float64 on /arm/distance
# (a minimal sketch of this node follows below)
distance_relay = RosNode(
    package='demo_robot',
    executable='lidar_distance_relay',
    name='distance_relay',
    condition=UnlessCondition(is_sim),
)
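
The relay itself is small: subscribe to the scan, take the minimum valid range, and republish it. A minimal sketch of the lidar_distance_relay executable referenced above (the /raw_scan topic matches the remapping; the QoS depth is illustrative). The URG-04LX scans at 10 Hz, comfortably above the 5 Hz requirement:

# demo_robot/demo_robot/lidar_distance_relay.py  (sketch)
import math

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from std_msgs.msg import Float64


class LidarDistanceRelay(Node):
    """Publish the minimum valid LaserScan range as Float64 on /arm/distance."""

    def __init__(self):
        super().__init__('distance_relay')
        self.pub = self.create_publisher(Float64, '/arm/distance', 10)
        self.create_subscription(LaserScan, '/raw_scan', self.on_scan, 10)

    def on_scan(self, scan):
        # Drop NaN/inf returns and readings outside the sensor's rated range.
        valid = [r for r in scan.ranges
                 if math.isfinite(r) and scan.range_min <= r <= scan.range_max]
        if valid:
            self.pub.publish(Float64(data=min(valid)))


def main():
    rclpy.init()
    rclpy.spin(LidarDistanceRelay())
    rclpy.shutdown()

If demo_robot is an ament_python package, register the executable under entry_points in setup.py so ros2 run can resolve lidar_distance_relay.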

After editing the launch file, rebuild the workspace in the container:

docker build -t ghcr.io/autonomyops/adk-ros2-runtime:local demo/ros2-runtime/

Step 3 — Update the policy for your sensor topics

If your sensor driver publishes on a different topic namespace, update demo/bundles/ros2/policies/ros2_safety.rego:

# Allow subscribe on /base/** (mobile base) instead of /arm/**
allow if {
    input.kind == "tool.ros2.topic.subscribe"
    startswith(input.params.topic, "/base/")
}
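
For a quick check outside the Go harness, OPA's native test runner can exercise the rule directly. A minimal sketch, assuming the file declares package ros2_safety (verify against the actual package line):

# demo/bundles/ros2/policies/ros2_safety_test.rego  (sketch)
package ros2_safety

import rego.v1

test_base_subscribe_allowed if {
    allow with input as {
        "kind": "tool.ros2.topic.subscribe",
        "params": {"topic": "/base/odom"}
    }
}

Run it with opa test demo/bundles/ros2/policies/ -v.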

Rebuild the bundle after editing the policy:

./demo/bundles/build.sh
autonomy bundle stage demo/bundles/ros2.tar

Run the policy unit tests to confirm no regressions:

go test ./policy/... -run TestRos2DemoPolicy -v

Step 4 — DDS domain isolation (multi-robot)

Containers on the same Docker bridge network share DDS multicast by default. Multiple robots running simultaneously must be isolated using ROS_DOMAIN_ID.

Rules of thumb:

  • Assign one unique domain ID (0–232) per robot or per session.

  • Stop all sim-mode containers before starting a hardware-mode container on the same network, or use different domain IDs.

  • The entrypoint logs ROS_DOMAIN_ID=<n> at startup for confirmation.

Override the domain ID at container start with docker run -e:

docker run --rm \
    -e ROS_DOMAIN_ID=10 \
    ghcr.io/autonomyops/adk-ros2-runtime:local \
    bash -c "source /adk_ros2_entrypoint.sh && ros2 topic list"

autonomy ros2 run forwards only the flags it defines (--image, --workspace, --policy, --runtime-url). There is no --env flag. To set ROS_DOMAIN_ID for the governed container, bake the variable into a domain-specific image tag or override it in your host environment before calling the CLI:

# Build a domain-10 image variant (recommended for fixed-domain deployments)
docker build \
    --build-arg ROS_DOMAIN_ID=10 \
    -t ghcr.io/autonomyops/adk-ros2-runtime:domain10 \
    demo/ros2-runtime/

autonomy ros2 run \
    --image ghcr.io/autonomyops/adk-ros2-runtime:domain10 \
    launch demo_robot arm_demo.launch.py robot_mode:=hardware

Platform-specific build notes

Jetson Orin (aarch64 / JetPack 6)

# Build the image on the Orin itself
docker build -t ghcr.io/autonomyops/adk-ros2-runtime:orin-local demo/ros2-runtime/

# Or cross-build from x86_64 using buildx
docker buildx build \
    --platform linux/arm64 \
    --tag ghcr.io/autonomyops/adk-ros2-runtime:orin \
    --push demo/ros2-runtime/

For NVIDIA GPU access on Orin, first verify CDI access with a direct docker run, then either bake the device requirement into a wrapper image or use the autonomy demo nvidia self-test (see NVIDIA GPU Integration):

# Verify CDI access directly with docker run
docker run --rm \
    --runtime=nvidia \
    --device nvidia.com/gpu=all \
    ghcr.io/autonomyops/adk-ros2-runtime:orin-local \
    bash -c "source /adk_ros2_entrypoint.sh && ros2 topic list"

autonomy ros2 run does not accept a --device flag; device mapping must be embedded in the image entrypoint or handled by a wrapper docker run invocation outside the CLI.

See NVIDIA GPU Integration for CDI setup details.

Raspberry Pi 4 (arm64 / Ubuntu 22.04)

# On Pi 4 (arm64 Ubuntu)
docker build -t ghcr.io/autonomyops/adk-ros2-runtime:pi4-local demo/ros2-runtime/

# Run on the native ROS2 path (no Docker governance) if the Docker daemon is absent
autonomy ros2 run \
    launch demo_robot arm_demo.launch.py robot_mode:=hardware

Without Docker, the native fallback path prints the REDUCED-GOVERNANCE warning. For production use on Pi 4, install Docker and use the container path.

Industrial PC (x86_64 / Ubuntu 22.04)

No special build steps — the standard linux/amd64 image works:

docker build -t ghcr.io/autonomyops/adk-ros2-runtime:local demo/ros2-runtime/
autonomy ros2 run \
    --image ghcr.io/autonomyops/adk-ros2-runtime:local \
    launch demo_robot arm_demo.launch.py robot_mode:=hardware

Step 5 — Register the edge node with the orchestrator

Edge nodes register by sending heartbeats to the orchestrator. Configure the edged daemon on the robot:

# Generate a node config (adjust fleet_id, orchestrator URL, and cert paths)
edgectl init \
    --fleet-id my-fleet \
    --orchestrator-url http://orch.internal:8888 \
    --data-dir /var/lib/edged \
    --state-dir /var/lib/edged/state

The generated edge.toml is placed in the data directory. Start the daemon:

# Precheck (validates config, certs, state root)
edged precheck --config /var/lib/edged/edge.toml

# Start daemon (managed by systemd in production)
edged serve --config /var/lib/edged/edge.toml
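
For production, a minimal systemd unit sketch (the unit name and the /usr/local/bin/edged install path are assumptions; see Edge Deployment for the full setup):

# /etc/systemd/system/edged.service  (sketch)
[Unit]
Description=Autonomy edge daemon
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/edged serve --config /var/lib/edged/edge.toml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now edged.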

After the first heartbeat the node appears in fleet status:

autonomy fleet status \
    --orchestrator-url http://orch.internal:8888 \
    --channel stable

See Edge Deployment (Operator Guide) for the full systemd setup.

Step 6 — Generate a scaffold for your own package

bash scripts/new_robot_package.sh <package_name> <topic_namespace>
# e.g.
bash scripts/new_robot_package.sh mobile_base base
bash scripts/new_robot_package.sh camera_arm cam

The scaffold replaces all demo_robot and /arm/ references with your values and produces a complete bundle directory skeleton.

Smoke test in hardware mode

container=$(docker run -d --rm \
    --network=host \
    -e ROS_DOMAIN_ID=5 \
    ghcr.io/autonomyops/adk-ros2-runtime:local)

sleep 5

docker exec "$container" bash -c "
  source /opt/ros/humble/setup.bash
  source /opt/ros2_ws/install/setup.bash
  ros2 run demo_robot demo_check
"

docker stop "$container"

demo_check exits 0 when liveness, wait-for-idle, e-stop activate, and e-stop clear all pass. Wire this into your CI pipeline as a post-deployment smoke test.
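
A minimal CI wrapper around the snippet above: set -e makes a failing demo_check fail the step, and the trap stops the container even on failure (the image tag and settle delay are illustrative):

#!/usr/bin/env bash
set -euo pipefail

container=$(docker run -d --rm --network=host -e ROS_DOMAIN_ID=5 \
    ghcr.io/autonomyops/adk-ros2-runtime:local)
trap 'docker stop "$container" >/dev/null' EXIT

sleep 5
docker exec "$container" bash -c "
  source /opt/ros/humble/setup.bash
  source /opt/ros2_ws/install/setup.bash
  ros2 run demo_robot demo_check
"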

See also