AgenticROS
https://github.com/agenticros/agenticros
# AgenticROS

AgenticROS is a ROS2 integration platform that connects robots to AI agent platforms, enabling natural language control and querying of ROS2 robots. It provides a platform-agnostic core library with transport adapters (rosbridge, Zenoh, WebRTC, local DDS) and implements the plugin/extension contracts for multiple AI platforms including OpenClaw, Claude Code (MCP), and Google Gemini.

The architecture separates concerns into a **core** package (`@agenticros/core`) containing transport abstractions, configuration schemas, and shared types, with **adapters** that implement platform-specific integrations. This design allows robots to be controlled via messaging apps through OpenClaw, terminal commands through the Claude Code CLI, the Claude desktop app (including Claude Dispatch on iOS), or Google Gemini, all using the same underlying ROS2 tools and safety constraints.

## Configuration

AgenticROS uses a Zod-validated configuration schema that covers transport mode selection, connection settings, robot namespace, camera topics, teleop parameters, and safety limits.
```typescript
import { parseConfig, getTransportConfig, createTransport } from "@agenticros/core";

// Parse and validate configuration with defaults
const config = parseConfig({
  transport: { mode: "zenoh" },
  zenoh: {
    routerEndpoint: "ws://localhost:10000",
    domainId: 0,
    keyFormat: "ros2dds" // or "rmw_zenoh"
  },
  robot: {
    name: "MyRobot",
    namespace: "robot3946b404c33e4aa39a8d16deb1c5c593",
    cameraTopic: "/camera/camera/color/image_raw/compressed"
  },
  safety: {
    maxLinearVelocity: 1.0,
    maxAngularVelocity: 1.5,
    workspaceLimits: { xMin: -10, xMax: 10, yMin: -10, yMax: 10 }
  },
  teleop: {
    cameraTopic: "",
    cmdVelTopic: "",
    speedDefault: 0.3,
    cameraPollMs: 150
  },
  skillPackages: ["agenticros-skill-followme"],
  skillPaths: ["/path/to/custom-skills"]
});

// Build transport-specific config and create the transport
const transportConfig = getTransportConfig(config);
const transport = await createTransport(transportConfig);
await transport.connect();
```

## RosTransport Interface

The unified transport interface abstracts all ROS2 communication regardless of the underlying transport (rosbridge, Zenoh, WebRTC, or local DDS).

```typescript
import { createTransport, type RosTransport, type TransportConfig } from "@agenticros/core";

// Create transport based on mode
const config: TransportConfig = {
  mode: "zenoh",
  zenoh: { routerEndpoint: "ws://localhost:10000" }
};
const transport: RosTransport = await createTransport(config);

// Connection lifecycle
await transport.connect();
console.log("Status:", transport.getStatus()); // "connected" | "disconnected" | "connecting"

// Register connection status handler
const cleanup = transport.onConnection((status) => {
  console.log("Connection status changed:", status);
});

// Clean up when done
await transport.disconnect();
cleanup();
```

## ros2_list_topics Tool

Discovers all available ROS2 topics and their message types at runtime.
```typescript
// Tool execution (MCP/OpenClaw adapter)
const result = await transport.listTopics();
// Returns: TopicInfo[] = [{ name: "/cmd_vel", type: "geometry_msgs/msg/Twist" }, ...]

// Example output
{
  "success": true,
  "topics": [
    { "name": "/cmd_vel", "type": "geometry_msgs/msg/Twist" },
    { "name": "/odom", "type": "nav_msgs/msg/Odometry" },
    { "name": "/camera/camera/color/image_raw/compressed", "type": "sensor_msgs/msg/CompressedImage" },
    { "name": "/scan", "type": "sensor_msgs/msg/LaserScan" }
  ],
  "total": 42,
  "truncated": false
}
```

## ros2_publish Tool

Publishes messages to any ROS2 topic, with automatic namespace resolution and safety validation.

```typescript
import { toNamespacedTopic } from "@agenticros/core";

// Publish a velocity command to move the robot forward
const topic = toNamespacedTopic(config, "/cmd_vel"); // Applies robot.namespace if configured
transport.publish({
  topic,
  type: "geometry_msgs/msg/Twist",
  msg: {
    linear: { x: 0.3, y: 0.0, z: 0.0 },
    angular: { x: 0.0, y: 0.0, z: 0.0 }
  }
});

// Stop the robot (emergency stop pattern)
transport.publish({
  topic: toNamespacedTopic(config, "/cmd_vel"),
  type: "geometry_msgs/msg/Twist",
  msg: {
    linear: { x: 0.0, y: 0.0, z: 0.0 },
    angular: { x: 0.0, y: 0.0, z: 0.0 }
  }
});

// Publish a navigation goal
transport.publish({
  topic: "/goal_pose",
  type: "geometry_msgs/msg/PoseStamped",
  msg: {
    header: { frame_id: "map" },
    pose: {
      position: { x: 2.0, y: 1.5, z: 0.0 },
      orientation: { x: 0.0, y: 0.0, z: 0.0, w: 1.0 }
    }
  }
});
```

## ros2_subscribe_once Tool

Subscribes to a ROS2 topic and returns the next message received, useful for reading sensor data or robot state.
```typescript
// Subscribe and wait for the next message, with a timeout
const result = await new Promise<Record<string, unknown>>((resolve, reject) => {
  const subscription = transport.subscribe(
    { topic: "/battery_state", type: "sensor_msgs/msg/BatteryState" },
    (msg) => {
      subscription.unsubscribe();
      resolve({ success: true, topic: "/battery_state", message: msg });
    }
  );
  setTimeout(() => {
    subscription.unsubscribe();
    reject(new Error("Timeout waiting for message on /battery_state"));
  }, 5000);
});

// Example response
{
  "success": true,
  "topic": "/battery_state",
  "message": { "voltage": 12.6, "percentage": 0.85, "power_supply_status": 2 }
}
```

## ros2_camera_snapshot Tool

Captures a single image from a ROS2 camera topic, supporting both CompressedImage and raw Image formats.

```typescript
// Capture a compressed image (JPEG/PNG)
const result = await new Promise<Record<string, unknown>>((resolve, reject) => {
  const subscription = transport.subscribe(
    { topic: "/camera/camera/color/image_raw/compressed", type: "sensor_msgs/msg/CompressedImage" },
    (msg) => {
      subscription.unsubscribe();
      const data = msg.data;
      const base64 = Buffer.isBuffer(data)
        ? data.toString("base64")
        : typeof data === "string" ? data : "";
      resolve({
        success: true,
        topic: "/camera/camera/color/image_raw/compressed",
        format: msg.format ?? "jpeg",
        data: base64
      });
    }
  );
  setTimeout(() => {
    subscription.unsubscribe();
    reject(new Error("Timeout waiting for camera frame"));
  }, 10000);
});

// Returns base64-encoded image data
{
  "success": true,
  "topic": "/camera/camera/color/image_raw/compressed",
  "format": "jpeg",
  "data": "/9j/4AAQSkZJRgABAQEASABI..."
}
```

## ros2_depth_distance Tool

Reads distance in meters from a ROS2 depth image topic (e.g., RealSense), sampling the center of the depth image.
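The center-sampling idea can be sketched as follows. This is an illustrative helper, not part of `@agenticros/core`: `sampleDepthCenter` is a hypothetical name, and the sketch assumes a 16UC1 depth image (little-endian uint16 values in millimeters, with 0 meaning "no reading"), which is what RealSense depth topics typically publish.

```typescript
// Hypothetical sketch of center sampling on a 16UC1 depth image.
interface DepthSample {
  valid: boolean;
  distance_m: number;
  min_m: number;
  max_m: number;
  sample_count: number;
}

function sampleDepthCenter(
  data: Uint8Array, // raw image payload, row-major, 2 bytes per pixel
  width: number,
  height: number,
  patch = 10        // side length of the sampled center patch, in pixels
): DepthSample {
  const view = new DataView(data.buffer, data.byteOffset, data.byteLength);
  const cx = Math.floor(width / 2);
  const cy = Math.floor(height / 2);
  const half = Math.floor(patch / 2);
  const values: number[] = [];
  for (let y = cy - half; y < cy + half; y++) {
    for (let x = cx - half; x < cx + half; x++) {
      const mm = view.getUint16((y * width + x) * 2, true); // little-endian
      if (mm > 0) values.push(mm / 1000); // 0 = no depth reading; convert mm to m
    }
  }
  if (values.length === 0) {
    return { valid: false, distance_m: 0, min_m: 0, max_m: 0, sample_count: 0 };
  }
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  return {
    valid: true,
    distance_m: mean,
    min_m: Math.min(...values),
    max_m: Math.max(...values),
    sample_count: values.length,
  };
}
```

Averaging a small patch rather than reading a single pixel makes the reported distance robust to isolated dropouts in the depth image.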
```typescript
// Get depth distance from a RealSense camera
const depthResult = await getDepthDistance(
  transport,
  "/camera/camera/depth/image_rect_raw",
  5000 // timeout ms
);

// Example response
{
  "valid": true,
  "distance_m": 1.25,
  "min_m": 0.8,
  "max_m": 2.1,
  "sample_count": 100,
  "topic": "/camera/camera/depth/image_rect_raw",
  "width": 640,
  "height": 480,
  "encoding": "16UC1"
}

// If no valid depth:
{ "valid": false, "distance_m": 0, ... }
```

## ros2_service_call Tool

Calls a ROS2 service and returns the response, useful for request/response operations.

```typescript
// Call a service
const response = await transport.callService({
  service: "/spawn_entity",
  type: "gazebo_msgs/srv/SpawnEntity",
  args: {
    name: "my_robot",
    xml: "<robot>...</robot>",
    initial_pose: { position: { x: 0, y: 0, z: 0 } }
  }
});

// Example response
{
  "success": true,
  "service": "/spawn_entity",
  "response": { "success": true, "status_message": "Entity spawned successfully" }
}
```

## ros2_action_goal Tool

Sends a goal to a ROS2 action server for long-running operations like navigation.

```typescript
// Send a navigation goal
const actionResult = await transport.sendActionGoal({
  action: "/navigate_to_pose",
  actionType: "nav2_msgs/action/NavigateToPose",
  args: {
    pose: {
      header: { frame_id: "map" },
      pose: {
        position: { x: 5.0, y: 3.0, z: 0.0 },
        orientation: { x: 0, y: 0, z: 0.707, w: 0.707 }
      }
    }
  }
});

// Example response
{
  "success": true,
  "action": "/navigate_to_pose",
  "result": { "result": { /* action-specific result */ } }
}

// Cancel an in-progress action
await transport.cancelActionGoal("/navigate_to_pose");
```

## ros2_param_get and ros2_param_set Tools

Gets and sets ROS2 node parameters at runtime.
```typescript
// Get a parameter value
const getResponse = await transport.callService({
  service: "/turtlebot3/controller/get_parameters",
  type: "rcl_interfaces/srv/GetParameters",
  args: { names: ["max_velocity"] }
});
// Response: { "success": true, "node": "/turtlebot3/controller", "parameter": "max_velocity",
//             "value": { "values": [{ "double_value": 0.5 }] } }

// Set a parameter value
const setResponse = await transport.callService({
  service: "/turtlebot3/controller/set_parameters",
  type: "rcl_interfaces/srv/SetParameters",
  args: { parameters: [{ name: "max_velocity", value: { double_value: 0.8 } }] }
});
// Response: { "success": true, "node": "/turtlebot3/controller", "parameter": "max_velocity" }
```

## Claude Code MCP Integration

Register AgenticROS as an MCP server for the Claude Code CLI or the Claude desktop app.

```bash
# Register MCP server (project scope)
claude mcp add --transport stdio --scope project agenticros -- \
  node packages/agenticros-claude-code/dist/index.js

# Or add to .mcp.json
cat > .mcp.json << 'EOF'
{
  "mcpServers": {
    "agenticros": {
      "type": "stdio",
      "command": "sh",
      "args": ["-c", "node packages/agenticros-claude-code/dist/index.js 2>>/tmp/agenticros-mcp.log"],
      "env": {}
    }
  }
}
EOF

# Claude desktop app config (macOS: ~/Library/Application Support/Claude/claude_desktop_config.json)
cat << 'EOF'
{
  "mcpServers": {
    "agenticros": {
      "command": "sh",
      "args": ["-c", "node /absolute/path/to/agenticros/packages/agenticros-claude-code/dist/index.js 2>>/tmp/agenticros-mcp.log"],
      "env": { "AGENTICROS_ROBOT_NAMESPACE": "robot3946b404c33e4aa39a8d16deb1c5c593" }
    }
  }
}
EOF

# Auto-allow AgenticROS tools (add to ~/.claude/settings.json or project .claude/settings.json)
{
  "permissions": { "allow": ["mcp__agenticros"] }
}
```

## Gemini CLI Integration

Chat with your robot using Google Gemini and function calling.
```typescript
import { chatWithRobot } from "@agenticros/gemini";
import { parseConfig } from "@agenticros/core";

const config = parseConfig({
  transport: { mode: "zenoh" },
  zenoh: { routerEndpoint: "ws://localhost:10000" },
  robot: { namespace: "robot123" }
});

// Single-turn chat with automatic tool execution
const response = await chatWithRobot(
  "What do you see from the camera?",
  config,
  {
    apiKey: process.env.GEMINI_API_KEY,
    model: "gemini-2.5-flash",
    systemInstruction: "You are controlling a ROS2 robot. Be concise and safe."
  }
);
console.log(response);
// "I can see a person standing approximately 1.5 meters away in front of the robot..."
```

```bash
# Run from the command line
GEMINI_API_KEY=xxx pnpm --filter @agenticros/gemini exec agenticros-gemini "List ROS2 topics"
GEMINI_API_KEY=xxx pnpm --filter @agenticros/gemini exec agenticros-gemini "What do you see?"
GEMINI_API_KEY=xxx pnpm --filter @agenticros/gemini exec agenticros-gemini "Move forward 0.5 meters"
```

## Skills API

Create custom skills that extend AgenticROS with additional tools and behaviors.

```typescript
// package.json for a skill
{
  "name": "agenticros-skill-followme",
  "agenticrosSkill": true,
  "main": "dist/index.js"
}

// Skill implementation (src/index.ts)
import type { RegisterSkill, SkillContext } from "@agenticros/agenticros";
import type { AgenticROSConfig } from "@agenticros/core";
import { Type } from "@sinclair/typebox";

export const registerSkill: RegisterSkill = (api, config, context) => {
  // Read skill-specific config from config.skills.followme
  const skillConfig = (config.skills?.followme as { targetDistance?: number }) ?? {};
  const targetDistance = skillConfig.targetDistance ?? 1.5;

  api.registerTool({
    name: "follow_robot",
    label: "Follow Me",
    description: "Start or stop person-following behavior",
    parameters: Type.Object({
      action: Type.Union([Type.Literal("start"), Type.Literal("stop"), Type.Literal("status")])
    }),
    async execute(_toolCallId, params) {
      const action = params.action as string;
      const transport = context.getTransport();
      if (action === "start") {
        // Get depth to track the person
        const depth = await context.getDepthDistance(
          transport,
          "/camera/camera/depth/image_rect_raw",
          5000
        );
        context.logger.info(`Follow me started, distance: ${depth.distance_m}m`);
        return { content: [{ type: "text", text: `Following started at ${depth.distance_m}m` }] };
      }
      // ... handle stop/status
    }
  });
};
```

## Teleop HTTP Routes

The OpenClaw plugin exposes HTTP routes for web-based teleoperation with camera streaming and velocity control.

```bash
# Camera streaming (returns JPEG/PNG binary)
curl "http://localhost:18789/plugins/agenticros/teleop/camera?topic=/camera/camera/color/image_raw/compressed"

# List available camera sources
curl http://localhost:18789/plugins/agenticros/teleop/sources
# Returns: [{"topic":"/camera/camera/color/image_raw/compressed","label":"camera / camera / color / image_raw / compressed"}]

# Send velocity command (POST)
curl -X POST http://localhost:18789/plugins/agenticros/teleop/twist \
  -H "Content-Type: application/json" \
  -d '{"linear_x": 0.3, "angular_z": 0.5}'
# Returns: {"ok":true,"topic":"/robot123/cmd_vel"}

# Send velocity command (GET with query params)
curl "http://localhost:18789/plugins/agenticros/teleop/twist?linear_x=0.3&angular_z=0.0"

# Check transport status
curl http://localhost:18789/plugins/agenticros/teleop/status
# Returns: {"mode":"zenoh","connected":true}

# Trigger reconnect
curl -X POST http://localhost:18789/plugins/agenticros/teleop/reconnect
```

## OpenClaw Plugin Registration

Register the plugin with an OpenClaw gateway for full agent integration.
```typescript
// Plugin entry point (packages/agenticros/src/index.ts)
export default {
  id: "agenticros",
  name: "AgenticROS",
  async register(api) {
    // Parse config from file or gateway pluginConfig
    const config = readAgenticROSConfigFromFile();

    // Register HTTP routes (config, teleop)
    registerRoutes(api, config);

    // Register ROS2 transport as a managed service
    registerService(api, config);

    // Register AI tools (ros2_publish, ros2_subscribe_once, etc.)
    registerTools(api, config);

    // Load optional skills from skillPackages/skillPaths
    await loadSkills(api, config);

    // Register safety validation hook
    registerSafetyHook(api, config);

    // Register robot context injection
    registerRobotContext(api, config);

    // Register direct commands (e-stop, transport control)
    registerEstopCommand(api, config);
    registerTransportCommand(api, config);
  }
};
```

```bash
# Install and configure the plugin
./scripts/setup_gateway_plugin.sh

# OpenClaw config (~/.openclaw/openclaw.json)
{
  "plugins": {
    "entries": {
      "agenticros": {
        "path": "/path/to/agenticros/packages/agenticros",
        "config": {
          "transport": { "mode": "zenoh" },
          "zenoh": { "routerEndpoint": "ws://localhost:10000" },
          "robot": { "namespace": "robot123" },
          "skillPackages": ["agenticros-skill-followme"]
        }
      }
    }
  }
}
```

## Summary

AgenticROS enables seamless integration between AI agent platforms and ROS2 robots through a unified transport abstraction and a comprehensive tool set. The primary use cases include natural language robot control via messaging apps (OpenClaw), terminal-based interaction through Claude Code or the Gemini CLI, and web-based teleoperation with camera streaming. The modular architecture supports rosbridge WebSocket, Zenoh, WebRTC, and local DDS transports, allowing deployment in network configurations ranging from local development to cloud-connected robots.
Integration patterns center on the core transport interface for direct programmatic control, MCP servers for Claude Code CLI and desktop integration, function calling for the Gemini CLI, and the OpenClaw plugin API for full gateway integration with a config UI and teleop web pages. The skills system enables extensibility through third-party packages that add specialized behaviors such as person following, while safety constraints (velocity limits, workspace bounds) are enforced at the transport layer so the robot operates safely regardless of the AI agent's commands.
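The kind of transport-layer clamping described above can be sketched as follows. This is an illustration only, not AgenticROS's actual implementation: `clampTwist` is a hypothetical helper, and `SafetyLimits` is modeled on the `safety` block from the configuration example.

```typescript
// Hypothetical sketch of velocity clamping at the transport layer.
// SafetyLimits mirrors the `safety` config block; clampTwist is not
// part of @agenticros/core.
interface SafetyLimits {
  maxLinearVelocity: number;  // m/s
  maxAngularVelocity: number; // rad/s
}

interface Twist {
  linear: { x: number; y: number; z: number };
  angular: { x: number; y: number; z: number };
}

function clampTwist(msg: Twist, limits: SafetyLimits): Twist {
  // Clamp magnitude while preserving sign (direction of motion)
  const clampAxis = (v: number, max: number) =>
    Math.sign(v) * Math.min(Math.abs(v), max);
  return {
    linear: {
      x: clampAxis(msg.linear.x, limits.maxLinearVelocity),
      y: clampAxis(msg.linear.y, limits.maxLinearVelocity),
      z: clampAxis(msg.linear.z, limits.maxLinearVelocity),
    },
    angular: {
      x: clampAxis(msg.angular.x, limits.maxAngularVelocity),
      y: clampAxis(msg.angular.y, limits.maxAngularVelocity),
      z: clampAxis(msg.angular.z, limits.maxAngularVelocity),
    },
  };
}
```

Applying such a clamp before every `publish` of a `geometry_msgs/msg/Twist` means that even an over-eager agent command is bounded by the configured limits.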