Introduction to 3we
Welcome to the 3we Robotics Infrastructure documentation. This guide will help you set up your environment, connect your hardware, and deploy your first AI-driven robot controller in minutes.
Installation
3we provides a high-level Python API for real-time robot control. We recommend using a virtual environment to avoid dependency conflicts. Requirements: Python 3.10+, Linux/macOS.
```bash
# Create a new environment
python -m venv .3we-env
source .3we-env/bin/activate

# Install the 3we core SDK
pip install threewe
```
To verify the installation with the mock backend (no hardware needed):
```bash
python -c "from threewe import Robot; print('OK')"
```
Your First Robot
The following script initializes a robot with the mock backend and commands a simple navigation move. It is the "Hello World" of the 3we ecosystem: it runs instantly, with zero hardware.
```python
import asyncio

from threewe import Robot

async def main():
    # Initialize with the mock backend (no hardware needed)
    async with Robot(backend="mock") as robot:
        # Get a camera image (480x640x3 uint8 array)
        image = robot.get_camera_image()

        # Navigate to coordinates
        result = await robot.move_to(x=2.0, y=1.0)

        # Check the current pose
        pose = robot.get_pose()
        print(f"Robot at: {pose}")

asyncio.run(main())
```
Backend Note
The mock backend uses pure NumPy for 2D kinematic simulation. For physics-accurate results, switch to `backend="gazebo"`, which requires a separate installation of Gazebo Harmonic.
Backend Switching
One of 3we's core features is the ability to swap between simulation and hardware backends with a single parameter. This ensures complete parity between your development code and production deployment.
backend="mock"
Zero-dependency environment for logic testing and CI/CD pipelines.
backend="gazebo"
Full physics simulation with Nav2, SLAM, and sensor emulation.
backend="isaac"
GPU-accelerated parallel training with domain randomization.
backend="real"
Direct deployment to Pi 5 + ESP32-S3 hardware over WiFi/USB.
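As a minimal sketch of this parity, the same control logic can target any backend; only the backend string changes. The `patrol` helper and the `--hardware` toggle below are illustrative, not part of the SDK:

```python
import asyncio
import sys

from threewe import Robot

async def patrol(robot):
    # Backend-agnostic control logic: unchanged from mock to real
    await robot.move_to(x=2.0, y=1.0)
    await robot.rotate(angle=180.0)

async def main():
    # Only the backend string changes between development and deployment
    backend = "real" if "--hardware" in sys.argv else "mock"
    async with Robot(backend=backend) as robot:
        await patrol(robot)

asyncio.run(main())
```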
Core API Reference
The `Robot` class is the primary interface. All methods are available regardless of which backend you use.
Perception
```python
image = robot.get_camera_image()  # (480, 640, 3) uint8
rgbd = robot.get_rgbd_image()     # RGB + depth array
scan = robot.get_lidar_scan()     # 360° range data
pose = robot.get_pose()           # (x, y, theta)
imu = robot.get_imu()             # acceleration + gyro
```
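As a quick usage sketch, the perception calls compose naturally with NumPy. This assumes the lidar scan is returned as a 1-D NumPy array of ranges in meters with one reading per degree; that layout is an assumption, not something the API reference specifies:

```python
import asyncio

import numpy as np

from threewe import Robot

async def main():
    async with Robot(backend="mock") as robot:
        scan = np.asarray(robot.get_lidar_scan())

        # Assumed layout: 360 range readings, one per degree, 0° = forward
        front = np.concatenate([scan[:15], scan[-15:]])  # +/- 15° ahead
        if front.min() < 0.3:
            print("Obstacle within 30 cm ahead")

asyncio.run(main())
```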
Navigation
```python
await robot.move_to(x=2.0, y=1.0)   # point-to-point
await robot.move_forward(0.5)       # relative motion
await robot.rotate(angle=90.0)      # degrees
await robot.follow_path(waypoints)  # multi-point
await robot.explore(timeout=60)     # frontier exploration
```
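The `waypoints` argument above is not defined in the snippet; a plausible shape is a list of (x, y) coordinates in meters, though the exact type `follow_path()` expects is an assumption here:

```python
import asyncio

from threewe import Robot

async def main():
    async with Robot(backend="mock") as robot:
        # Assumed waypoint format: (x, y) tuples in meters
        waypoints = [(1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
        await robot.follow_path(waypoints)

asyncio.run(main())
```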
AI Integration
```python
# VLM Navigation: natural language commands
await robot.execute_instruction("find the red door")

# VLA Policy: deploy from HuggingFace
from threewe.vla import VLARunner
runner = VLARunner.from_pretrained("lerobot/act_3we_nav")
await runner.run(robot)

# Gymnasium: standard RL interface
import gymnasium as gym
env = gym.make("3we/Navigation-v1")
obs, info = env.reset()
```
Gymnasium Environments
3we provides four standardized Gymnasium environments with increasing difficulty:
| Environment | Task | Metrics |
|---|---|---|
| `3we/Navigation-v1` | Point-to-point navigation | SR + SPL |
| `3we/ObjectNav-v1` | Semantic object goal | SR + SPL + Discovery Distance |
| `3we/Exploration-v1` | Frontier-based coverage | Coverage % + Efficiency |
| `3we/VLN-v1` | Vision-Language Navigation | SR + Oracle SR + nDTW |

SR = success rate; SPL = success weighted by path length; nDTW = normalized dynamic time warping.
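All four environments follow the standard Gymnasium step/reset API. A minimal rollout sketch, using a random policy as a placeholder (the step budget and seed are arbitrary):

```python
import gymnasium as gym

env = gym.make("3we/Navigation-v1")
obs, info = env.reset(seed=0)

for _ in range(500):
    action = env.action_space.sample()  # placeholder: random policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()

env.close()
```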
VLM Navigation
3we supports Vision-Language Model control out of the box. The robot can interpret natural language instructions using GPT-4o, Qwen-VL, or any OpenAI-compatible VLM API.
```python
import asyncio

from threewe import Robot
from threewe.vlm import VLMNavigator

async def main():
    async with Robot(backend="gazebo") as robot:
        nav = VLMNavigator(robot, model="gpt-4o")

        # Natural language navigation
        await nav.execute("Go to the kitchen and find the blue mug")

        # Multi-step instruction
        await nav.execute("Turn left, go through the door, stop at the desk")

asyncio.run(main())
```
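For a self-hosted, OpenAI-compatible VLM, configuration might look like the fragment below, which continues the session above. The `base_url` and `api_key` parameter names are hypothetical and should be checked against the `threewe.vlm` reference:

```python
# Hypothetical configuration: parameter names are illustrative only
nav = VLMNavigator(
    robot,
    model="qwen-vl-chat",                 # any OpenAI-compatible VLM
    base_url="http://localhost:8000/v1",  # e.g. a local vLLM server
    api_key="local",
)
```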
Hardware Guide
For hardware assembly, firmware flashing, and physical deployment, see the dedicated hardware page.