Robotics Lab
MuJoCo simulation, cuRobo motion planning, OpenVLA policies, and AnyGrasp detection — four robotics pillars under one API key. Skip the environment setup, jump straight to research.
What's inside
MuJoCo Simulation
Roll out policies against MuJoCo and PyBullet environments. Returns per-step joint positions, rewards, and an episode summary. LeRobot crossed 10K+ GitHub stars; MuJoCo went open-source in 2022.
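A minimal sketch of what a rollout call might look like. The field names, environment name, and response shape below are assumptions for illustration, not the documented SciRouter API; only the per-step joint positions, rewards, and episode summary mirror the description above.

```python
import json

# Request body a rollout call might take: which simulator, which policy,
# and how many steps to run. All field names are hypothetical.
payload = {
    "simulator": "mujoco",          # or "pybullet" (assumed field)
    "env": "FrankaPickPlace-v0",    # hypothetical environment name
    "policy": "openvla",
    "max_steps": 200,
}

# A mock response in the shape the description implies: per-step joint
# positions and rewards, plus an episode summary.
mock_response = {
    "steps": [
        {"joint_positions": [0.0] * 7, "reward": 0.0},
        {"joint_positions": [0.1] * 7, "reward": 0.5},
        {"joint_positions": [0.2] * 7, "reward": 1.0},
    ],
    "episode": {"length": 3, "success": True},
}

# Summing the per-step rewards gives the episode return.
episode_return = sum(step["reward"] for step in mock_response["steps"])
print(json.dumps({"return": episode_return, **mock_response["episode"]}))
```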
cuRobo Motion Planning
Collision-free joint-space trajectory planning using cuRobo, OMPL, RRT*, or PRM. Pass start and goal joints + obstacles, get back a waypoint list.
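A sketch of the request shape this implies, under assumed field names and an assumed axis-aligned-box obstacle format; none of this is the documented API. A successful plan should return a waypoint list that starts at the start configuration and ends at the goal.

```python
# Hypothetical planning request: field names and obstacle format are
# assumptions for illustration.
start = [0.0, -0.5, 0.0, -1.8, 0.0, 1.3, 0.0]   # 7-DoF start joints
goal = [0.4, -0.2, 0.1, -1.2, 0.0, 1.0, 0.3]    # 7-DoF goal joints

payload = {
    "planner": "curobo",            # or "ompl", "rrt_star", "prm"
    "start_joints": start,
    "goal_joints": goal,
    "obstacles": [                  # assumed axis-aligned box format
        {"type": "box", "center": [0.5, 0.0, 0.2], "size": [0.1, 0.1, 0.4]},
    ],
}

# Mock waypoint list: a collision-free path is a sequence of joint
# configurations from start to goal.
mock_waypoints = [start, [0.2, -0.35, 0.05, -1.5, 0.0, 1.15, 0.15], goal]
assert mock_waypoints[0] == start and mock_waypoints[-1] == goal
```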
Vision-Language-Action Policies
Run OpenVLA, RT-2, π₀, or Octo directly via API. Pass an observation + language instruction, get the next 7-dim action. Foundation models for embodied AI are finally API-accessible.
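One way the "observation plus instruction in, 7-dim action out" contract could look, with assumed field names and an assumed action convention (end-effector deltas plus a gripper command, common for these models but not confirmed here).

```python
import base64

# Stand-in for a real camera frame; a real client would encode a JPEG.
fake_jpeg = b"\xff\xd8\xff\xe0fake"

# Hypothetical request fields: policy name, language instruction, and a
# base64-encoded image observation.
payload = {
    "policy": "openvla",                       # or "rt2", "pi0", "octo"
    "instruction": "pick up the red block",
    "observation": {"image": base64.b64encode(fake_jpeg).decode()},
}

# A typical 7-dim action: end-effector delta pose (x, y, z, roll, pitch,
# yaw) plus a gripper open/close command.
mock_action = [0.01, -0.02, 0.0, 0.0, 0.0, 0.05, 1.0]
assert len(mock_action) == 7
```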
AnyGrasp Grasp Detection
Detect ranked grasp candidates from a point cloud. Returns position, orientation quaternion, grasp width, and confidence for the top K graspable points.
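A sketch of the grasp-detection exchange under assumed field names; the response entries mirror the fields listed above (position, orientation quaternion, width, confidence), and the confidence values are made up.

```python
# Hypothetical request: an N x 3 point cloud plus how many ranked
# candidates to return.
points = [[0.4, 0.0, 0.1], [0.41, 0.01, 0.1], [0.4, -0.01, 0.11]]
payload = {"point_cloud": points, "top_k": 2}

# Mock response entries, one per candidate grasp.
mock_grasps = [
    {"position": [0.40, 0.00, 0.10],
     "orientation": [0.0, 0.0, 0.0, 1.0],   # quaternion (x, y, z, w)
     "width": 0.04, "confidence": 0.92},
    {"position": [0.41, 0.01, 0.10],
     "orientation": [0.0, 0.0, 0.0, 1.0],
     "width": 0.05, "confidence": 0.81},
]

# Candidates come back ranked by confidence, so the best grasp is the
# first element.
best = max(mock_grasps, key=lambda g: g["confidence"])
assert best is mock_grasps[0]
```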
FAQ
Why a robotics API? Can't I just install MuJoCo?
You can — and you'll spend a week configuring environments, dependencies, and GPU drivers. SciRouter gives you one API call instead. Perfect for university labs and weekend hackers who want to focus on the research, not the DevOps.
What about real robots?
These endpoints are for simulation and policy inference. For real-robot deployment, export the policy weights and run them on-device. The APIs accelerate the research loop, not the production loop.
Which VLA models are supported?
OpenVLA (trained on Open X-Embodiment), RT-2, π₀ (Physical Intelligence), and Octo. You can switch by changing the `policy` parameter. Each has different strengths — OpenVLA is the most open, π₀ is state-of-the-art on dexterous tasks.
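Model switching could then be as small as this sketch, assuming the request shape is shared across policies (the field names here are illustrative, not the documented API):

```python
# Hypothetical shared request body; only the `policy` field changes.
base_request = {
    "instruction": "stack the cups",
    "observation": {"image": "<base64 frame>"},
}

# Build one request per supported model by swapping the policy name.
requests_by_policy = {
    name: {**base_request, "policy": name}
    for name in ("openvla", "rt2", "pi0", "octo")
}
assert all(r["instruction"] == "stack the cups"
           for r in requests_by_policy.values())
```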
Is this production-ready?
The API surface and deterministic mock mode are production-ready. Real GPU workers activate as RunPod endpoints are provisioned. Every response includes a `dispatch_mode` field so you always know which backend served your call.
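Checking that field might look like the sketch below; the two mode values (`"mock"` and anything else meaning a real worker) are assumptions about what `dispatch_mode` can contain.

```python
# Mock response carrying the `dispatch_mode` field described above.
mock_response = {"dispatch_mode": "mock", "action": [0.0] * 7}

# Branch on which backend served the call; the value "mock" is an
# assumed sentinel, not a documented constant.
if mock_response["dispatch_mode"] == "mock":
    backend = "deterministic mock (no GPU worker involved)"
else:
    backend = "real GPU worker"
print(backend)
```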
What about grasping?
AnyGrasp is the current state-of-the-art for 6-DoF grasp detection from point clouds. It handles novel objects without retraining. For higher-fidelity grasping on specific objects, fine-tuning is still recommended.