🤗 LeRobot: Making AI for Robotics more accessible with end-to-end learning https://huggingface.co/docs/lerobot

LeRobot, Hugging Face Robotics Library


LeRobot aims to provide models, datasets, and tools for real-world robotics in PyTorch. The goal is to lower the barrier to entry so that everyone can contribute to and benefit from shared datasets and pretrained models.

🤗 A hardware-agnostic, Python-native interface that standardizes control across diverse platforms, from low-cost arms (SO-100) to humanoids.

🤗 A standardized, scalable LeRobotDataset format (Parquet + MP4 or images) hosted on the Hugging Face Hub, enabling efficient storage, streaming and visualization of massive robotic datasets.

🤗 State-of-the-art policies, shown to transfer to the real world, ready for training and deployment.

🤗 Comprehensive support for the open-source ecosystem to democratize physical AI.

Quick Start

LeRobot can be installed directly from PyPI.

pip install lerobot
lerobot-info

Important

For a detailed installation guide, please see the Installation Documentation.

Robots & Control

Reachy 2 Demo

LeRobot provides a unified Robot class interface that decouples control logic from hardware specifics. It supports a wide range of robots and teleoperation devices.

from lerobot.robots.myrobot import MyRobot

# Connect to a robot
robot = MyRobot(config=...)
robot.connect()

# Read observation and send action
obs = robot.get_observation()
action = model.select_action(obs)
robot.send_action(action)

Supported Hardware: SO100, LeKiwi, Koch, HopeJR, OMX, EarthRover, Reachy2, Gamepads, Keyboards, Phones, OpenARM, Unitree G1.

While these devices are natively integrated into the LeRobot codebase, the library is designed to be extensible. You can easily implement the Robot interface to utilize LeRobot's data collection, training, and visualization tools for your own custom robot.
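The Robot interface boils down to the three calls shown earlier: connect, get_observation, and send_action. As a minimal sketch of a custom robot satisfying that contract (the class name, fake 3-DoF state, and dict layout below are illustrative assumptions, not the actual base-class API in lerobot.robots):

```python
class EchoArmRobot:
    """Hypothetical custom robot exposing the connect /
    get_observation / send_action contract shown above."""

    def __init__(self, config=None):
        self.config = config
        self.connected = False
        self._joint_positions = [0.0, 0.0, 0.0]  # fake 3-DoF state

    def connect(self):
        # Real hardware would open a serial/CAN/network connection here.
        self.connected = True

    def get_observation(self):
        if not self.connected:
            raise RuntimeError("call connect() before reading observations")
        # Observations are plain dicts keyed by feature name.
        return {"joint_positions": list(self._joint_positions)}

    def send_action(self, action):
        if not self.connected:
            raise RuntimeError("call connect() before sending actions")
        # Real hardware would stream these targets to the motor controllers;
        # here we just store them so the next observation reflects them.
        self._joint_positions = list(action["joint_positions"])
        return action
```

Because the rest of the library only talks to robots through these methods, any class exposing them can plug into LeRobot's recording and evaluation loops.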

For detailed hardware setup guides, see the Hardware Documentation.

LeRobot Dataset

To solve the data fragmentation problem in robotics, we utilize the LeRobotDataset format.

  • Structure: Synchronized MP4 videos (or images) for vision and Parquet files for state/action data.
  • HF Hub Integration: Explore thousands of robotics datasets on the Hugging Face Hub.
  • Tools: Seamlessly delete episodes, split by indices/fractions, add/remove features, and merge multiple datasets.

from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Load a dataset from the Hub
dataset = LeRobotDataset("lerobot/aloha_mobile_cabinet")

# Access a single frame by index (video decoding is handled automatically)
frame_index = 0
print(f"{dataset[frame_index]['action'].shape=}\n")

Learn more about it in the LeRobotDataset Documentation.

SoTA Models

LeRobot implements state-of-the-art policies in pure PyTorch, covering Imitation Learning, Reinforcement Learning, and Vision-Language-Action (VLA) models, with more coming soon. It also provides you with the tools to instrument and inspect your training process.

Gr00t Architecture

Training a policy is as simple as running the training script with a configuration:

lerobot-train \
  --policy=act \
  --dataset.repo_id=lerobot/aloha_mobile_cabinet

Category                Models
Imitation Learning      ACT, Diffusion, VQ-BeT, Multitask DiT Policy
Reinforcement Learning  HIL-SERL, TDMPC & QC-FQL (coming soon)
VLAs                    Pi0Fast, Pi0.5, GR00T N1.5, SmolVLA, XVLA

As with the hardware, you can easily implement your own policy, leverage LeRobot's data collection, training, and visualization tools, and share your model on the HF Hub.
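At inference time, a policy only needs to map an observation to an action via select_action, as in the robot example earlier. A hedged sketch of that shape (this rule-based controller, its parameters, and the dict layout are illustrative assumptions, not the actual lerobot policy API):

```python
class GoToTargetPolicy:
    """Hypothetical rule-based policy with a select_action interface:
    steps each joint a fraction of the way toward a fixed target.
    A learned policy would run model inference here instead."""

    def __init__(self, target, gain=0.5):
        self.target = target  # desired joint positions
        self.gain = gain      # fraction of the remaining error to close per step

    def select_action(self, observation):
        current = observation["joint_positions"]
        # Proportional step toward the target for each joint.
        step = [c + self.gain * (t - c) for c, t in zip(current, self.target)]
        return {"joint_positions": step}
```

For example, with a target of [1.0, 1.0, 1.0] and gain 0.5, an observation of all-zero joints yields the action [0.5, 0.5, 0.5]. Any object exposing select_action in this way can be dropped into a control loop like the one in the Robots & Control section.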

For detailed policy setup guides, see the Policy Documentation.

Inference & Evaluation

Evaluate your policies in simulation or on real hardware using the unified evaluation script. LeRobot supports standard benchmarks such as LIBERO and MetaWorld, with more to come.

# Evaluate a policy on the LIBERO benchmark
lerobot-eval \
  --policy.path=lerobot/pi0_libero_finetuned \
  --env.type=libero \
  --env.task=libero_object \
  --eval.n_episodes=10

Learn how to implement your own simulation environment or benchmark and distribute it from the HF Hub by following the EnvHub Documentation.

Citation

If you use LeRobot in your project, please cite the GitHub repository to acknowledge the ongoing development and contributors:

@misc{cadene2024lerobot,
    author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Palma, Steven and Kooijmans, Pepijn and Aractingi, Michel and Shukor, Mustafa and Aubakirova, Dana and Russi, Martino and Capuano, Francesco and Pascal, Caroline and Choghari, Jade and Moss, Jess and Wolf, Thomas},
    title = {LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch},
    howpublished = "\url{https://github.com/huggingface/lerobot}",
    year = {2024}
}

If you are referencing our research or the academic paper, please also cite our ICLR publication:

ICLR 2026 Paper
@inproceedings{cadenelerobot,
  title={LeRobot: An Open-Source Library for End-to-End Robot Learning},
  author={Cadene, Remi and Alibert, Simon and Capuano, Francesco and Aractingi, Michel and Zouitine, Adil and Kooijmans, Pepijn and Choghari, Jade and Russi, Martino and Pascal, Caroline and Palma, Steven and Shukor, Mustafa and Moss, Jess and Soare, Alexander and Aubakirova, Dana and Lhoest, Quentin and Gallou\'edec, Quentin and Wolf, Thomas},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://arxiv.org/abs/2602.22818}
}

Contribute

We welcome contributions from everyone in the community! To get started, please read our CONTRIBUTING.md guide. Whether you're adding a new feature, improving documentation, or fixing a bug, your help and feedback are invaluable. We're incredibly excited about the future of open-source robotics and can't wait to work with you on what's next. Thank you for your support!

SO101 Video

Built by the LeRobot team at Hugging Face with ❤️