DayDreamer: World Models for Physical Robot Learning

Highly data-efficient robot learning with world models



Publication info


Philipp Wu*, Alejandro Escontrela*, Danijar Hafner*, Ken Goldberg, and Pieter Abbeel
Conference on Robot Learning (CoRL) 2022, Proceedings of Machine Learning Research


Learning to walk in the real world in under an hour


The A1 quadruped shown below learns to walk with DayDreamer in ~1 hour, from scratch, with no human intervention.


Abstract


To solve tasks in complex environments, robots need to learn from experience. Deep reinforcement learning is a common approach to robot learning but requires a large amount of trial and error, limiting its deployment in the physical world. As a consequence, many advances in robot learning rely on simulators. Yet learning inside simulators fails to capture the complexity of the real world, is prone to simulator inaccuracies, and produces behaviors that do not adapt to changes in the world. The Dreamer algorithm has recently shown great promise for learning from small amounts of interaction by planning within a learned world model, outperforming pure reinforcement learning in video games. Learning a world model to predict the outcomes of potential actions enables planning in imagination, reducing the amount of trial and error needed in the real environment. However, it is unknown whether Dreamer can facilitate faster learning on physical robots. In this paper, we apply Dreamer to 4 robots to learn online and directly in the real world, without any simulators. Dreamer trains a quadruped robot to roll off its back, stand up, and walk from scratch and without resets in only 1 hour. We then push the robot and find that Dreamer adapts within 10 minutes to withstand perturbations or quickly roll over and stand back up. On two different robotic arms, Dreamer learns to pick and place multiple objects directly from camera images and sparse rewards, approaching human performance. On a wheeled robot, Dreamer learns to navigate to a goal position purely from camera images, automatically resolving ambiguity about the robot's orientation. Using the same hyperparameters across all experiments, we find that Dreamer is capable of online learning in the real world, establishing a strong baseline. We release our infrastructure for future applications of world models to robot learning.
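To make the abstract's notion of planning in imagination concrete, below is a minimal sketch of the loop it describes, using toy numpy stand-ins. All class names, shapes, and update rules here are illustrative assumptions, not the DayDreamer code: the released implementation learns a recurrent latent world model and trains an actor-critic on imagined rollouts, while the robot keeps collecting experience in parallel.

import numpy as np

rng = np.random.default_rng(seed=0)
OBS_DIM, ACT_DIM, LATENT_DIM, HORIZON = 8, 2, 16, 15

class WorldModel:
    """Toy stand-in: encodes observations, predicts next latent state and reward."""
    def __init__(self):
        self.enc = 0.1 * rng.normal(size=(OBS_DIM, LATENT_DIM))
        self.dyn = 0.1 * rng.normal(size=(LATENT_DIM + ACT_DIM, LATENT_DIM))
        self.rew = 0.1 * rng.normal(size=LATENT_DIM)

    def encode(self, obs):
        return np.tanh(obs @ self.enc)

    def imagine_step(self, latent, action):
        # Predict the next latent state and reward without touching the robot.
        nxt = np.tanh(np.concatenate([latent, action]) @ self.dyn)
        return nxt, float(nxt @ self.rew)

    def update(self, batch):
        pass  # in Dreamer: reconstruction, reward, and dynamics losses

class Actor:
    def __init__(self):
        self.w = 0.1 * rng.normal(size=(LATENT_DIM, ACT_DIM))

    def __call__(self, latent):
        return np.tanh(latent @ self.w)

    def update(self, imagined):
        pass  # in Dreamer: actor-critic learning on imagined trajectories

def imagine_rollout(model, actor, latent):
    """Roll the policy forward inside the world model for HORIZON steps."""
    trajectory = []
    for _ in range(HORIZON):
        action = actor(latent)
        latent, reward = model.imagine_step(latent, action)
        trajectory.append((latent, action, reward))
    return trajectory

# Online loop: the robot keeps acting while the model and policy keep training.
model, actor, replay = WorldModel(), Actor(), []
obs = rng.normal(size=OBS_DIM)             # stand-in for a real sensor reading
for step in range(100):
    action = actor(model.encode(obs))       # act with the current policy
    next_obs = rng.normal(size=OBS_DIM)     # stand-in for the real robot's response
    replay.append((obs, action))
    obs = next_obs
    model.update(replay[-16:])              # 1) fit the world model to real experience
    start_obs = replay[rng.integers(len(replay))][0]
    actor.update(imagine_rollout(model, actor, model.encode(start_obs)))  # 2) improve in imagination

The key property the sketch preserves is that step 2 consumes no robot time: once the world model is accurate enough, most of the trial and error happens in imagination rather than on the physical hardware.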


Robots


A1 Quadruped Walking
UR5 Multi-Object Visual Pick Place
XArm Visual Pick and Place
Sphero Ollie Visual Navigation





Acknowledgements


We thank Stephen James and Justin Kerr for helpful suggestions and help with printing the protective shell of the quadruped robot. We thank Ademi Adeniji for help with setting up the XArm robot and Raven Huang for help with setting up the UR5 robot. This work was supported in part by an NSF Fellowship, NSF NRI #2024675, and the Vanier Canada Graduate Scholarship.


How to cite


@article{Wu22CoRL_DayDreamer,
  title = {DayDreamer: World Models for Physical Robot Learning},
  author = {Wu, Philipp and Escontrela, Alejandro and Hafner, Danijar and Goldberg, Ken and Abbeel, Pieter},
  journal = {Proceedings of Machine Learning Research},
  year = {2022},
}