About Me
Michael Lutter
Research Scientist & Technical Lead at Boston Dynamics
Contact: mail(at)mlutter.eu
LinkedIn | Twitter | Google Scholar
Research Interests
Dexterous Manipulation:
I am excited to work on contact-rich and dexterous manipulation tasks that are essential
for reliable real-world deployments. Current methods often fall short on such tasks.
Reinforcement Learning:
I use reinforcement learning to reduce the reliance on real-world demonstrations.
RL requires fewer human demonstrations and can work with sub-optimal demonstrations.
Large-Scale Physics Simulation:
I leverage large-scale physics simulation to scale robot data.
Only synthetic data allows us to truly scale data for humanoid robots in a cost-effective way.
Sim-to-real for Manipulation:
I am driven to prove that sim-to-real can work for dexterous manipulation.
This approach has become the norm for whole-body control in recent years, and
I believe the same is possible for manipulation.
Bio
I lead the Atlas Dexterous Manipulation team at Boston Dynamics. This team works on developing vision-based dexterous manipulation policies for humanoids with reinforcement learning and synthetic data. Previously, I worked on learning reactive quadruped locomotion over slippery terrain with Spot using reinforcement learning.
Before Boston Dynamics, I completed my Ph.D. supervised by Jan Peters at TU Darmstadt. My research focused on inductive biases for robot learning. I completed research internships at Google DeepMind and NVIDIA Research and received multiple awards, including the George Giralt Ph.D. Award (2022) for the best robotics Ph.D. thesis in Europe and the AI Newcomer Award (2019) of the German Informatics Society. In addition, my Ph.D. thesis was published as a book in the Springer STAR series.
I completed a Bachelor's in Engineering Management at the University of Duisburg-Essen and a Master's in Electrical Engineering at TU Munich. During my undergraduate studies, I spent one semester abroad at MIT studying electrical engineering and computer science. Throughout my studies, I received multiple scholarships for academic excellence and ranked among the top three students in my graduation year.
Boston Dynamics
02/25-present - Technical Manager of the Atlas RL Manipulation Team
03/24-02/25 - Tech Lead for Reinforcement Learning
05/22-02/25 - Senior Staff Research Scientist
Dexterous Manipulation
Solving dexterous manipulation with behavioral cloning is challenging, as demonstrations are rarely optimal for such tasks. To leverage sub-optimal demonstrations, I use reinforcement learning and simulation to improve these demonstrations and train vision-based manipulation policies. I am excited to show that sim-to-real for manipulation can be as successful as it is for whole-body control.
Besides leading this project, I am deeply involved in its day-to-day technical work. I have specifically developed the reinforcement learning training infrastructure, set up the physics simulation, and trained policies for insertion & extraction tasks. In addition, I was part of the BD & NVIDIA collaboration on object grasping [video] [blog].
Quadruped Locomotion
As one of the first reinforcement learning hires at Boston Dynamics, I worked on learning quadruped locomotion on challenging slippery terrain. This initial prototype was so convincing that we shipped a reinforcement learning policy to thousands of customer robots. Nowadays, reinforcement learning is the core technology for whole-body control at Boston Dynamics.
Within this project, I led the development of a policy that can traverse slippery terrain where the existing model-based controller struggled. I trained the policies using sim-to-real reinforcement learning and deployed them to Spot. This work was presented at multiple conference workshops, including RSS, CoRL, and IROS. In addition, I contributed to the reinforcement learning and physics simulation infrastructure. I also worked on the Spot controller selection policy that was integrated into the Spot release [blog] and consulted on the 2025 Spot performance on America's Got Talent [video].
News
- (07.Feb 25) - Invited Talk at the MIT Sloan AI Conference on RL for robotics
- (29.Dec 24) - New preprint on the diminishing returns of value expansion [Arxiv]
- (09.Nov 24) - Invited Talk at CoRL WS on Learning for Locomotion
- (15.Jul 24) - Invited Talk at RSS WS on Embodiment-Aware Robot Learning
- (15.Jul 24) - Invited Talk at RSS WS on Structural Priors for Robot Learning
- (15.Jul 24) - Invited Talk at the Freiburg Robotics and Biology conference
- (01.Oct 23) - Invited Talk at IROS WS on Reinforcement Learning
- (01.Aug 23) - Published my Ph.D. thesis in the Springer STAR series [Springer]
- (19.Apr 23) - Accepted IJRR Journal Paper on Deep Lagrangian Networks [IJRR][Arxiv]
- (22.Jan 23) - Accepted ICLR Paper Diminishing Return of Value Expansion [Arxiv]
- (14.Dec 22) - Invited Talk CoRL WS on Inductive Bias in Robot Learning
- (19.Oct 22) - Accepted TPAMI Journal Paper on Value Iteration [IEEE][Arxiv]
- (29.Jun 22) - Received the George Giralt Award for best robotics Ph.D. thesis in Europe
- (23.May 22) - Joined the Boston Dynamics Atlas team to work on RL for locomotion
- (19.Nov 21) - Defended my Ph.D. on Robot Learning [Thesis]
- (29.Sep 21) - New pre-print on learning dynamics models for MPC [Arxiv]
- (10.May 21) - Accepted RSS Paper Robust Value Iteration for Continuous Control [Arxiv]
- (08.May 21) - Accepted ICML Paper Value Iteration in continuous space and time [Arxiv]
- (01.Apr 21) - Accepted ICRA Paper Model-Learning for offline RL [Arxiv]
- (14.Jan 21) - Started my Research Internship with the DeepMind Robotics Team
- (11.Dec 20) - Organizing NeurIPS WS on Inductive Biases and Physically Structured Learning
- (25.Oct 20) - Invited Talk IROS WS on Trends and Advances in ML and Automated Reasoning
- (14.Oct 20) - Robot Juggling paper accepted at CoRL 2020 [Arxiv]
- (01.Oct 20) - My Homepage is finally live :)
