Applying reinforcement learning (RL) to real-world robotics, especially in high-precision industrial assembly, faces major challenges, including low sample efficiency, complex system integration, and difficulties in planning long-horizon tasks. Jianlan Luo is dedicated to overcoming these barriers and advancing the application of RL in robotic automation.
During his PhD, he tackled the challenge of applying RL to high-precision robotic assembly. By creatively integrating RL with impedance control and feedback controllers, he was the first to demonstrate RL’s potential for handling fine manipulation tasks that require real-time responsiveness to external disturbances. At Google X, Luo led the development of SHIELD, the first industrial-grade RL system. SHIELD achieved a 100% success rate on complex assembly tasks within just 2–3 hours of training, outperforming both human experts and classical methods. The system was hailed as “a milestone for RL in industrial applications.”
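To make the combination of RL and impedance control concrete, here is a minimal, self-contained sketch of the idea: the robot tracks a setpoint through a spring-damper (impedance) law, so contact with the environment produces compliant motion, while a learned policy would command the setpoint each control cycle. This is an illustrative toy model, not Luo's actual controller; the gains, dynamics, and the stand-in "policy" are all assumptions.

```python
import numpy as np

def impedance_force(x, x_dot, x_ref, k=200.0, d=20.0):
    """Cartesian impedance law F = K(x_ref - x) - D*x_dot.

    The end-effector behaves like a spring-damper anchored at x_ref,
    so unexpected contact yields compliance rather than rigid tracking.
    """
    return k * (x_ref - x) - d * x_dot

def simulate(goal, steps=2000, dt=0.001, mass=1.0):
    """Forward-simulate a point mass under the impedance law."""
    x, x_dot = np.zeros(3), np.zeros(3)
    for _ in range(steps):
        # In an RL-plus-impedance setup, the policy would emit x_ref
        # every cycle; this stand-in simply commands the goal directly.
        x_ref = goal
        f = impedance_force(x, x_dot, x_ref)
        x_dot += (f / mass) * dt   # Euler integration of the dynamics
        x += x_dot * dt
    return x

final = simulate(np.array([0.05, 0.0, 0.02]))
```

With these gains the closed loop is well damped (damping ratio ≈ 0.7), so the simulated end-effector settles at the goal within the two-second horizon; an RL policy replacing the stand-in would instead modulate `x_ref` to react to sensed disturbances.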
Returning to academia, Luo joined Berkeley Artificial Intelligence Research (BAIR) and developed SERL and its human-in-the-loop extension HIL-SERL, open-source RL frameworks that transformed real-world robot learning. By combining efficient data collection, distributed training, and modular controllers, SERL cut training time for vision-based assembly tasks to as little as 20 minutes while achieving near-perfect success rates, and it significantly outperformed imitation learning in robustness and autonomous recovery.
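The "efficient data collection plus distributed training" pattern can be sketched as a decoupled actor-learner loop: one thread streams transitions into a shared replay buffer while another samples minibatches and performs updates, so data collection never blocks learning. This is a toy illustration of the general pattern, not SERL's actual API; the names and dummy data are assumptions.

```python
import random
import threading
from collections import deque

buffer = deque(maxlen=10_000)  # shared replay buffer
lock = threading.Lock()        # guards concurrent buffer access
updates = []                   # one entry per learner update step

def actor(n_transitions=500):
    # Collect environment transitions (dummy rewards here) and push
    # them into the shared buffer without waiting on the learner.
    for t in range(n_transitions):
        with lock:
            buffer.append((t, random.random()))

def learner(n_updates=200, batch=32):
    # Sample minibatches as soon as enough data has accumulated;
    # a real learner would run a gradient step per minibatch.
    while len(updates) < n_updates:
        with lock:
            if len(buffer) >= batch:
                minibatch = random.sample(list(buffer), batch)
                updates.append(sum(r for _, r in minibatch) / batch)

a = threading.Thread(target=actor)
l = threading.Thread(target=learner)
a.start(); l.start()
a.join(); l.join()
```

Decoupling the two loops is what lets a system keep the robot collecting data at control rate while training proceeds at its own pace on separate compute.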