Learn to Navigate in Dynamic Environments with Normalized LiDAR Scans


Abstract

The latest robot navigation methods for dynamic environments assume that the states of obstacles, including their geometries and trajectories, are fully observable. While these states can be obtained accurately in simulation, estimating them is exceedingly challenging in the real world. A viable alternative is therefore to map raw sensor observations directly to robot actions. However, acquiring skills from high-dimensional raw observations demands massive neural networks and extended training periods. Furthermore, discrepancies between simulated and real environments impede real-world deployment. To overcome these limitations, we propose a Learning framework for robot Navigation in Dynamic environments that uses sequential Normalized LiDAR (LNDNL) scans. We employ a long short-term memory (LSTM) network to propagate historical environmental information from the sequential LiDAR observations. Additionally, we customize a LiDAR-integrated simulator to speed up sampling, and we normalize the geometry of real-world obstacles to match that of simulated objects, thereby bridging the sim-to-real gap. Extensive comparisons with state-of-the-art baselines and real-world deployments demonstrate the potential of learning to navigate in dynamic environments from raw sensor observations with sim-to-real transfer.
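To make the recurrent observation pipeline concrete, the sketch below shows an LSTM policy that maps a sequence of normalized LiDAR scans to a robot action. This is a minimal PyTorch sketch, not the paper's implementation; the scan resolution (360 beams), hidden size, and two-dimensional action (e.g., linear and angular velocity) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of an LSTM policy that maps
# sequences of normalized LiDAR scans to robot actions. Layer sizes,
# scan resolution, and action dimension are illustrative assumptions.
import torch
import torch.nn as nn

class LidarLSTMPolicy(nn.Module):
    def __init__(self, n_beams=360, hidden=128, n_actions=2):
        super().__init__()
        # Compress each raw scan before feeding it to the recurrent core.
        self.encoder = nn.Sequential(nn.Linear(n_beams, hidden), nn.ReLU())
        # The LSTM propagates historical environmental information across scans.
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        # Map the final hidden state to continuous actions, e.g. (v, omega).
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, scans, state=None):
        # scans: (batch, seq_len, n_beams), ranges normalized to [0, 1].
        z = self.encoder(scans)
        out, state = self.lstm(z, state)
        return self.head(out[:, -1]), state

# Usage: a batch of 4 sequences, each with 8 consecutive normalized scans.
policy = LidarLSTMPolicy()
actions, _ = policy(torch.rand(4, 8, 360))  # actions: shape (4, 2)
```

Keeping the recurrent state across control steps, rather than re-encoding a full history each time, is what lets a policy like this run at sensor rate on a real robot.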

Publication
In 2024 IEEE International Conference on Robotics and Automation (ICRA)
Wei Zhu
Postdoc

My research interests include deep reinforcement learning, snake robots, wheeled bipedal robots, robotic arms, quadruped robots, and autonomous navigation.