R$^2$LIVE: A Robust, Real-Time, LiDAR-Inertial-Visual Tightly-Coupled State Estimator and Mapping

Introduction

R2LIVE is a robust, real-time, tightly-coupled multi-sensor fusion framework that fuses measurements from LiDAR, an inertial sensor, and a visual camera to achieve robust and accurate state estimation. By taking advantage of the measurements from all individual sensors, our algorithm is robust to various visual-failure and LiDAR-degenerated scenarios, and is able to run in real time on an on-board computation platform, as shown by extensive experiments conducted in indoor, outdoor, and mixed environments of different scales.
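The key idea behind tight coupling is that raw residuals from each sensor enter a single shared filter update, rather than each sensor producing a separate odometry that is fused afterwards. The sketch below is a deliberately simplified, hypothetical illustration of that pattern (a 1-D linear toy state with made-up noise values), not R2LIVE's actual implementation, which is a filter-based estimator operating on manifold states: IMU readings drive high-rate prediction, while LiDAR-like and camera-like observations are absorbed through the same Kalman update.

```python
import numpy as np

# Toy sketch of tightly-coupled fusion (illustrative only, not R2LIVE's code).
# State x = [position, velocity] in 1-D. IMU acceleration drives prediction;
# LiDAR-like and camera-like position observations share one update routine.
class ToyFusionFilter:
    def __init__(self):
        self.x = np.zeros(2)   # [p, v]
        self.P = np.eye(2)     # state covariance

    def predict(self, accel, dt, accel_noise=0.1):
        # Constant-acceleration propagation driven by the IMU reading.
        F = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([0.5 * dt**2, dt])
        self.x = F @ self.x + B * accel
        Q = np.outer(B, B) * accel_noise**2
        self.P = F @ self.P @ F.T + Q

    def update(self, z, meas_noise):
        # Both sensors observe position here; only their noise levels differ.
        H = np.array([[1.0, 0.0]])
        R = np.array([[meas_noise**2]])
        y = z - H @ self.x                    # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P

f = ToyFusionFilter()
f.predict(accel=0.2, dt=0.01)                    # high-rate IMU propagation
f.update(z=np.array([0.001]), meas_noise=0.02)   # LiDAR-like observation
f.update(z=np.array([0.002]), meas_noise=0.05)   # camera-like observation
print(f.x)
```

In the real system the state lives on a manifold and the residuals are, roughly, LiDAR point-to-plane distances and visual reprojection errors, but the fusion pattern is the same: one shared state, per-sensor updates. This is what lets one healthy sensor carry the estimate when another degenerates.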

The reconstructed 3D maps of the HKU Main Building are shown in (d), and the detailed point clouds with the corresponding panoramic images are shown in (a) and (b). (c) shows that our algorithm can close the loop by itself (returning to the starting point) without any additional processing (e.g., loop closure detection). In (e), we overlay our map on the satellite image to further examine the accuracy of our system.

Our related video is now available on YouTube.
Jiarong Lin
Ph.D. candidate in Robotics🤖

My research interests include simultaneous localization and mapping (SLAM), multi-sensor (i.e., LiDAR-Inertial-Visual) fusion, and 3D reconstruction. My popular works include R3LIVE, FAST-LIO, loam-livox, R2LIVE, and ImMesh🆕.