Seminar: Toward Solving Occlusion and Sparsity in Detecting Objects in Point Clouds, 1st September, 1pm

When: Friday, 1 September, 1pm AEST

Where: This seminar will be presented both in person at the Rose Street Seminar area (J04) and online via Zoom. RSVP

Speaker: Minh-Quan Dao

Title: Toward Solving Occlusion and Sparsity in Detecting Objects in Point Clouds 

Abstract:

Autonomous driving is an exciting technology with the potential to make transportation safer and more efficient. Object detection plays a fundamental role in providing inputs to safety-critical modules such as navigation and motion planning. Because autonomous vehicles (AVs) operate in a 3D world, objects must be detected in 3D. To this end, AVs are equipped with advanced sensing systems, essentially made up of cameras, LiDARs, and RADARs. While it remains an open question which types of sensors are sufficient for 3D object detection, it is undeniable that LiDAR-based methods achieve superior performance on public benchmarks compared to other modalities, especially cameras. The reason is that LiDARs provide accurate depth, which RGB images lack due to perspective projection, at a density that RADAR cannot yet match. However, LiDARs have two inherent drawbacks: (i) the space behind each LiDAR point is unobservable, and (ii) the void between two laser beams grows with distance from the LiDAR.

These drawbacks make LiDAR measurements, i.e., point clouds, severely affected by occlusion and sparsity. The result is low-fidelity measurements of some foreground objects, making them easy for detection models to miss.

A potential solution to this issue is to leverage point clouds collected from multiple perspectives. Each object appears differently in a point cloud depending on its distance and viewing angle relative to the LiDAR and on the objects surrounding it. Aggregating point clouds collected from different perspectives can therefore minimize the low-fidelity regions present in each of them. Such perspectives can be derived from (i) the ego vehicle's motion or (ii) the presence of other vehicles at different locations in the scene. This talk identifies the challenges in using either of these methods to gain multiple perspectives and presents a detection model that unifies them both.
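For readers unfamiliar with the aggregation idea, the toy sketch below illustrates it in broad strokes; it is an illustrative assumption, not the speaker's method. Each point cloud is mapped into a shared world frame using its (hypothetical) 4x4 sensor pose, and the transformed clouds are concatenated:

    # Toy illustration only: aggregate point clouds from several viewpoints
    # by transforming each into a common world frame, then concatenating.
    # The clouds and poses here are placeholders, not real sensor data.
    import numpy as np

    def aggregate_clouds(clouds, poses):
        """clouds: list of (N_i, 3) arrays, each in its own sensor frame.
        poses: list of (4, 4) sensor-to-world rigid transforms."""
        merged = []
        for points, T in zip(clouds, poses):
            # Append a homogeneous coordinate so the 4x4 pose can be applied.
            homogeneous = np.hstack([points, np.ones((len(points), 1))])
            merged.append((homogeneous @ T.T)[:, :3])  # now in world frame
        # A single denser cloud with fewer occluded or sparse regions.
        return np.vstack(merged)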

Bio:

Minh-Quan Dao obtained his M.Sc. in Robotics from École Centrale de Nantes (ECN), France, in 2020. Since then, he has continued his studies at ECN, pursuing a Ph.D. on 3D object detection for autonomous driving. His research focuses on the challenges that occlusion and sparsity in point clouds collected by automotive LiDARs pose to 3D object detection models. In May 2023, he began a three-month visit to the ITS team at the ACFR to research collaborative perception and unsupervised 3D object detection.

Contacts

Australian Centre for Robotics
info@acfr.usyd.edu.au