3D Object Detection for Autonomous Vehicles Perception Based on a Combination of Lidar, Radar, and Image Data

Sahba, Ramin

One of the most highly regarded and actively researched topics in artificial intelligence and machine learning is object detection, and its use is especially important in autonomous vehicles. The various methods used to detect objects are based on different types of data, including image, radar, and lidar. Using point clouds for 3D object detection is a newer approach proposed in some recent work. One recently presented efficient method is the PointPillars network, an encoder that learns from the data in a point cloud and organizes it as a representation in vertical columns (pillars), which can then be used for 3D object detection. In this work, we develop a high-performance model for 3D object detection based on the PointPillars network that exploits a combination of lidar, radar, and image data for autonomous vehicle perception. We use lidar, radar, and image data from the nuScenes dataset to predict 3D boxes for three object classes: car, pedestrian, and bus. First, we obtain a probability map for each class from each type of data. Then we compute a fused probability map as the element-wise product of the per-sensor probability maps. We also propose a method to combine the different input modalities (lidar, radar, image) using a weighting system, whose output serves as the input to the encoder. To measure and compare results, we use the nuScenes detection score (NDS), a combined metric for the detection task, as well as mean Average Precision (mAP). Results show that increasing the number of lidar sweeps and combining them with radar and image data significantly improves the performance of the 3D object detector.
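The element-wise fusion of per-sensor probability maps described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the array shapes, sensor names, and the exponent-based weighting scheme are assumptions introduced here for clarity.

```python
import numpy as np

def fuse_probability_maps(prob_maps, weights=None):
    """Fuse per-sensor class probability maps by element-wise product.

    prob_maps: dict mapping sensor name -> array of shape (C, H, W),
               one probability map per class (here C = 3: car,
               pedestrian, bus). Shapes and sensor names are
               illustrative assumptions.
    weights:   optional dict of per-sensor weights; each map is raised
               to its weight before the product (a common weighted-
               fusion heuristic, assumed here for illustration).
    """
    fused = None
    for sensor, pm in prob_maps.items():
        w = 1.0 if weights is None else weights[sensor]
        contrib = pm ** w
        # element-wise product across sensors
        fused = contrib if fused is None else fused * contrib
    return fused

# Toy example: 3 classes on a 2x2 spatial grid for three sensors.
rng = np.random.default_rng(0)
maps = {s: rng.uniform(0.1, 0.9, size=(3, 2, 2))
        for s in ("lidar", "radar", "image")}
fused = fuse_probability_maps(
    maps, weights={"lidar": 1.0, "radar": 0.5, "image": 0.5})
print(fused.shape)  # (3, 2, 2)
```

With unit weights this reduces to the plain element-wise product of the three maps, so a cell scores high only when all modalities agree; the weights let a less reliable sensor contribute more softly.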

Electrical and Computer Engineering