SELF-CALIBRATING FUSION OF MULTI-SENSOR SYSTEM FOR AUTONOMOUS VEHICLE OBSTACLE DETECTION AND TRACKING
dc.contributor.advisor | Cheok, Ka C. | |
dc.contributor.author | Tian, Kaiqiao | |
dc.contributor.other | Mirza, Khalid | |
dc.contributor.other | Radovnikovich, Micho Tomislav | |
dc.contributor.other | Louie, Wing-Yue Geoffrey | |
dc.contributor.other | Cesmelioglu, Aycil | |
dc.date.accessioned | 2024-10-02T13:31:38Z | |
dc.date.available | 2024-10-02T13:31:38Z | |
dc.date.issued | 2023-01-01 | |
dc.description.abstract | Mobile robots have gained significant attention due to their ability to undertake complex tasks in various applications, ranging from autonomous vehicles to robotics and augmented reality. To achieve safe and efficient navigation, these robots rely on sensor data from RADAR, LiDAR, and cameras to understand their surroundings. However, integrating data from these sensors presents challenges, including data inconsistencies and sensor limitations. This thesis proposes a novel LiDAR and camera sensor fusion algorithm that addresses these challenges, enabling more accurate and reliable perception for mobile robots and autonomous vehicles. The proposed algorithm leverages the unique strengths of both LiDAR and camera sensors to create a holistic representation of the environment. It adopts a multi-sensor data fusion (MSDF) approach, combining the complementary characteristics of LiDAR's precise 3D point cloud data and the rich visual information provided by cameras. The fusion process involves sensor data registration, calibration, and synchronization, ensuring accurate alignment and temporal coherence. The algorithm introduces a robust data association technique that matches LiDAR points with visual features extracted from camera images. By fusing these data, the algorithm enhances object detection and recognition capabilities, enabling the robot to perceive the environment with higher accuracy and efficiency. Additionally, the fusion technique compensates for sensor-specific limitations, such as LiDAR's susceptibility to adverse weather conditions and the camera's vulnerability to lighting changes, resulting in a more reliable perception system. The thesis contributes to advancing mobile robot perception by providing a comprehensive and practical LiDAR and camera sensor fusion algorithm. This approach has significant implications for autonomous vehicles, robotics, and augmented reality applications, where accurate and reliable perception is vital for successful navigation and task execution. By addressing the limitations of individual sensors and offering a more unified and coherent perception system, the proposed algorithm paves the way for safer, more efficient, and intelligent mobile robot solutions in various real-world settings. | |
dc.identifier.uri | https://hdl.handle.net/10323/18237 | |
dc.relation.department | Electrical and Computer Engineering | |
dc.subject | Autonomous vehicles | |
dc.subject | LiDAR and Camera sensor fusion | |
dc.subject | Multi-sensor data fusion | |
dc.subject | Self-calibration algorithm | |
dc.subject | Sensor data misalignment | |
dc.subject | Tracking algorithm | |
dc.title | SELF-CALIBRATING FUSION OF MULTI-SENSOR SYSTEM FOR AUTONOMOUS VEHICLE OBSTACLE DETECTION AND TRACKING |
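To illustrate the LiDAR-camera data association step described in the abstract, the following is a minimal sketch, not taken from the thesis: it projects 3D LiDAR points into the camera image plane using assumed extrinsic (R, t) and intrinsic (K) calibration and greedily matches each projected point to the nearest detected 2D image feature. All function names, matrices, the pixel threshold, and the toy data are illustrative placeholders, not the author's implementation.

# Minimal LiDAR-to-camera projection and association sketch (assumptions noted above).
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project Nx3 LiDAR points into pixel coordinates.

    points_lidar : (N, 3) points in the LiDAR frame.
    R, t         : assumed extrinsic rotation (3x3) and translation (3,), LiDAR -> camera.
    K            : assumed camera intrinsic matrix (3x3).
    Returns (M, 2) pixel coordinates and the indices of points in front of the camera.
    """
    points_cam = points_lidar @ R.T + t          # transform into the camera frame
    in_front = points_cam[:, 2] > 0.0            # keep points with positive depth
    points_cam = points_cam[in_front]
    pixels_h = points_cam @ K.T                  # homogeneous pixel coordinates
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]  # perspective divide
    return pixels, np.flatnonzero(in_front)

def associate(pixels, features, max_dist=5.0):
    """Greedy nearest-neighbour association of projected points to 2D features.

    Returns (point_index, feature_index) pairs closer than max_dist pixels.
    """
    matches = []
    for i, px in enumerate(pixels):
        d = np.linalg.norm(features - px, axis=1)
        j = int(np.argmin(d))
        if d[j] < max_dist:
            matches.append((i, j))
    return matches

if __name__ == "__main__":
    # Toy data: identity extrinsics and a simple pinhole intrinsic matrix.
    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.zeros(3)
    lidar_points = np.array([[0.5, 0.2, 10.0], [-1.0, 0.0, 8.0]])
    image_features = np.array([[355.0, 254.0], [232.0, 240.0]])
    pixels, idx = project_lidar_to_image(lidar_points, R, t, K)
    print(associate(pixels, image_features))

In a self-calibrating pipeline such as the one the abstract describes, the extrinsic and intrinsic parameters would be refined online rather than fixed as they are in this toy example.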
Files
Original bundle
- Name: Tian_oakland_0446E_10366.pdf
- Size: 8.31 MB
- Format: Adobe Portable Document Format