In recent years, LiDAR (Light Detection and Ranging) technology has played a pivotal role in robotics for advanced automation and navigation. Leveraging the Time of Flight (TOF) principle, LiDAR provides high-precision spatial data for obstacle avoidance, object detection, and 2D environmental mapping. However, conventional 360-degree rotational LiDAR systems often introduce inefficiencies due to redundant data collection in areas of limited interest.
This study proposes a novel non-continuous rotational LiDAR system integrated with the Maximus Tracking Algorithm to enable selective data capture at predefined positions. The algorithm is used to enhance edge detection and object clustering while minimizing noise. Validation against depth camera data confirms accurate identification of natural landmarks such as trees and walls.
Experimental results demonstrate that the proposed system achieves a 30% reduction in mapping time, a 15% increase in object detection precision, and reliable point-to-point AGV (Automated Guided Vehicle) navigation with return capability in outdoor environments. This integration of hardware and algorithmic innovation advances LiDAR-based mapping efficiency for real-world robotic applications, including autonomous navigation and disaster response scenarios.
Keywords: LiDAR, Maximus Tracking Algorithm, AGV Rover, SLAM.