This research presents a real-time semantic mapping system that integrates object detection with Simultaneous Localization and Mapping (SLAM) for indoor robotic navigation. The system fuses data from a camera and a LiDAR, enabling the generation of maps that contain both geometric and semantic information. By employing the YOLOv3 deep learning model for object detection and the Gmapping algorithm for SLAM, the system accurately identifies and localizes objects such as doors, bicycles, and trash cans within the environment. The resulting semantic map enhances the robot's ability to navigate and interact effectively with its surroundings. The system is designed to achieve real-time performance while remaining computationally efficient. Experimental results demonstrate that the system robustly fuses object detection with SLAM, producing a comprehensive and detailed representation of the environment. Future work will focus on incorporating scene classification techniques to provide higher-level contextual information about the environment.
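To make the core fusion step concrete, the sketch below illustrates how a detection expressed in the robot's local frame (a range and bearing, e.g. obtained by associating a YOLOv3 bounding box with LiDAR returns) can be projected into the map frame using the SLAM pose estimate and accumulated into a semantic map. This is a minimal, self-contained illustration under assumed interfaces; all class names, fields, and the flat dictionary map structure are hypothetical and do not describe the paper's actual pipeline.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    """Robot pose in the map frame, as estimated by SLAM (e.g. Gmapping)."""
    x: float
    y: float
    theta: float  # heading in radians

@dataclass
class Detection:
    """An object detection in the robot's local frame (hypothetical interface).

    `label` comes from the detector (e.g. a YOLOv3 class such as 'door');
    range and bearing are assumed to come from associating the bounding
    box with LiDAR returns."""
    label: str
    range_m: float
    bearing_rad: float

def detection_to_map(pose: Pose2D, det: Detection) -> tuple[float, float]:
    """Project a local-frame (range, bearing) detection into map coordinates."""
    angle = pose.theta + det.bearing_rad
    return (pose.x + det.range_m * math.cos(angle),
            pose.y + det.range_m * math.sin(angle))

# Semantic map: label -> list of (x, y) positions in the map frame.
semantic_map: dict[str, list[tuple[float, float]]] = {}

def insert_detection(pose: Pose2D, det: Detection) -> None:
    semantic_map.setdefault(det.label, []).append(detection_to_map(pose, det))

# Example: robot at (2, 1) facing 90 degrees sees a door 3 m away, 10 degrees left.
insert_detection(Pose2D(2.0, 1.0, math.pi / 2),
                 Detection("door", 3.0, math.radians(10)))
print(semantic_map)
```

In a full system, the pose and detections would arrive as live streams (e.g. ROS topics), and repeated observations of the same object would need to be clustered or otherwise deduplicated before being written into the map; those steps are omitted here for brevity.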