Abstract:
Autonomous mining trucks frequently operate in low-light environments, where accurate and reliable object detection is critical to operational safety. However, the open-pit mining environment is highly complex, often involving large-scale occlusion and multi-scale interactions between trucks and other objects such as personnel, which poses significant challenges for detection under low-light conditions. To address these issues, we propose LECODNet, an occluded object detection network tailored for autonomous mining trucks in low-light environments. First, a Multi-Receptive Field Edge Perception Module is designed to extract edge features rich in local detail and global semantic spatial information, enhancing object boundary representation. Second, an Edge-Guided Feature Enhancement Module is introduced that uses the extracted edge features as structural priors to guide the model's focus toward object regions. Third, a Channel-Aware Mapping Attention mechanism is embedded to strengthen the expressive power of object features. Finally, a Bidirectional Spatial C2f module is incorporated into the neck to capture spatial context in both horizontal and vertical directions, improving the model's ability to perceive spatial structural cues. Extensive experiments on the self-constructed LAOMD dataset for low-light occluded object detection demonstrate that LECODNet achieves 83.5% mAP@0.5 and 71.2% mAP@0.5:0.95, surpassing the YOLOv8 baseline by 3.3% and 2.3%, respectively. Compared with the state-of-the-art low-light occlusion detection model FeatEnhancer, LECODNet improves mAP@0.5 and mAP@0.5:0.95 by 1.9% and 1.5%, respectively. These results indicate that the proposed method effectively enhances object-region perception and feature representation while improving spatial structural modeling, significantly boosting detection performance under low-light occluded conditions.