Occluded object detection network for autonomous mining trucks under low-light conditions

     

    Abstract: Autonomous mining trucks frequently operate in low-light environments, where accurate and reliable object detection is a prerequisite for safe operation. However, open-pit mining scenes are highly complex: targets are often heavily occluded, and multi-scale feature interference arises between the trucks and other objects to be detected, such as personnel, posing significant challenges for occluded object detection under low-light conditions. To address these issues, LECODNet, an occluded object detection network for autonomous mining trucks in low-light environments, is proposed. First, a Multi-Receptive-Field Edge Perception Module is designed to extract edge features rich in local detail and global semantic position information, strengthening object boundary representation. Second, an Edge-Guided Feature Enhancement Module is constructed that uses the extracted edge features as structural priors to guide the model's focus toward object regions. Third, a Channel-Aware Mapping Attention mechanism is embedded on this basis to enhance the expressiveness of object features. Finally, a Bidirectional Spatial-Aware C2f module is incorporated into the network neck to capture spatial context in both the horizontal and vertical directions, improving the model's perception of objects' spatial structure. Extensive experiments on LAOMD, a self-constructed dataset for occluded object detection by autonomous mining trucks in low-light environments, show that LECODNet achieves 83.5% mAP@0.5 and 71.2% mAP@0.5:0.95, surpassing the YOLOv8 baseline by 3.3% and 2.3%, respectively, and the state-of-the-art low-light occluded object detection model FeatEnhancer by 1.9% and 1.5%. These results indicate that the proposed method effectively enhances object-region perception and feature representation while strengthening the modeling of spatial structural relations, significantly improving detection performance under low-light, occluded conditions.
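The abstract describes a Bidirectional Spatial-Aware C2f that captures spatial context along the horizontal and vertical directions, but gives no implementation details. As a minimal sketch of the underlying directional-pooling idea only (in the spirit of coordinate-attention-style designs, not the authors' actual module), a feature map can be gated by context vectors pooled along each spatial axis. All names here (`bidirectional_spatial_gate`, the sigmoid gating scheme) are hypothetical illustrations:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bidirectional_spatial_gate(x):
    """Reweight a (C, H, W) feature map with directional context.

    Hypothetical sketch: pools context along the width (horizontal
    direction) and the height (vertical direction), then applies both
    as multiplicative gates. Not the paper's actual C2f design.
    """
    # Horizontal context: average over width -> one value per row.
    h_ctx = x.mean(axis=2, keepdims=True)   # shape (C, H, 1)
    # Vertical context: average over height -> one value per column.
    v_ctx = x.mean(axis=1, keepdims=True)   # shape (C, 1, W)
    # Broadcast the two gates back over the full map and combine.
    return x * sigmoid(h_ctx) * sigmoid(v_ctx)

x = np.random.randn(8, 16, 16).astype(np.float32)
y = bidirectional_spatial_gate(x)
print(y.shape)  # (8, 16, 16)
```

Because each gate lies in (0, 1), the output keeps the input's shape while attenuating positions whose row and column context is weak, which is one simple way to inject row-wise and column-wise structural cues.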

     
