{"title":"A lightweight multi-scaled semantic segmentation for underground mine images","authors":"Yuanbin Wang, Wenqing He, Qianxi Li, Xiaolong Wang, Wenjian Chang","doi":"10.1016/j.patrec.2025.08.019","DOIUrl":null,"url":null,"abstract":"<div><div>Analyzing complex scene images in underground mines is crucial for ensuring safe coal mining. Semantic segmentation of underground objects helps capture the complex information of the underground mine. However, most existing approaches lose the details of small objects and do not exploit features across different scales effectively. At the same time, long model runtimes hinder the timely delivery of segmentation results for mining tasks. Thus, a lightweight semantic segmentation method based on the DeepLabV3+ model is proposed for image segmentation. First, to reduce model complexity while improving segmentation performance, MobileNetV2 is used as the backbone network. Second, an Atrous Spatial Pyramid Mixed Pooling (ASPMP) module with mixed pooling is presented, which leverages multi-scale features extracted from objects of different scales in the mine. Meanwhile, the atrous (dilation) rates of ASPMP are optimized for better extraction of smaller underground objects. Finally, in the decoder stage, a Feature Fusion Module (FFM) containing a channel attention mechanism is constructed to fuse high-level and low-level features, and a residual network structure further reduces the computational load. Experimental results show that the proposed method substantially reduces the number of parameters and the computational cost while maintaining segmentation precision. The proposed method achieves a balance between accuracy and efficiency on the CUMT-CMUID and Cityscapes datasets.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"197 ","pages":"Pages 325-331"},"PeriodicalIF":3.3000,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition Letters","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167865525002983","RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
About the journal:
Pattern Recognition Letters aims at rapid publication of concise articles of broad interest in pattern recognition.
Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.
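The record does not specify the internals of the paper's Feature Fusion Module; purely as an illustration of the general idea it describes (upsampling high-level decoder features, concatenating them with low-level features, and reweighting channels with a squeeze-and-excitation-style attention), a minimal NumPy sketch follows. All shapes, weights, and function names here are hypothetical and not taken from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w_down, w_up):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) map."""
    squeeze = feat.mean(axis=(1, 2))                             # global average pool -> (C,)
    excite = sigmoid(w_up @ np.maximum(w_down @ squeeze, 0.0))   # FC -> ReLU -> FC -> sigmoid
    return feat * excite[:, None, None]                          # reweight each channel

def fuse(low_feat, high_feat, w_down, w_up):
    """Fuse decoder features: nearest-neighbor upsample, concatenate, attend."""
    scale = low_feat.shape[1] // high_feat.shape[1]
    up = high_feat.repeat(scale, axis=1).repeat(scale, axis=2)   # upsample to low-level size
    fused = np.concatenate([low_feat, up], axis=0)               # stack along channel axis
    return channel_attention(fused, w_down, w_up)

# Toy example with made-up channel counts and a reduction ratio of 4.
rng = np.random.default_rng(0)
low = rng.standard_normal((24, 32, 32))    # low-level features from the backbone
high = rng.standard_normal((96, 8, 8))     # high-level features from the ASPP-like module
c, r = 24 + 96, 4
w_down = rng.standard_normal((c // r, c)) * 0.1
w_up = rng.standard_normal((c, c // r)) * 0.1
out = fuse(low, high, w_down, w_up)
print(out.shape)  # (120, 32, 32)
```

In a real implementation the upsampling would typically be bilinear and the attention weights learned; this sketch only shows the data flow of channel-attention feature fusion.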