{"title":"基于YOLOv8的全口病变检测改进算法","authors":"Xinchen Jiao , Shanshan Gao , Faqiang Huang , WenHan Dou , YuanFeng Zhou , Caiming Zhang","doi":"10.1016/j.gmod.2025.101302","DOIUrl":null,"url":null,"abstract":"<div><div>In medical imaging detection of oral Cone Beam Computed Tomography (CBCT), there exist tiny lesions that are challenging to detect with low accuracy. The existing detection models are relatively complex. To address this, this paper presents a dual-stage YOLO detection method improved based on YOLOv8. Specifically, we first reconstruct the backbone network based on MobileNetV3 to enhance computational speed and efficiency. Second, we improve detection accuracy from three aspects: we design a composite feature fusion network to enhance the model’s feature extraction capability, addressing the issue of decreased detection accuracy for small lesions due to the loss of shallow information during the fusion process; we further combine spatial and channel information to design the C2f-SCSA module, which delves deeper into the lesion information. To tackle the problem of limited types and insufficient samples of lesions in existing CBCT images, our team collaborated with a professional dental hospital to establish a high-quality dataset, which includes 15 types of lesions and over 2000 accurately labeled oral CBCT images, providing solid data support for model training. Experimental results indicate that the improved method enhances the accuracy of the original algorithm by 3.5 percentage points, increases the recall rate by 4.7 percentage points, and raises the mean Average Precision (mAP) by 3.3 percentage points, a computational load of only 7.6 GFLOPs. This demonstrates a significant advantage in intelligent diagnosis of full-mouth lesions while improving accuracy and reducing computational load.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"142 ","pages":"Article 101302"},"PeriodicalIF":2.2000,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An improved algorithm for full-mouth lesion detection based on YOLOv8\",\"authors\":\"Xinchen Jiao , Shanshan Gao , Faqiang Huang , WenHan Dou , YuanFeng Zhou , Caiming Zhang\",\"doi\":\"10.1016/j.gmod.2025.101302\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In medical imaging detection of oral Cone Beam Computed Tomography (CBCT), there exist tiny lesions that are challenging to detect with low accuracy. The existing detection models are relatively complex. To address this, this paper presents a dual-stage YOLO detection method improved based on YOLOv8. Specifically, we first reconstruct the backbone network based on MobileNetV3 to enhance computational speed and efficiency. Second, we improve detection accuracy from three aspects: we design a composite feature fusion network to enhance the model’s feature extraction capability, addressing the issue of decreased detection accuracy for small lesions due to the loss of shallow information during the fusion process; we further combine spatial and channel information to design the C2f-SCSA module, which delves deeper into the lesion information. 
To tackle the problem of limited types and insufficient samples of lesions in existing CBCT images, our team collaborated with a professional dental hospital to establish a high-quality dataset, which includes 15 types of lesions and over 2000 accurately labeled oral CBCT images, providing solid data support for model training. Experimental results indicate that the improved method enhances the accuracy of the original algorithm by 3.5 percentage points, increases the recall rate by 4.7 percentage points, and raises the mean Average Precision (mAP) by 3.3 percentage points, a computational load of only 7.6 GFLOPs. This demonstrates a significant advantage in intelligent diagnosis of full-mouth lesions while improving accuracy and reducing computational load.</div></div>\",\"PeriodicalId\":55083,\"journal\":{\"name\":\"Graphical Models\",\"volume\":\"142 \",\"pages\":\"Article 101302\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2025-10-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Graphical Models\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1524070325000499\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Graphical Models","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1524070325000499","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
An improved algorithm for full-mouth lesion detection based on YOLOv8
In oral Cone Beam Computed Tomography (CBCT) imaging, tiny lesions are difficult to detect and are often identified with low accuracy, while existing detection models are relatively complex. To address this, this paper presents a dual-stage YOLO detection method built on YOLOv8. Specifically, we first reconstruct the backbone network based on MobileNetV3 to improve computational speed and efficiency. Second, we improve detection accuracy from three aspects: we design a composite feature fusion network that strengthens the model's feature extraction capability and addresses the drop in detection accuracy for small lesions caused by the loss of shallow information during fusion; and we combine spatial and channel information to design the C2f-SCSA module, which mines lesion information more deeply. To tackle the limited lesion types and insufficient samples in existing CBCT images, our team collaborated with a professional dental hospital to build a high-quality dataset containing 15 lesion types and over 2000 accurately labeled oral CBCT images, providing solid data support for model training. Experimental results show that the improved method raises the accuracy of the original algorithm by 3.5 percentage points, the recall by 4.7 percentage points, and the mean Average Precision (mAP) by 3.3 percentage points, with a computational load of only 7.6 GFLOPs. The method therefore offers a clear advantage for intelligent diagnosis of full-mouth lesions, improving accuracy while reducing computational load.
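The abstract describes the C2f-SCSA module as combining spatial and channel information on top of a C2f-style block. The sketch below shows one common way such a spatial-channel attention block can be written in PyTorch; it is a minimal illustration only. The class names (ChannelAttention, SpatialAttention, SCSABlock), the reduction ratio, and the ordering of the two attention steps are assumptions for exposition and do not reproduce the authors' implementation.

```python
# Illustrative sketch of a spatial + channel attention block of the kind a
# C2f-SCSA-style module might attach after a C2f stage. Names and hyperparameters
# are assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))  # reweight channels


class SpatialAttention(nn.Module):
    """Spatial attention built from pooled channel statistics."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.act = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)      # per-pixel mean over channels
        max_map, _ = x.max(dim=1, keepdim=True)    # per-pixel max over channels
        attn = self.act(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                            # reweight spatial locations


class SCSABlock(nn.Module):
    """Hypothetical wrapper: channel attention followed by spatial attention."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel_attn = ChannelAttention(channels)
        self.spatial_attn = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial_attn(self.channel_attn(x))


if __name__ == "__main__":
    feats = torch.randn(1, 256, 40, 40)    # a feature map from a C2f-style stage
    print(SCSABlock(256)(feats).shape)     # torch.Size([1, 256, 40, 40])
```

In a YOLO-style detector, a block like this would typically be inserted after selected backbone or neck stages so that the attended feature maps feed the detection heads; the exact placement used in the paper is not specified in the abstract.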
Journal introduction:
Graphical Models is recognized internationally as a highly rated, top-tier journal focused on the creation, geometric processing, animation, and visualization of graphical models and on their applications in engineering, science, culture, and entertainment. GMOD provides its readers with thoroughly reviewed and carefully selected papers that disseminate exciting innovations, teach rigorous theoretical foundations, propose robust and efficient solutions, or describe ambitious systems or applications across a variety of topics.
We invite papers in five categories: research (contributions of novel theoretical or practical approaches or solutions), survey (opinionated views of the state of the art and challenges in a specific topic), system (the architecture and implementation details of an innovative architecture for a complete system that supports model/animation design, acquisition, analysis, or visualization), application (description of a novel application of known techniques and evaluation of its impact), or lecture (an elegant and inspiring perspective on previously published results that clarifies and teaches them in a new way).
GMOD offers its authors an accelerated review, feedback from experts in the field, immediate online publication of accepted papers, no restriction on color or length (when justified by the content) in the online version, and broad promotion of published papers. A prestigious group of editors, selected from among the premier international researchers in their fields, oversees the review process.