Title: YOLO-DKR: Differentiable architecture search based on kernel reusing for object detection
Authors: Yu Xue, Chenhang Yao, Mohamed Wahib, Moncef Gabbouj
Journal: Information Sciences, Volume 713, Article 122180
DOI: 10.1016/j.ins.2025.122180
Published: 2025-04-08
YOLO-DKR: Differentiable architecture search based on kernel reusing for object detection
In recent years, neural architecture search (NAS) has been used to find high-performance convolutional neural networks (CNNs) for object detection. However, existing NAS methods incur high computational costs. To address this issue, this paper proposes YOLO-DKR, a differentiable architecture search (DARTS) method based on kernel reusing for YOLOv7. To reduce GPU memory and search time during NAS training, we employ a kernel-reusing technique in DARTS, which shares the weights of the candidate convolutions on a single edge and merges them into one fused convolution. Additionally, we design supernets based on the YOLOv7-tiny and YOLOv7 networks, using a joint search strategy to find optimal solutions. Finally, during the retraining phase, we introduce coordinate attention (CA) modules to further enhance the model's ability to detect small objects. Experiments demonstrate that our method achieves exceptional search efficiency and accuracy. Our lightweight model, YOLO-DKR-tiny, requires only 0.4 GPU-days, two-thirds of MAE-DET-L's search time, and outperforms most NAS-based methods in search efficiency. When transferred to the MS COCO 2017 dataset, YOLO-DKR-tiny surpasses DeepMAD(FCOS) by 0.3% in mAP@0.5:0.95 with fewer parameters. For small-object detection (AP_S^val), YOLO-DKR achieves 36.3%, exceeding YOLOv10-C by 0.5%. These results demonstrate our method's effectiveness in achieving superior detection with optimized computational resources.
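The kernel-reusing idea admits a simple illustration: because convolution is linear, a DARTS-style weighted mixture of candidate convolution outputs on an edge equals a single convolution whose kernel is the weighted sum of the (zero-padded) candidate kernels. The toy NumPy sketch below demonstrates this identity; it is a deliberately simplified assumption (single channel, two candidates, naive loops), not the paper's actual YOLOv7 supernet implementation.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive zero-padded ('same') 2D cross-correlation of image x with square kernel k."""
    p = k.shape[0] // 2
    xp = np.pad(x, p)
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def pad_kernel(k, size):
    """Zero-pad a small square kernel to size x size, keeping it centered."""
    p = (size - k.shape[0]) // 2
    return np.pad(k, p)

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))          # toy single-channel feature map
k3 = rng.standard_normal((3, 3))         # candidate 3x3 convolution kernel
k5 = rng.standard_normal((5, 5))         # candidate 5x5 convolution kernel
alpha = softmax(np.array([0.7, -0.3]))   # DARTS-style architecture weights for the edge

# Separate path: run each candidate convolution, then mix the outputs.
y_separate = alpha[0] * conv2d_same(x, k3) + alpha[1] * conv2d_same(x, k5)

# Fused path: mix the kernels first, then run a single fused convolution.
k_fused = alpha[0] * pad_kernel(k3, 5) + alpha[1] * k5
y_fused = conv2d_same(x, k_fused)

print(np.allclose(y_separate, y_fused))  # True: one conv replaces two
```

During search, running only the fused convolution avoids computing and storing a feature map per candidate operation, which is where the GPU-memory and search-time savings come from.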
Journal introduction:
Information Sciences (Informatics and Computer Science, Intelligent Systems, Applications) is an esteemed international journal that focuses on publishing original and creative research findings in the field of information sciences. We also feature a limited number of timely tutorial and surveying contributions.
Our journal aims to cater to a diverse audience, including researchers, developers, managers, strategic planners, graduate students, and anyone interested in staying up-to-date with cutting-edge research in information science, knowledge engineering, and intelligent systems. While readers are expected to share a common interest in information science, they come from varying backgrounds such as engineering, mathematics, statistics, physics, computer science, cell biology, molecular biology, management science, cognitive science, neurobiology, behavioral sciences, and biochemistry.