Faster-ActionNet: Deep Partial Convolutional Neural Networks for Volleyball Action Detection on Edge Devices

Shaohua Wang

Internet Technology Letters, vol. 8, no. 5. Published 2025-07-22. DOI: 10.1002/itl2.70091
https://onlinelibrary.wiley.com/doi/10.1002/itl2.70091
Citations: 0
Abstract
To address the low accuracy of individual action recognition in the complex scenarios of volleyball sports, Faster-ActionNet is proposed, built on the YOLOv11 backbone. Partial convolutions are adopted in both the backbone and neck modules to amplify critical feature representations while minimizing redundant computation and memory overhead. The backbone further integrates the Feature Refinement and Fusion Network (FRFN) attention mechanism, whose streamlined operations reduce feature redundancy across channels; this significantly improves the reconstruction quality of latent sharp images and mitigates the risk of critical feature degradation. Experiments on volleyball-specific individual action recognition show superior performance, with the model attaining 88.2% mAP at 75.6 frames per second (FPS), surpassing state-of-the-art benchmarks. The model performs well in real-world applications, offering useful technical insight for sports action recognition and for advancing computer vision technologies.
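The abstract does not give the exact layer definitions, but the partial-convolution idea it describes — convolving only a fraction of the channels and passing the rest through untouched, so that FLOPs and memory traffic shrink — can be illustrated with a minimal NumPy sketch. The function name, the `n_div` split ratio, and the 3×3 kernel size are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def partial_conv(x, weight, n_div=4):
    """Sketch of a partial convolution (assumed PConv-style split).

    A 3x3 convolution is applied to only the first C // n_div channels;
    the remaining channels are copied through unchanged, which is where
    the compute and memory savings come from.

    x:      (C, H, W) input feature map
    weight: (Cp, Cp, 3, 3) kernel, where Cp = C // n_div
    """
    C, H, W = x.shape
    cp = C // n_div
    out = x.copy()  # untouched channels pass through as-is
    # Same-padding so the convolved channels keep their spatial size.
    padded = np.pad(x[:cp], ((0, 0), (1, 1), (1, 1)))
    conv = np.zeros((cp, H, W))
    for o in range(cp):          # output channel
        for i in range(cp):      # input channel (only the partial slice)
            for u in range(3):
                for v in range(3):
                    conv[o] += weight[o, i, u, v] * padded[i, u:u + H, v:v + W]
    out[:cp] = conv
    return out
```

In a real network this would be a strided, vectorized layer (e.g. a framework `Conv2d` over a channel slice); the explicit loops here only make the channel split visible.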