{"title":"李然:一种用于田间植物有害生物识别的轻量级剩余注意网络","authors":"Sivasubramaniam Janarthan;Selvarajah Thuseethan;Sutharshan Rajasegarar;Qiang Lyu;Yongqiang Zheng;John Yearwood","doi":"10.1109/TAFE.2024.3496798","DOIUrl":null,"url":null,"abstract":"Plant pests are a major threat to sustainable food supply, causing damage to food production and agriculture industries around the world. Despite these negative impacts, on several occasions, plant pests have also been used to improve the quality of agricultural products. Although deep learning-based automated plant pest identification techniques have shown tremendous success in the recent past, they are often limited by increased computational cost, large training data requirements, and impaired performance when they present in complex backgrounds. Therefore, to alleviate these challenges, a lightweight attention-based convolutional neural network architecture, called LiRAN, based on a novel simplified attention mask module and an extended MobileNetV2 architecture, is proposed in this study. The experimental results reveal that the proposed architecture can attain 96.25%, 98.9%, and 91% accuracies on three variants of publicly available datasets with 5869, 545, and 500 sample images, respectively, showcasing high performance consistently in large and small data conditions. More importantly, this model can be deployed on smartphones or other resource-constrained embedded devices for in-field realization, only requiring <inline-formula><tex-math>$\\approx$</tex-math></inline-formula> 9.3 MB of storage space with around 2.37 M parameters and 0.34 giga multiply-and-accumulate FLOPs with an input image size of 224 × 224.","PeriodicalId":100637,"journal":{"name":"IEEE Transactions on AgriFood Electronics","volume":"3 1","pages":"167-178"},"PeriodicalIF":0.0000,"publicationDate":"2024-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LiRAN: A Lightweight Residual Attention Network for In-Field Plant Pest Recognition\",\"authors\":\"Sivasubramaniam Janarthan;Selvarajah Thuseethan;Sutharshan Rajasegarar;Qiang Lyu;Yongqiang Zheng;John Yearwood\",\"doi\":\"10.1109/TAFE.2024.3496798\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Plant pests are a major threat to sustainable food supply, causing damage to food production and agriculture industries around the world. Despite these negative impacts, on several occasions, plant pests have also been used to improve the quality of agricultural products. Although deep learning-based automated plant pest identification techniques have shown tremendous success in the recent past, they are often limited by increased computational cost, large training data requirements, and impaired performance when they present in complex backgrounds. Therefore, to alleviate these challenges, a lightweight attention-based convolutional neural network architecture, called LiRAN, based on a novel simplified attention mask module and an extended MobileNetV2 architecture, is proposed in this study. The experimental results reveal that the proposed architecture can attain 96.25%, 98.9%, and 91% accuracies on three variants of publicly available datasets with 5869, 545, and 500 sample images, respectively, showcasing high performance consistently in large and small data conditions. 
More importantly, this model can be deployed on smartphones or other resource-constrained embedded devices for in-field realization, only requiring <inline-formula><tex-math>$\\\\approx$</tex-math></inline-formula> 9.3 MB of storage space with around 2.37 M parameters and 0.34 giga multiply-and-accumulate FLOPs with an input image size of 224 × 224.\",\"PeriodicalId\":100637,\"journal\":{\"name\":\"IEEE Transactions on AgriFood Electronics\",\"volume\":\"3 1\",\"pages\":\"167-178\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-12-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on AgriFood Electronics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10774154/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on AgriFood Electronics","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10774154/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Plant pests are a major threat to the sustainable food supply, damaging food production and agriculture industries around the world. Despite these negative impacts, plant pests have on occasion also been used to improve the quality of agricultural products. Although deep learning-based automated plant pest identification techniques have shown tremendous success in recent years, they are often limited by high computational cost, large training data requirements, and degraded performance when pests appear against complex backgrounds. To alleviate these challenges, this study proposes LiRAN, a lightweight attention-based convolutional neural network architecture that combines a novel simplified attention mask module with an extended MobileNetV2 backbone. Experimental results show that the proposed architecture attains accuracies of 96.25%, 98.9%, and 91% on three variants of publicly available datasets containing 5869, 545, and 500 sample images, respectively, performing consistently well under both large- and small-data conditions. More importantly, the model can be deployed on smartphones and other resource-constrained embedded devices for in-field use, requiring only about 9.3 MB of storage, around 2.37 million parameters, and 0.34 giga multiply-accumulate operations (GMACs) for a 224 × 224 input image.
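For a concrete picture of the kind of design the abstract describes, the following is a minimal PyTorch sketch: a MobileNetV2 feature extractor augmented with a simple residual attention mask before classification. The `SimpleAttentionMask` module, its placement after the last feature stage, and the residual reweighting formula are illustrative assumptions; the paper's actual simplified attention mask module and extensions to MobileNetV2 are not specified in the abstract, so this is not the authors' implementation.

```python
# Illustrative sketch only: MobileNetV2 backbone plus a generic residual
# spatial attention mask. Module design and placement are assumptions,
# not the LiRAN architecture from the paper.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2


class SimpleAttentionMask(nn.Module):
    """Squeeze channels to a single-channel spatial mask and reweight the
    feature map residually: out = x + x * mask(x), i.e. x * (1 + mask).
    This is one common residual-attention formulation, assumed here."""

    def __init__(self, channels: int):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + x * self.mask(x)


class LiRANSketch(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        backbone = mobilenet_v2(weights=None)
        self.features = backbone.features            # MobileNetV2 feature extractor
        self.attention = SimpleAttentionMask(1280)   # last stage outputs 1280 channels
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(1280, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.attention(self.features(x))
        x = self.pool(x).flatten(1)
        return self.classifier(x)


if __name__ == "__main__":
    model = LiRANSketch(num_classes=10)
    logits = model(torch.randn(1, 3, 224, 224))  # 224 x 224 input, as in the paper
    print(logits.shape)                          # torch.Size([1, 10])
    print(sum(p.numel() for p in model.parameters()) / 1e6, "M parameters")
```

With a small classification head, a sketch like this lands in the low-millions parameter range reported in the abstract, which is what makes MobileNetV2-style backbones attractive for smartphone and embedded deployment.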