Thomas Kinyanjui Njoroge, Edwin Juma Omol, Vincent Omollo Nyangaresi
{"title":"作物健康监测的深度学习和物联网融合:智能农业高精度、边缘优化模型","authors":"Thomas Kinyanjui Njoroge, Edwin Juma Omol, Vincent Omollo Nyangaresi","doi":"10.1049/ipr2.70208","DOIUrl":null,"url":null,"abstract":"<p>Crop diseases and adverse field conditions threaten global food security, particularly in resource-limited regions. Current deep-learning models for disease detection suffer from insufficient accuracy, high prediction instability under field noise, and a lack of integration with environmental context. To address these limitations, we present a hybrid deep learning architecture combining EfficientNetV2, MobileNetV2, and Vision Transformers, augmented with attention mechanisms and multiscale feature fusion. Optimised for edge deployment via TensorFlow Lite and integrated with IoT sensors for real-time soil and field monitoring, the model achieved state-of-the-art performance with 99.2% accuracy, 0.993 precision, 0.993 recall, and a near-perfect AUC of 0.999998, outperforming benchmarks like DenseNet50 (88.4%) and ShuffleNet (95.8%). Training on 76 classes (22 diseases) demonstrated rapid convergence and robustness, with validation accuracy reaching 98.7% and minimal overfitting. Statistical validation confirmed superior stability, with 69% lower prediction variance (0.000010) than DenseNet50 (0.000035), ensuring reliable performance under real-world noise. Bayesian testing showed a 100% probability of superiority over DenseNet50 and 85.1% over ShuffleNet, while field trials on 249 real-world images achieved 97.97% accuracy, highlighting strong generalisation. IoT integration reduced false diagnoses by 92% through environmental correlation, and edge optimisation enabled real-time inference via a 30.4 MB mobile application (0.094-second latency). This work advances precision agriculture through a scalable, cloud-independent framework that unifies hybrid deep learning with edge-compatible IoT sensing. 
By addressing critical gaps in accuracy, stability, and contextual awareness, the system enhances crop health management in low-resource settings, offering a statistically validated tool for sustainable farming practices.</p>","PeriodicalId":56303,"journal":{"name":"IET Image Processing","volume":"19 1","pages":""},"PeriodicalIF":2.2000,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ipr2.70208","citationCount":"0","resultStr":"{\"title\":\"Deep Learning and IoT Fusion for Crop Health Monitoring: A High-Accuracy, Edge-Optimised Model for Smart Farming\",\"authors\":\"Thomas Kinyanjui Njoroge, Edwin Juma Omol, Vincent Omollo Nyangaresi\",\"doi\":\"10.1049/ipr2.70208\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Crop diseases and adverse field conditions threaten global food security, particularly in resource-limited regions. Current deep-learning models for disease detection suffer from insufficient accuracy, high prediction instability under field noise, and a lack of integration with environmental context. To address these limitations, we present a hybrid deep learning architecture combining EfficientNetV2, MobileNetV2, and Vision Transformers, augmented with attention mechanisms and multiscale feature fusion. Optimised for edge deployment via TensorFlow Lite and integrated with IoT sensors for real-time soil and field monitoring, the model achieved state-of-the-art performance with 99.2% accuracy, 0.993 precision, 0.993 recall, and a near-perfect AUC of 0.999998, outperforming benchmarks like DenseNet50 (88.4%) and ShuffleNet (95.8%). Training on 76 classes (22 diseases) demonstrated rapid convergence and robustness, with validation accuracy reaching 98.7% and minimal overfitting. 
Statistical validation confirmed superior stability, with 69% lower prediction variance (0.000010) than DenseNet50 (0.000035), ensuring reliable performance under real-world noise. Bayesian testing showed a 100% probability of superiority over DenseNet50 and 85.1% over ShuffleNet, while field trials on 249 real-world images achieved 97.97% accuracy, highlighting strong generalisation. IoT integration reduced false diagnoses by 92% through environmental correlation, and edge optimisation enabled real-time inference via a 30.4 MB mobile application (0.094-second latency). This work advances precision agriculture through a scalable, cloud-independent framework that unifies hybrid deep learning with edge-compatible IoT sensing. By addressing critical gaps in accuracy, stability, and contextual awareness, the system enhances crop health management in low-resource settings, offering a statistically validated tool for sustainable farming practices.</p>\",\"PeriodicalId\":56303,\"journal\":{\"name\":\"IET Image Processing\",\"volume\":\"19 1\",\"pages\":\"\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2025-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ipr2.70208\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IET Image Processing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/ipr2.70208\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Image 
Processing","FirstCategoryId":"94","ListUrlMain":"https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/ipr2.70208","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Deep Learning and IoT Fusion for Crop Health Monitoring: A High-Accuracy, Edge-Optimised Model for Smart Farming
Crop diseases and adverse field conditions threaten global food security, particularly in resource-limited regions. Current deep-learning models for disease detection suffer from insufficient accuracy, high prediction instability under field noise, and a lack of integration with environmental context. To address these limitations, we present a hybrid deep learning architecture combining EfficientNetV2, MobileNetV2, and Vision Transformers, augmented with attention mechanisms and multiscale feature fusion. Optimised for edge deployment via TensorFlow Lite and integrated with IoT sensors for real-time soil and field monitoring, the model achieved state-of-the-art performance with 99.2% accuracy, 0.993 precision, 0.993 recall, and a near-perfect AUC of 0.999998, outperforming benchmarks like DenseNet50 (88.4%) and ShuffleNet (95.8%). Training on 76 classes (22 diseases) demonstrated rapid convergence and robustness, with validation accuracy reaching 98.7% and minimal overfitting. Statistical validation confirmed superior stability, with 69% lower prediction variance (0.000010) than DenseNet50 (0.000035), ensuring reliable performance under real-world noise. Bayesian testing showed a 100% probability of superiority over DenseNet50 and 85.1% over ShuffleNet, while field trials on 249 real-world images achieved 97.97% accuracy, highlighting strong generalisation. IoT integration reduced false diagnoses by 92% through environmental correlation, and edge optimisation enabled real-time inference via a 30.4 MB mobile application (0.094-second latency). This work advances precision agriculture through a scalable, cloud-independent framework that unifies hybrid deep learning with edge-compatible IoT sensing. By addressing critical gaps in accuracy, stability, and contextual awareness, the system enhances crop health management in low-resource settings, offering a statistically validated tool for sustainable farming practices.
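The abstract reports a Bayesian test giving a 100% probability that the proposed model outperforms DenseNet50 and 85.1% that it outperforms ShuffleNet. One common way to compute such a probability of superiority is a Beta-Binomial posterior comparison of the two models' accuracy; the sketch below illustrates that general technique with made-up correct/total counts (not the paper's raw data), and is not a reconstruction of the authors' exact procedure.

```python
import numpy as np

def prob_superiority(k_a, n_a, k_b, n_b, draws=200_000, seed=0):
    """Monte Carlo estimate of P(accuracy_A > accuracy_B).

    Each model's accuracy gets a Beta(k+1, n-k+1) posterior from its
    correct (k) and total (n) counts under a uniform Beta(1,1) prior;
    the probability of superiority is the fraction of paired posterior
    draws in which model A's accuracy exceeds model B's.
    """
    rng = np.random.default_rng(seed)
    acc_a = rng.beta(k_a + 1, n_a - k_a + 1, draws)
    acc_b = rng.beta(k_b + 1, n_b - k_b + 1, draws)
    return float(np.mean(acc_a > acc_b))

# Illustrative counts only: a 99.2%-accurate model vs an 88.4%-accurate
# baseline, each evaluated on a hypothetical 1000-image test set.
print(prob_superiority(992, 1000, 884, 1000))
```

With accuracy gaps as large as those reported in the abstract, the two posteriors barely overlap, so the estimated probability of superiority approaches 1; for closely matched models it hovers near 0.5.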
Journal Introduction:
The IET Image Processing journal encompasses research areas related to the generation, processing and communication of visual information. The focus of the journal is the coverage of the latest research results in image and video processing, including image generation and display, enhancement and restoration, segmentation, colour and texture analysis, coding and communication, implementations and architectures as well as innovative applications.
Principal topics include:
Generation and Display - Imaging sensors and acquisition systems, illumination, sampling and scanning, quantization, colour reproduction, image rendering, display and printing systems, evaluation of image quality.
Processing and Analysis - Image enhancement, restoration, segmentation, registration, multispectral, colour and texture processing, multiresolution processing and wavelets, morphological operations, stereoscopic and 3-D processing, motion detection and estimation, video and image sequence processing.
Implementations and Architectures - Image and video processing hardware and software, design and construction, architectures and software, neural, adaptive, and fuzzy processing.
Coding and Transmission - Image and video compression and coding, compression standards, noise modelling, visual information networks, streamed video.
Retrieval and Multimedia - Storage of images and video, database design, image retrieval, video annotation and editing, mixed media incorporating visual information, multimedia systems and applications, image and video watermarking, steganography.
Applications - Innovative application of image and video processing technologies to any field, including life sciences, earth sciences, astronomy, document processing and security.
Current Special Issue Call for Papers:
Evolutionary Computation for Image Processing - https://digital-library.theiet.org/files/IET_IPR_CFP_EC.pdf
AI-Powered 3D Vision - https://digital-library.theiet.org/files/IET_IPR_CFP_AIPV.pdf
Multidisciplinary advancement of Imaging Technologies: From Medical Diagnostics and Genomics to Cognitive Machine Vision, and Artificial Intelligence - https://digital-library.theiet.org/files/IET_IPR_CFP_IST.pdf
Deep Learning for 3D Reconstruction - https://digital-library.theiet.org/files/IET_IPR_CFP_DLR.pdf