FruitQuery: A lightweight query-based instance segmentation model for in-field fruit ripeness determination
Ziang Zhao, Yulia Hicks, Xianfang Sun, Chaoxi Luo
Smart Agricultural Technology, vol. 12, Article 101068, published 2025-06-05. DOI: 10.1016/j.atech.2025.101068

Abstract: Accurate fruit instance segmentation at different ripeness stages is critical for developing autonomous harvesting robots, particularly given unstructured in-field conditions. In this paper, we combine two in-field fruit datasets of peaches and strawberries for the determination of multiple ripeness stages, and propose a lightweight query-based instance segmentation model named FruitQuery.

The combined dataset contains 3 peach ripeness stages and 4 strawberry ripeness stages, covering a variety of unstructured conditions for two popular fruits. FruitQuery consists of three parts: a backbone, a pixel decoder, and Transformer decoders. Efficient multi-head self-attention modules are introduced into the backbone to reduce computational overhead, and a pyramid pooling module is added to the pixel decoder to enhance multi-scale feature fusion. Transformer decoders are then applied to learn a fixed number of queries from the features and generate instance masks, avoiding post-processing such as non-maximum suppression. FruitQuery runs end-to-end and combines convolution and Transformer components to capture fine-grained features of different fruits at different ripeness stages.

Extensive experiments on the combined fruit dataset demonstrate that FruitQuery achieves the highest average precision of 67.02 with only 14.08M parameters, outperforming 13 state-of-the-art models with 33 variants. Notably, FruitQuery surpasses three YOLO series (v8, v9 and v10) by a large margin. Ablation studies and visualizations also show robust feature extraction with fewer parameters, indicating that the query-based design is effective in localizing fruit. These results highlight FruitQuery's compelling balance between segmentation performance and model size, offering potential for in-field application.

A comparison of protocols for high-throughput weeds mapping
Joaquin J. Casanova, Nicolas T. Bergmann, Jessica E.R. Kalin, Garett C. Heineck, Ian C. Burke
Smart Agricultural Technology, vol. 12, Article 101076, published 2025-06-05. DOI: 10.1016/j.atech.2025.101076

Abstract: Increasing herbicide resistance in the US demands novel approaches to integrated weed management, including targeted chemical use and non-chemical methods. More targeted chemical applications and non-chemical alternatives expose weeds to multiple modes of action, slowing the formation of resistant populations. However, generating prescription maps and evaluating non-chemical methods require field-scale weed mapping. Typical methods either involve laborious mapping on the ground or impractical low-altitude UAV imaging, and the literature describes an array of imaging techniques demonstrated only in very select circumstances. To give clear guidelines for future research, this paper compares three imaging techniques, two weed count model types, and two ground validation methods (quadrat counts and seedbank counts) for remote weed mapping on five sites experiencing infestations of different common weed species. Overall, multispectral imaging combined with Poisson count models and quadrat weed counts as ground truth outperformed the other methods and can be recommended as a pipeline for rapidly mapping weeds in field crops. Although seedbank density did not map well from imagery, 50 seedbank samples were adequate for assessing the seedbank.

Meta Ag: An automatic agricultural contextual metadata collection app
Md. Samiul Basir, Yaguang Zhang, Dennis Buckmaster, Ankita Raturi, James V. Krogmeier
Smart Agricultural Technology, vol. 12, Article 101073, published 2025-06-04. DOI: 10.1016/j.atech.2025.101073

Abstract: Modern agricultural systems produce high-resolution data from remote sensing platforms, in-field sensors, and augmented machinery. However, these datasets often lack contextual information, which hinders their utility in decision support systems and limits their applicability for AI-based modeling. Digital metadata (the who, what, where, when, and how of field operations) are essential to transform layers of raw data into actionable and interoperable agricultural knowledge. This paper presents Meta Ag, a smartphone-based metadata collection framework designed to improve the accuracy, completeness, and contextual richness of agricultural field records. The Android app integrates automated geofence-based event detection, operator identification, temporal logging, and structured input via dynamic interface and data validation elements. Its modular architecture supports authentication, automatic context generation, real-time validation, and centralized cloud storage. Meta Ag facilitates interoperability by exporting records in CSV, JSON, and RDF (Resource Description Framework) formats. Field evaluations show that the duration captured by Meta Ag differed from the actual recorded duration with a root mean squared error (RMSE) of 24.7 s (range 0 s to 61 s), and Meta Ag consistently detected all field access events via geofence triggers. These results highlight its effectiveness as a deployable, efficient solution for agricultural metadata collection. By reducing human error and supporting standardized, high-integrity recordkeeping, the Meta Ag framework enables the production of AI-ready metadata critical for digital agriculture applications.

Facial chick sexing: An automated chick sexing system from chick facial image
Marta Veganzones Rodriguez, Thinh Phan, Arthur F.A. Fernandes, Vivian Breen, Jesus Arango, Michael T. Kidd, Ngan Le
Smart Agricultural Technology, vol. 12, Article 101044, published 2025-06-04. DOI: 10.1016/j.atech.2025.101044

Abstract: Chick sexing, the process of determining the gender of day-old chicks, is a critical task in the poultry industry because of the distinct roles each gender plays in production. While traditional methods achieve high accuracy, color and wing-feather sexing are exclusive to specific breeds, and vent sexing is invasive and requires trained experts. To address these challenges, we propose a novel approach inspired by facial gender classification techniques in humans: facial chick sexing. This method does not require expert knowledge and aims to reduce training time while enhancing animal welfare by minimizing chick manipulation. We develop a complete system for training and inference that includes data collection, face and keypoint detection, facial alignment, and classification. We evaluate our model on two sets of images, Cropped Full Face and Cropped Middle Face, both of which retain the essential facial features of the chick for analysis. Our experiments demonstrate the promising viability of this approach for future chick-sexing practice, with a final accuracy of 81.89% on the Cropped Full Face set, making it more universally applicable.

TBD-Y: Automatic tea bud detection with synergistic object-spatial attention and global-local attention guided feature fusion
Zhongyuan Liu, Li Zhuo, Chunwang Dong, Jiafeng Li, Yang Li
Smart Agricultural Technology, vol. 12, Article 101066, published 2025-06-01. DOI: 10.1016/j.atech.2025.101066

Abstract: Automatic tea bud detection (TBD) is a critical technology in intelligent tea-picking systems. However, challenges such as complex environments and the high visual similarity between tea buds and backgrounds frequently result in false and missed detections, especially for small tea buds. To address these issues, this paper proposes an automatic TBD method built upon the YOLOv11 object detection framework, named TBD-Y. First, a Synergistic Object-Spatial Attention (SOSA) mechanism is proposed, which incorporates the proposed Local Context Attention (LCA) mechanism to enhance features in both the spatial and regional dimensions; it enables the network to focus on tea bud regions and suppress interference from background noise. Second, a Global-local Attention Guided Feature Fusion (GAGFF) strategy is designed. It consists of two branches: one enhances low-resolution, high-level features containing rich global semantic information, while the other strengthens high-resolution features that preserve low-level visual details. Fusing the two branches improves the representation capability of the features. SOSA and GAGFF are integrated into the YOLOv11 framework, yielding three TBD model variants with different parameter scales, named TBD-Y-L, TBD-Y-M, and TBD-Y-S. Experimental results on the self-built TBD dataset and the publicly available Global Wheat Head Dataset 2021 (GWHD_2021) demonstrate that TBD-Y-L outperforms existing methods, achieving superior detection accuracy. Furthermore, TBD-Y-S improves detection accuracy over YOLOv11-L while maintaining fewer parameters and lower computational complexity.

Kernel to computation: identifying optimal feature set for red rice classification
Suma D, Narendra V G, Darshan Holla M, Shreyas, Raviraja Holla M
Smart Agricultural Technology, vol. 12, Article 101065, published 2025-06-01. DOI: 10.1016/j.atech.2025.101065

Abstract: Existing research focuses extensively on white rice classification, for which datasets are readily available, while automated classification of red rice varieties remains largely unexplored with no publicly available datasets, leaving a significant gap in agricultural image processing research. This study classifies three distinct red rice varieties, Uma, KCP-1, and Jyothi, primarily cultivated in Karnataka and Kerala, using image processing and machine learning techniques. Six ML models were evaluated with seven feature combinations derived from size, shape, and texture characteristics to identify the most discriminative feature set. Feature selection was performed using Recursive Feature Elimination and Backward Feature Elimination to enhance model efficiency. Hyperparameter tuning was applied to optimize classification performance, and k-fold cross-validation with statistical significance testing was used to assess generalization and validate differences in model performance. The combination of size, shape, and texture features yielded the highest average accuracy across the models, with K-Nearest Neighbours achieving 98.67% accuracy and Support Vector Machine reaching 97.34% accuracy with the size and shape combination. The findings emphasize the importance of optimal feature selection and tuning in improving classification accuracy, contributing to the development of automated classification systems for red rice varieties.

{"title":"Numerical modelling of a variable rate spraying drone and comparison to experimental evaluations","authors":"Fatemeh Joudi-Sarighayeh , Hossein Mousazadeh , Mohammad Hasan Sabet Dizavandi , Farzad Mohammadi , Foad Hassanlou , Niloofar Ghasemi , Zahra Hajalioghli , Alireza Tafteh","doi":"10.1016/j.atech.2025.101064","DOIUrl":"10.1016/j.atech.2025.101064","url":null,"abstract":"<div><div>Safe and organic food supplying with decreased cost are two main perspectives for future agriculture. Considering this concept, spraying drones are one of the prominent cutting-edge technologies. This study focuses on the evaluation of a variable rate sprayer drone configured as an X-type quadcopter. Nozzles controlling by PWM, enables for variable rate spraying in precision agriculture concept. Therefor main objective is evaluating various research parameters through experimental and simulation methodologies. Numerical simulations were conducted using X-Flow software, which employs Lattice Boltzmann Methods (LBM) to effectively model fluid behaviour within a specified computational domain. The experimental evaluation encompassed some tests on nozzle flow rates across different PWM frequencies and duty cycles. Besides, some assessments of spray patterns are performed in both static and dynamic scenarios. The results demonstrated that with a measured nozzle's spray angle about 30 degrees, the spraying system efficiently atomizes liquid into fine droplets, that will enhance drift potential. Field tests performed at altitudes of 1.5 m and 1.8 m illustrated the stability and sensitivity of the spraying system in an open environment. These findings underscore the significance of precise adjustments in operational parameters to optimize spraying efficiency. Future research should explore additional influential factors and conduct field experiments under a range of environmental conditions to validate simulation outcomes and enhance practical applications in agriculture.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101064"},"PeriodicalIF":6.3,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144365777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multi-scale temporal feature fusion framework for sheep voiceprint recognition
Xipeng Wang, Delong Wang, Weijiao Dai, Cheng Zhang, Yudongchen Liang, Yong Zhou, Juan Yao, Fang Tian
Smart Agricultural Technology, vol. 12, Article 101061, published 2025-05-30. DOI: 10.1016/j.atech.2025.101061

Abstract: Voiceprint recognition is an effective way to identify individual sheep; however, related research is scarce. To this end, we propose a hybrid model based on the ResNet18 network and gated recurrent units (GRUs) to comprehensively represent the input data. The model uses a feature pyramid network (FPN) structure and a one-dimensional convolutional block attention module (1D-CBAM) for feature fusion to enhance its classification ability. This model is used to extract sheep voiceprint features and, combined with the proposed similarity correction method, to construct a sheep voiceprint recognition system. The model is trained on a dataset of 300 sheep from three breeds. Results of 5-fold cross-validation experiments show that the average recognition accuracy (Acc) and average contrast accuracy (CA) of the model reach 98.86% and 98.66%, respectively, with an average equal error rate (EER) of 1.34%, demonstrating that the improved method is stable and reliable for sheep voiceprint recognition. This study provides a new solution for the identification of individual sheep.

Remote sensing for pasture biomass quantity and quality assessment: Challenges and future prospects
Nicola Furnitto, Juan Miguel Ramírez-Cuesta, Diego S. Intrigliolo, Giuseppe Todde, Sabina Failla
Smart Agricultural Technology, vol. 12, Article 101057, published 2025-05-27. DOI: 10.1016/j.atech.2025.101057

Abstract: Optimizing pasture use through careful management is critical to ensuring the economic and environmental sustainability of pasture-based agriculture. Maximizing grass utilization and accurately measuring grass quantity and quality by adopting precision agriculture technologies, including estimates from satellite or unmanned aerial vehicle (UAV) platforms, are key to improving production efficiency and reducing environmental impact. With these goals, this review explores the crucial role of biomass quantity and quality assessment in pasture-based agricultural practices, with a focus on the potential offered by remote sensing technologies. It examines recent advances in biomass and grassland quality assessment, highlighting the most widely used methodologies, the remaining challenges, and future prospects. The analysis focuses particularly on applications of UAV and satellite platforms, discussing the advantages and limitations of the different techniques and their applications, including machine learning (ML) technologies. The main indices, electromagnetic regions, and ML approaches are also analysed in depth, distinguishing between those intended for biomass quantity assessment and those intended for quality assessment. Through the integration of innovative technologies and improved measurement protocols, the full potential of more sustainable and productive pasture-based agriculture can be realised, ensuring improved animal productivity and economic viability for farmers. These advances will pave the way for more effective management practices and contribute significantly to the global effort toward more sustainable agricultural systems.

Smart and accurate: A new tool to identify stressed soybean seeds based on multispectral images and machine learning models
Ana Carolina Picinini Petronilio, Clíssia Barboza Mastrangelo, Thiago Barbosa Batista, Gustavo Roberto Fonseca de Oliveira, Isabela Lopes dos Santos, Edvaldo Aparecido Amaral da Silva
Smart Agricultural Technology, vol. 12, Article 101042, published 2025-05-22. DOI: 10.1016/j.atech.2025.101042

Abstract: Extreme environmental conditions have been recurrent in recent years and have impacted crop seed quality worldwide, mainly, but not limited to, soybean (Glycine max (L.) Merrill). To overcome this, seed companies often demand innovative tools to address seed quality factors. Machine learning models based on multispectral imaging are a novel approach to seed quality analysis. We therefore hypothesized that this technology could distinguish stressed soybean seeds (produced under conditions unfavorable to the mother plant) from non-stressed seeds (produced under favorable conditions), opening a new opportunity for seed quality management and elucidating quality factors. Soybean seeds (cultivar BR/MG 46-Conquista) were produced under water deficit and heat during maturation (from R5.5 onwards). Multispectral images were acquired from stressed and non-stressed seeds, and reflectance, autofluorescence, physical properties, and chlorophyll parameters were extracted from the images. In parallel, we determined seed vigor. We designed machine learning models using the multispectral imaging data based on three algorithms: neural network, support vector machine, and random forest. Our results demonstrate that stressed seeds have spectral markers that enable their recognition, and that these markers are directly related to seed vigor. The models based on the neural network algorithm showed the highest performance in classifying stressed seeds (≥90% accuracy, precision, recall, specificity, and F1 score), compared with the random forest and support vector machine algorithms (≥88% for the same metrics). We thus report a new multispectral imaging approach with the potential to identify soybean seeds of lower vigor resulting from unfavorable environmental conditions during seed maturation.
