Plant Phenomics, vol. 6, article 0260. Pub Date: 2024-11-10; eCollection Date: 2024-01-01. DOI: 10.34133/plantphenomics.0260
Multi-Scale Attention Network for Vertical Seed Distribution in Soybean Breeding Fields.
Tang Li, Pieter M Blok, James Burridge, Akito Kaga, Wei Guo
Abstract: The increase in the global population is leading to a doubling of the demand for protein. Soybean (Glycine max), a key contributor to global plant-based protein supplies, requires ongoing yield enhancements to keep pace with increasing demand. Precise, on-plant seed counting and localization may catalyze breeding selection of shoot architectures and seed localization patterns related to superior performance in high planting density and contribute to increased yield. Traditional manual counting and localization methods are labor-intensive and prone to error, necessitating more efficient approaches for yield prediction and seed distribution analysis. To solve this, we propose MSANet: a novel deep learning framework tailored for counting and localization of soybean seeds on mature field-grown soy plants. A multi-scale attention map mechanism was applied to maximize model performance in seed counting and localization in soybean breeding fields. We compared our model with a previous state-of-the-art model using the benchmark dataset and an enlarged dataset including various soybean genotypes. Our model outperforms previous state-of-the-art methods on all datasets across various soybean genotypes on both counting and localization tasks. Furthermore, our model also performed well on in-canopy 360° video, dramatically increasing data collection efficiency. We also propose a technique that enables previously inaccessible insights into the phenotypic and genetic diversity of single-plant vertical seed distribution, which may accelerate the breeding process. To accelerate further research in this domain, we have made our dataset and software publicly available: https://github.com/UTokyo-FieldPhenomics-Lab/MSANet.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11550408/pdf/
Plant Phenomics, vol. 6, article 0268. Pub Date: 2024-11-08; eCollection Date: 2024-01-01. DOI: 10.34133/plantphenomics.0268
Counting Canola: Toward Generalizable Aerial Plant Detection Models.
Erik Andvaag, Kaylie Krys, Steven J Shirtliffe, Ian Stavness
Abstract: Plant population counts are highly valued by crop producers as important early-season indicators of field health. Traditionally, emergence rate estimates have been acquired through manual counting, an approach that is labor-intensive and relies heavily on sampling techniques. By applying deep learning-based object detection models to aerial field imagery, accurate plant population counts can be obtained for much larger areas of a field. Unfortunately, current detection models often perform poorly when they are faced with image conditions that do not closely resemble the data found in their training sets. In this paper, we explore how specific facets of a plant detector's training set can affect its ability to generalize to unseen image sets. In particular, we examine how a plant detection model's generalizability is influenced by the size, diversity, and quality of its training data. Our experiments show that the gap between in-distribution and out-of-distribution performance cannot be closed by merely increasing the size of a model's training set. We also demonstrate the importance of training set diversity in producing generalizable models, and show how different types of annotation noise can elicit different model behaviors in out-of-distribution test sets. We conduct our investigations with a large and diverse dataset of canola field imagery that we assembled over several years. We also present a new web tool, Canola Counter, which is specifically designed for remote-sensed aerial plant detection tasks. We use the Canola Counter tool to prepare our annotated canola seedling dataset and conduct our experiments. Both our dataset and web tool are publicly available.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11543947/pdf/
{"title":"Phenotyping of Panicle Number and Shape in Rice Breeding Materials Based on Unmanned Aerial Vehicle Imagery.","authors":"Xuqi Lu, Yutao Shen, Jiayang Xie, Xin Yang, Qingyao Shu, Song Chen, Zhihui Shen, Haiyan Cen","doi":"10.34133/plantphenomics.0265","DOIUrl":"https://doi.org/10.34133/plantphenomics.0265","url":null,"abstract":"<p><p>The number of panicles per unit area (PNpA) is one of the key factors contributing to the grain yield of rice crops. Accurate PNpA quantification is vital for breeding high-yield rice cultivars. Previous studies were based on proximal sensing with fixed observation platforms or unmanned aerial vehicles (UAVs). The near-canopy images produced in these studies suffer from inefficiency and complex image processing pipelines that require manual image cropping and annotation. This study aims to develop an automated, high-throughput UAV imagery-based approach for field plot segmentation and panicle number quantification, along with a novel classification method for different panicle types, enhancing PNpA quantification at the plot level. RGB images of the rice canopy were efficiently captured at an altitude of 15 m, followed by image stitching and plot boundary recognition via a mask region-based convolutional neural network (Mask R-CNN). The images were then segmented into plot-scale subgraphs, which were categorized into 3 growth stages. The panicle vision transformer (Panicle-ViT), which integrates a multipath vision transformer and replaces the Mask R-CNN backbone, accurately detects panicles. Additionally, the Res2Net50 architecture classified panicle types with 4 angles of 0°, 15°, 45°, and 90°. The results confirm that the performance of Plot-Seg is comparable to that of manual segmentation. Panicle-ViT outperforms the traditional Mask R-CNN across all the datasets, with the average precision at 50% intersection over union (AP<sub>50</sub>) improved by 3.5% to 20.5%. The PNpA quantification for the full dataset achieved superior performance, with a coefficient of determination (<i>R</i> <sup>2</sup>) of 0.73 and a root mean square error (RMSE) of 28.3, and the overall panicle classification accuracy reached 94.8%. The proposed approach enhances operational efficiency and automates the process from plot cropping to PNpA prediction, which is promising for accelerating the selection of desired traits in rice breeding.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"6 ","pages":"0265"},"PeriodicalIF":7.6,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11499587/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142506483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Plant Phenomics, vol. 6, article 0264. Pub Date: 2024-10-23; eCollection Date: 2024-01-01. DOI: 10.34133/plantphenomics.0264
Evaluating the Influence of Row Orientation and Crown Morphology on Growth of Pinus taeda L. with Drone-Based Airborne Laser Scanning.
Matthew J Sumnall, David R Carter, Timothy J Albaugh, Rachel L Cook, Otávio C Campoe, Rafael A Rubilar
Abstract: The tree crown's directionality of growth may be an indicator of how aggressive the tree is in terms of foraging for light. Airborne drone laser scanning (DLS) has been used to accurately classify individual tree crowns (ITCs) and derive size metrics related to the crown. We compare ITCs among 6 genotypes exhibiting different crown architectures in managed loblolly pine (Pinus taeda L.) in the United States. DLS data are classified into ITC objects, and we present novel methods to calculate ITC shape metrics. Tree stems are located using (a) model-based clustering and (b) weighting clusters by size. We generated ITC shape metrics using 3-dimensional (3D) alphashapes in 2 DLS acquisitions of the same location, 4 years apart. Crown horizontal distance from the stem was estimated at multiple heights, in addition to calculating 3D volume in specific azimuths. Crown morphologies varied significantly (P < 0.05) spatially, temporally, and among the 6 genotypes. Most genotypes exhibited larger crown volumes facing south (150° to 173°). We found that crown asymmetries were consistent with (a) the direction of solar radiation, (b) the spatial arrangement and proximity of the neighboring crowns, and (c) genotype. Larger crowns were associated with larger increases in stem volume, and increases in the southern portion of crown volume were associated with larger stem volume increases than increases in the northern portion. This finding suggests that row orientation could influence stem growth rates in plantations, particularly impacting earlier development. These differences may lessen over time, especially if stands are not thinned in a timely manner once canopy growing space has diminished.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11496608/pdf/
{"title":"Cucumber Seedling Segmentation Network Based on a Multiview Geometric Graph Encoder from 3D Point Clouds.","authors":"Yonglong Zhang, Yaling Xie, Jialuo Zhou, Xiangying Xu, Minmin Miao","doi":"10.34133/plantphenomics.0254","DOIUrl":"https://doi.org/10.34133/plantphenomics.0254","url":null,"abstract":"<p><p>Plant phenotyping plays a pivotal role in observing and comprehending the growth and development of plants. In phenotyping, plant organ segmentation based on 3D point clouds has garnered increasing attention in recent years. However, using only the geometric relationship features of Euclidean space still cannot accurately segment and measure plants. To this end, we mine more geometric features and propose a segmentation network based on a multiview geometric graph encoder, called SN-MGGE. First, we construct a point cloud acquisition platform to obtain the cucumber seedling point cloud dataset, and employ CloudCompare software to annotate the point cloud data. The GGE module is then designed to generate the point features, including the geometric relationships and geometric shape structure, via a graph encoder over the Euclidean and hyperbolic spaces. Finally, the semantic segmentation results are obtained via a downsampling operation and multilayer perceptron. Extensive experiments on a cucumber seedling dataset clearly show that our proposed SN-MGGE network outperforms several mainstream segmentation networks (e.g., PointNet++, AGConv, and PointMLP), achieving mIoU and OA values of 94.90% and 97.43%, respectively. On the basis of the segmentation results, 4 phenotypic parameters (i.e., plant height, leaf length, leaf width, and leaf area) are extracted through the K-means clustering method; these parameters are very close to the ground truth, and the <i>R</i> <sup>2</sup> values reach 0.98, 0.96, 0.97, and 0.97, respectively. Furthermore, an ablation study and a generalization experiment also show that the SN-MGGE network is robust and extensive.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"6 ","pages":"0254"},"PeriodicalIF":7.6,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11480588/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142472839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Plant Phenomics, vol. 6, article 0255. Pub Date: 2024-10-09; eCollection Date: 2024-01-01. DOI: 10.34133/plantphenomics.0255
GSP-AI: An AI-Powered Platform for Identifying Key Growth Stages and the Vegetative-to-Reproductive Transition in Wheat Using Trilateral Drone Imagery and Meteorological Data.
Liyan Shen, Guohui Ding, Robert Jackson, Mujahid Ali, Shuchen Liu, Arthur Mitchell, Yeyin Shi, Xuqi Lu, Jie Dai, Greg Deakin, Katherine Frels, Haiyan Cen, Yu-Feng Ge, Ji Zhou
Abstract: Wheat (Triticum aestivum) is one of the most important staple crops worldwide. To ensure its global supply, the timing and duration of its growth cycle need to be closely monitored in the field so that necessary crop management activities can be arranged in a timely manner. Also, breeders and plant researchers need to evaluate growth stages (GSs) for tens of thousands of genotypes at the plot level, at different sites and across multiple seasons. These requirements indicate the importance of providing a reliable and scalable toolkit so that plot-level assessment of GS can be successfully conducted for different objectives in plant research. Here, we present a multimodal deep learning model called GSP-AI, capable of identifying key GSs and predicting the vegetative-to-reproductive transition (i.e., flowering days) in wheat based on drone-collected canopy images and multiseasonal climatic datasets. In the study, we first established an open Wheat Growth Stage Prediction (WGSP) dataset, consisting of 70,410 annotated images collected from 54 varieties cultivated in China, 109 in the United Kingdom, and 100 in the United States, together with key climatic factors. Then, we built an effective learning architecture based on Res2Net and long short-term memory (LSTM) to learn canopy-level vision features and patterns of climatic changes between the 2018 and 2021 growing seasons. Utilizing the model, we achieved an overall accuracy of 91.2% in identifying key GSs and an average root mean square error (RMSE) of 5.6 d for forecasting the flowering days compared with manual scoring. We further tested and improved the GSP-AI model with high-resolution smartphone images collected in the 2021/2022 season in China, through which the accuracy of the model was enhanced to 93.4% for GS identification and the RMSE reduced to 4.7 d for flowering prediction. As a result, we believe that our work demonstrates a valuable advance to inform breeders and growers regarding the timing and duration of key plant growth and development phases at the plot level, helping them conduct more effective crop selection and make agronomic decisions under complicated field conditions for wheat improvement.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11462051/pdf/
{"title":"MLG-YOLO: A Model for Real-Time Accurate Detection and Localization of Winter Jujube in Complex Structured Orchard Environments.","authors":"Chenhao Yu, Xiaoyi Shi, Wenkai Luo, Junzhe Feng, Zhouzhou Zheng, Ayanori Yorozu, Yaohua Hu, Jiapan Guo","doi":"10.34133/plantphenomics.0258","DOIUrl":"10.34133/plantphenomics.0258","url":null,"abstract":"<p><p>Our research focuses on winter jujube trees and is conducted in a greenhouse environment in a structured orchard to effectively control various growth conditions. The development of a robotic system for winter jujube harvesting is crucial for achieving mechanized harvesting. Harvesting winter jujubes efficiently requires accurate detection and location. To address this issue, we proposed a winter jujube detection and localization method based on the MobileVit-Large selective kernel-GSConv-YOLO (MLG-YOLO) model. First, a winter jujube dataset is constructed to comprise various scenarios of lighting conditions and leaf obstructions to train the model. Subsequently, the MLG-YOLO model based on YOLOv8n is proposed, with improvements including the incorporation of MobileViT to reconstruct the backbone and keep the model more lightweight. The neck is enhanced with LSKblock to capture broader contextual information, and the lightweight convolutional technology GSConv is introduced to further improve the detection accuracy. Finally, a 3-dimensional localization method combining MLG-YOLO with RGB-D cameras is proposed. Through ablation studies, comparative experiments, 3-dimensional localization error tests, and full-scale tree detection tests in laboratory environments and structured orchard environments, the effectiveness of the MLG-YOLO model in detecting and locating winter jujubes is confirmed. With MLG-YOLO, the mAP increases by 3.50%, while the number of parameters is reduced by 61.03% in comparison with the baseline YOLOv8n model. Compared with mainstream object detection models, MLG-YOLO excels in both detection accuracy and model size, with a mAP of 92.70%, a precision of 86.80%, a recall of 84.50%, and a model size of only 2.52 MB. The average detection accuracy in the laboratory environmental testing of winter jujube reached 100%, and the structured orchard environmental accuracy reached 92.82%. The absolute positioning errors in the <i>X</i>, <i>Y</i>, and <i>Z</i> directions are 4.20, 4.70, and 3.90 mm, respectively. This method enables accurate detection and localization of winter jujubes, providing technical support for winter jujube harvesting robots.</p>","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"6 ","pages":"0258"},"PeriodicalIF":7.6,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11418275/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142308443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Plant Phenomics, article 0252. Pub Date: 2024-09-18. DOI: 10.34133/plantphenomics.0252
Fruit Water Stress Index of Apple Measured by Means of Temperature-Annotated 3D Point Cloud.
Nikos Tsoulias, Arash Khosravi, Werner B Herppich, Manuela Zude-Sasse
Abstract: In applied ecophysiological studies related to global warming and water scarcity, the water status of fruit is of increasing importance in the context of fresh food production. In the present work, a fruit water stress index (FWSI) is introduced for close analysis of the relationship between fruit and air temperatures. A sensor system consisting of a light detection and ranging (LiDAR) sensor and a thermal camera was employed to remotely analyze apple trees (Malus × domestica Borkh. 'Gala') by means of 3D point clouds. After geometric calibration of the sensor system, the temperature values were assigned to the corresponding 3D point cloud to reconstruct a thermal point cloud of the entire canopy. The annotated points belonging to the fruit were segmented, providing annotated fruit point clouds. The estimated 3D distribution of fruit surface temperature (T_Est) was highly correlated with manually recorded reference temperature (r² = 0.93). As a methodological innovation, based on T_Est, the fruit water stress index (FWSI_Est) was introduced, potentially providing more detailed information on the fruit compared to the crop water stress index of the whole canopy obtained from established 2D thermal imaging. FWSI_Est showed low error when compared to manual reference data. Considering 302 apples in total, FWSI_Est increased during the season. Additional diel measurements on 50 apples, each at 6 measurements per day (in total 600 apples), were performed in the commercial harvest window. FWSI_Est calculated with air temperature plus 5 °C showed a diel hysteresis. Such diurnal changes of FWSI_Est, and those throughout fruit development, provide a new ecophysiological tool for 3D spatiotemporal fruit analysis and a more efficient, larger-sample insight into the specific requirements of crop management.
{"title":"Auto-LIA: The Automated Vision-Based Leaf Inclination Angle Measurement System Improves Monitoring of Plant Physiology.","authors":"Sijun Jiang,Xingcai Wu,Qi Wang,Zhixun Pei,Yuxiang Wang,Jian Jin,Ying Guo,RunJiang Song,Liansheng Zang,Yong-Jin Liu,Gefei Hao","doi":"10.34133/plantphenomics.0245","DOIUrl":"https://doi.org/10.34133/plantphenomics.0245","url":null,"abstract":"Plant sensors are commonly used in agricultural production, landscaping, and other fields to monitor plant growth and environmental parameters. As an important basic parameter in plant monitoring, leaf inclination angle (LIA) not only influences light absorption and pesticide loss but also contributes to genetic analysis and other plant phenotypic data collection. The measurements of LIA provide a basis for crop research as well as agricultural management, such as water loss, pesticide absorption, and illumination radiation. On the one hand, existing efficient solutions, represented by light detection and ranging (LiDAR), can provide the average leaf angle distribution of a plot. On the other hand, the labor-intensive schemes represented by hand measurements can show high accuracy. However, the existing methods suffer from low automation and weak leaf-plant correlation, limiting the application of individual plant leaf phenotypes. To improve the efficiency of LIA measurement and provide the correlation between leaf and plant, we design an image-phenotype-based noninvasive and efficient optical sensor measurement system, which combines multi-processes implemented via computer vision technologies and RGB images collected by physical sensing devices. Specifically, we utilize object detection to associate leaves with plants and adopt 3-dimensional reconstruction techniques to recover the spatial information of leaves in computational space. Then, we propose a spatial continuity-based segmentation algorithm combined with a graphical operation to implement the extraction of leaf key points. Finally, we seek the connection between the computational space and the actual physical space and put forward a method of leaf transformation to realize the localization and recovery of the LIA in physical space. Overall, our solution is characterized by noninvasiveness, full-process automation, and strong leaf-plant correlation, which enables efficient measurements at low cost. In this study, we validate Auto-LIA for practicality and compare the accuracy with the best solution that is acquired with an expensive and invasive LiDAR device. Our solution demonstrates its competitiveness and usability at a much lower equipment cost, with an accuracy of only 2. 5° less than that of the widely used LiDAR. As an intelligent processing system for plant sensor signals, Auto-LIA provides fully automated measurement of LIA, improving the monitoring of plant physiological information for plant protection. We make our code and data publicly available at http://autolia.samlab.cn.","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"32 1","pages":"0245"},"PeriodicalIF":6.5,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142209421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AFM-YOLOv8s: An Accurate, Fast, and Highly Robust Model for Detection of Sporangia of Plasmopara viticola with Various Morphological Variants.","authors":"Changqing Yan,Zeyun Liang,Ling Yin,Shumei Wei,Qi Tian,Ying Li,Han Cheng,Jindong Liu,Qiang Yu,Gang Zhao,Junjie Qu","doi":"10.34133/plantphenomics.0246","DOIUrl":"https://doi.org/10.34133/plantphenomics.0246","url":null,"abstract":"Monitoring spores is crucial for predicting and preventing fungal- or oomycete-induced diseases like grapevine downy mildew. However, manual spore or sporangium detection using microscopes is time-consuming and labor-intensive, often resulting in low accuracy and slow processing speed. Emerging deep learning models like YOLOv8 aim to rapidly detect objects accurately but struggle with efficiency and accuracy when identifying various sporangia formations amidst complex backgrounds. To address these challenges, we developed an enhanced YOLOv8s, namely, AFM-YOLOv8s, by introducing an Adaptive Cross Fusion module, a lightweight feature extraction module FasterCSP (Faster Cross-Stage Partial Module), and a novel loss function MPDIoU (Minimum Point Distance Intersection over Union). AFM-YOLOv8s replaces the C2f module with FasterCSP, a more efficient feature extraction module, to reduce model parameter size and overall depth. In addition, we developed and integrated an Adaptive Cross Fusion Feature Pyramid Network to enhance the fusion of multiscale features within the YOLOv8 architecture. Last, we utilized the MPDIoU loss function to improve AFM-YOLOv8s' ability to locate bounding boxes and learn object spatial localization. Experimental results demonstrated AFM-YOLOv8s' effectiveness, achieving 91.3% accuracy (mean average precision at 50% IoU) on our custom grapevine downy mildew sporangium dataset-a notable improvement of 2.7% over the original YOLOv8 algorithm. FasterCSP reduced model complexity and size, enhanced deployment versatility, and improved real-time detection, chosen over C2f for easier integration despite minor accuracy trade-off. Currently, the AFM-YOLOv8s model is running as a backend algorithm in an open web application, providing valuable technical support for downy mildew prevention and control efforts and fungicide resistance studies.","PeriodicalId":20318,"journal":{"name":"Plant Phenomics","volume":"34 1","pages":"0246"},"PeriodicalIF":6.5,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142226474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}