Using UAV-based multispectral images and CGS-YOLO algorithm to distinguish maize seeding from weed
Boyi Tang, Jingping Zhou, Chunjiang Zhao, Yuchun Pan, Yao Lu, Chang Liu, Kai Ma, Xuguang Sun, Ruifang Zhang, Xiaohe Gu
Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 162-181. Published 2025-02-17. DOI: 10.1016/j.aiia.2025.02.007

Abstract: Accurate recognition of maize seedlings at the plot scale under weed disturbance is crucial for early seedling replenishment and weed removal. Currently, UAV-based maize seedling recognition depends primarily on RGB images. The main purpose of this study is to compare the performance of multispectral and RGB images from an unmanned aerial vehicle (UAV) for maize seedling recognition using deep learning algorithms, and to assess how different levels of weed coverage disturb the recognition of maize seedlings. Firstly, principal component analysis (PCA) was used to transform the multispectral images. Secondly, by introducing the CARAFE upsampling operator and a small-target detection layer (SLAY), we extracted the contextual information of each pixel to retain weak features in the maize seedling images. Thirdly, a global attention mechanism (GAM) was employed to capture maize seedling features through the dual attention of spatial and channel information. These components form the proposed CGS-YOLO algorithm. Finally, we compared the improved algorithm with a series of deep learning detectors, including YOLOv3, v5, v6, and v8. The results show that after PCA transformation, the recognition mAP of maize seedlings reaches 82.6 %, a 3.1-percentage-point improvement over RGB images. Compared with YOLOv8, YOLOv6, YOLOv5, and YOLOv3, CGS-YOLO improves mAP by 3.8, 4.2, 4.5, and 6.6 percentage points, respectively. As weed coverage increases, recognition of maize seedlings gradually degrades: when weed coverage exceeds 70 %, the mAP difference becomes significant, yet CGS-YOLO still maintains a recognition mAP of 72 %. Therefore, for maize seedling recognition, UAV-based multispectral images perform better than RGB images, and applying the CGS-YOLO deep learning algorithm to UAV multispectral images proves beneficial for recognizing maize seedlings under weed disturbance.
Addressing computation resource exhaustion associated with deep learning training of three-dimensional hyperspectral images using multiclass weed classification
Billy G. Ram, Kirk Howatt, Joseph Mettler, Xin Sun
Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 131-146. Published 2025-02-11. DOI: 10.1016/j.aiia.2025.02.005

Abstract: Addressing the computational bottleneck of training deep learning models on high-resolution, three-dimensional images, this study introduces an optimized approach combining distributed learning (parallelism), image resolution selection, and data augmentation. We propose analysis methodologies that help train deep learning (DL) models on proximal hyperspectral images, demonstrating superior performance in eight-class crop (canola, field pea, sugarbeet, and flax) and weed (redroot pigweed, resistant kochia, waterhemp, and ragweed) classification. State-of-the-art architectures (ResNet-50, VGG-16, DenseNet, EfficientNet) are compared with a ResNet-50-inspired Hyper-Residual Convolutional Neural Network. Our findings reveal that an image resolution of 100x100x54 maximizes accuracy while maintaining computational efficiency, surpassing the performance of 150x150x54 and 50x50x54 images. By employing data parallelism, we overcome system memory limitations and achieve exceptional classification results, with test accuracies and F1-scores reaching 0.96 and 0.97, respectively. This research highlights the potential of residual-based networks for analyzing hyperspectral images and offers valuable insights into optimizing deep learning models in resource-constrained environments. It presents detailed training pipelines for deep learning models that use large (>4k) hyperspectral training samples, including background and without any data preprocessing, enabling deep learning models to be trained directly on raw hyperspectral data.
Advancing precision agriculture: A comparative analysis of YOLOv8 for multi-class weed detection in cotton cultivation
Ameer Tamoor Khan, Signe Marie Jensen, Abdul Rehman Khan
Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 182-191. Published 2025-02-11. DOI: 10.1016/j.aiia.2025.01.013

Abstract: Effective weed management plays a critical role in enhancing the productivity and sustainability of cotton cultivation. The rapid emergence of herbicide-resistant weeds has underscored the need for innovative solutions to the challenges of precise weed detection. This paper investigates the potential of YOLOv8, the latest advancement in the YOLO family of object detectors, for multi-class weed detection in U.S. cotton fields. Leveraging the CottonWeedDet12 dataset, which includes diverse weed species captured under varying environmental conditions, the study provides a comprehensive evaluation of YOLOv8's performance. A comparative analysis with earlier YOLO variants reveals substantial improvements in detection accuracy, as evidenced by higher mean Average Precision (mAP) scores. These findings highlight YOLOv8's superior capability to generalize across complex field scenarios, making it a promising candidate for real-time applications in precision agriculture. The enhanced architecture of YOLOv8, featuring anchor-free detection, an advanced Feature Pyramid Network (FPN), and an optimized loss function, enables accurate detection even under challenging conditions. This research emphasizes the importance of machine vision technologies in modern agriculture, particularly for minimizing herbicide reliance and promoting sustainable farming practices. The results not only validate YOLOv8's efficacy in multi-class weed detection but also pave the way for its integration into autonomous agricultural systems, contributing to the broader goals of precision agriculture and ecological sustainability.
{"title":"Precision agriculture technologies for soil site-specific nutrient management: A comprehensive review","authors":"Niharika Vullaganti, Billy G. Ram, Xin Sun","doi":"10.1016/j.aiia.2025.02.001","DOIUrl":"10.1016/j.aiia.2025.02.001","url":null,"abstract":"<div><div>Amidst the growing food demands of an increasing population, agricultural intensification frequently depends on excessive chemical and fertilizer applications. While this approach initially boosts crop yields, it effects long-term sustainability through soil degradation and compromised food quality. Thus, prioritizing soil health while enhancing crop production is essential for sustainable food production. Site-Specific Nutrient Management (SSNM) emerges as a critical strategy to increase crop production, maintain soil health, and reduce environmental pollution. Despite its potential, the application of SSNM technologies remain limited in farmers' fields due to existing research gaps. This review critically analyzes and presents research conducted in SSNM in the past 11 years (2013–2024), identifying gaps and future research directions. A comprehensive study of 97 relevant research publications reveals several key findings: a) Electrochemical sensing and spectroscopy are the two widely explored areas in SSNM research, b) Despite numerous technologies in SSNM, each has its own limitation, preventing any single technology from being ideal, c) The selection of models and preprocessing techniques significantly impacts nutrient prediction accuracy, d) No single sensor or sensor combination can predict all soil properties, as suitability is highly attribute-specific. This review provides researchers, and technical personnel in precision agriculture, and farmers with detailed insights into SSNM research, its implementation, limitations, challenges, and future research directions.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 147-161"},"PeriodicalIF":8.2,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143507923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient strawberry segmentation model based on Mask R-CNN and TensorRT
Anthony Crespo, Claudia Moncada, Fabricio Crespo, Manuel Eugenio Morocho-Cayamcela
Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 327-337. Published 2025-02-03. DOI: 10.1016/j.aiia.2025.01.008

Abstract: Artificial intelligence (AI), and particularly computer vision (CV), has numerous applications in agriculture. The production and consumption of strawberries have grown considerably in recent years, making it a challenge for producers to meet the rising demand. One of the main problems in cultivating this fruit, however, is the high cost and long duration of picking. Automatic harvesting has emerged as an option to address this difficulty, and fruit instance segmentation plays a crucial role in such systems. Fruit segmentation concerns the identification and separation of individual fruits within a crop, allowing a more efficient and accurate harvesting process. Although deep learning (DL) techniques have shown potential for this task, model complexity makes implementation in real-time systems difficult, so a model that performs adequately in real time while retaining good precision is of great interest. With this motivation, this work presents an efficient Mask R-CNN model for instance segmentation of strawberry fruits. The efficiency of the model is assessed in terms of the frames per second (FPS) it can process, its size in megabytes (MB), and its mean average precision (mAP). Two approaches are provided: the first trains the model with the Detectron2 library, while the second trains it with the NVIDIA TAO Toolkit. In both cases, NVIDIA TensorRT is used to optimize the models. The results show that the best Mask R-CNN model, without optimization, achieves 83.45 mAP, 4 FPS, and a size of 351 MB; after TensorRT optimization, it achieves 83.17 mAP, 25.46 FPS, and only 48.2 MB, making it a suitable model for implementation in real-time systems.
Automatic body condition scoring system for dairy cows in group state based on improved YOLOv5 and video analysis
Jingwen Li, Pengbo Zeng, Shuai Yue, Zhiyang Zheng, Lifeng Qin, Huaibo Song
Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 350-362. Published 2025-01-27. DOI: 10.1016/j.aiia.2025.01.010

Abstract: This study proposes an automated body condition scoring system for dairy cows based on an improved YOLOv5, used to assess the body condition distribution of herd cows, which significantly impacts herd productivity and feeding management. A dataset was created by capturing images of the cows' hindquarters with an image sensor at the entrance of the milking hall. The system enhances feature extraction by introducing dual path networks and convolutional block attention modules, and improves efficiency by replacing some modules of the standard YOLOv5s with depthwise separable convolutions to reduce parameters. Furthermore, an automatic detection and segmentation algorithm achieves individual cow segmentation and body condition acquisition in video, from which the system computes the body condition distribution of cows in a group state. The experimental findings demonstrate that the proposed model outperforms the original YOLOv5 network with higher accuracy and fewer computations and parameters: the precision, recall, and mean average precision of the model are 94.3 %, 92.5 %, and 91.8 %, respectively. The algorithm achieved an overall detection rate of 94.2 % for individual cow segmentation and body condition acquisition in video, with a body condition scoring accuracy of 92.5 % among accurately detected cows and an overall body condition scoring accuracy of 87.1 % across the 10 video tests.
{"title":"Efficient one-stage detection of shrimp larvae in complex aquaculture scenarios","authors":"Guoxu Zhang , Tianyi Liao , Yingyi Chen , Ping Zhong , Zhencai Shen , Daoliang Li","doi":"10.1016/j.aiia.2025.01.009","DOIUrl":"10.1016/j.aiia.2025.01.009","url":null,"abstract":"<div><div>The swift evolution of deep learning has greatly benefited the field of intensive aquaculture. Specifically, deep learning-based shrimp larvae detection has offered important technical assistance for counting shrimp larvae and recognizing abnormal behaviors. Firstly, the transparent bodies and small sizes of shrimp larvae, combined with complex scenarios due to variations in light intensity and water turbidity, make it challenging for current detection methods to achieve high accuracy. Secondly, deep learning-based object detection demands substantial computing power and storage space, which restricts its application on edge devices. This paper proposes an efficient one-stage shrimp larvae detection method, FAMDet, specifically designed for complex scenarios in intensive aquaculture. Firstly, different from the ordinary detection methods, it exploits an efficient FasterNet backbone, constructed with partial convolution, to extract effective multi-scale shrimp larvae features. Meanwhile, we construct an adaptively bi-directional fusion neck to integrate high-level semantic information and low-level detail information of shrimp larvae in a matter that sufficiently merges features and further mitigates noise interference. Finally, a decoupled detection head equipped with MPDIoU is used for precise bounding box regression of shrimp larvae. We collected images of shrimp larvae from multiple scenarios and labeled 108,365 targets for experiments. Compared with the ordinary detection methods (Faster RCNN, SSD, RetinaNet, CenterNet, FCOS, DETR, and YOLOX_s), FAMDet has obtained considerable advantages in accuracy, speed, and complexity. Compared with the outstanding one-stage method YOLOv8s, it has improved accuracy while reducing 57 % parameters, 37 % FLOPs, 22 % inference latency per image on CPU, and 56 % storage overhead. Furthermore, FAMDet has still outperformed multiple lightweight methods (EfficientDet, RT-DETR, GhostNetV2, EfficientFormerV2, EfficientViT, and MobileNetV4). In addition, we conducted experiments on the public dataset (VOC 07 + 12) to further verify the effectiveness of FAMDet. Consequently, the proposed method can effectively alleviate the limitations faced by resource-constrained devices and achieve superior shrimp larvae detection results.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 338-349"},"PeriodicalIF":8.2,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143704748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identifying key factors influencing maize stalk lodging resistance through wind tunnel simulations with machine learning algorithms","authors":"Guanmin Huang, Ying Zhang, Shenghao Gu, Weiliang Wen, Xianju Lu, Xinyu Guo","doi":"10.1016/j.aiia.2025.01.007","DOIUrl":"10.1016/j.aiia.2025.01.007","url":null,"abstract":"<div><div>Climate change has intensified maize stalk lodging, severely impacting global maize production. While numerous traits influence stalk lodging resistance, their relative importance remains unclear, hindering breeding efforts. This study introduces an combining wind tunnel testing with machine learning algorithms to quantitatively evaluate stalk lodging resistance traits. Through extensive field experiments and literature review, we identified and measured 74 phenotypic traits encompassing plant morphology, biomass, and anatomical characteristics in maize plants. Correlation analysis revealed a median linear correlation coefficient of 0.497 among these traits, with 15.1 % of correlations exceeding 0.8. Principal component analysis showed that the first five components explained 90 % of the total variance, indicating significant trait interactions. Through feature engineering and gradient boosting regression, we developed a high-precision wind speed-ear displacement prediction model (R<sup>2</sup> = 0.93) and identified 29 key traits critical for stalk lodging resistance. Sensitivity analysis revealed plant height as the most influential factor (sensitivity coefficient: −3.87), followed by traits of the 7th internode including epidermis layer thickness (0.62), pith area (−0.60), and lignin content (0.35). Our methodological framework not only provides quantitative insights into maize stalk lodging resistance mechanisms but also establishes a systematic approach for trait evaluation. The findings offer practical guidance for breeding programs focused on enhancing stalk lodging resistance and yield stability under climate change conditions, with potential applications in agronomic practice optimization and breeding strategy development.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 316-326"},"PeriodicalIF":8.2,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143704965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comprehensive review on 3D point cloud segmentation in plants","authors":"Hongli Song , Weiliang Wen , Sheng Wu , Xinyu Guo","doi":"10.1016/j.aiia.2025.01.006","DOIUrl":"10.1016/j.aiia.2025.01.006","url":null,"abstract":"<div><div>Segmentation of three-dimensional (3D) point clouds is fundamental in comprehending unstructured structural and morphological data. It plays a critical role in research related to plant phenomics, 3D plant modeling, and functional-structural plant modeling. Although technologies for plant point cloud segmentation (PPCS) have advanced rapidly, there has been a lack of a systematic overview of the development process. This paper presents an overview of the progress made in 3D point cloud segmentation research in plants. It starts by discussing the methods used to acquire point clouds in plants, and analyzes the impact of point cloud resolution and quality on the segmentation task. It then introduces multi-scale point cloud segmentation in plants. The paper summarizes and analyzes traditional methods for PPCS, including the global and local features. This paper discusses the progress of machine learning-based segmentation on plant point clouds through supervised, unsupervised, and integrated approaches. It also summarizes the datasets that for PPCS using deep learning-oriented methods and explains the advantages and disadvantages of deep learning-based methods for projection-based, voxel-based, and point-based approaches respectively. Finally, the development of PPCS is discussed and prospected. Deep learning methods are predicted to become dominant in the field of PPCS, and 3D point cloud segmentation would develop towards more automated with higher resolution and precision.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 296-315"},"PeriodicalIF":8.2,"publicationDate":"2025-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143704964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-throughput phenotyping techniques for forage: Status, bottleneck, and challenges
Tao Cheng, Dongyan Zhang, Gan Zhang, Tianyi Wang, Weibo Ren, Feng Yuan, Yaling Liu, Zhaoming Wang, Chunjiang Zhao
Artificial Intelligence in Agriculture, Volume 15, Issue 1, Pages 98-115. Published 2025-01-10. DOI: 10.1016/j.aiia.2025.01.003

Abstract: High-throughput phenotyping (HTP) technology is now a significant bottleneck in the efficient selection and breeding of superior forage genetic resources. To better understand the status of forage phenotyping research and identify key directions for development, this review summarizes advances in HTP technology for forage phenotypic analysis over the past ten years. It reviews the unique aspects and research priorities of forage phenotypic monitoring, highlights key remote sensing platforms, examines the application of advanced sensing technology for quantifying phenotypic traits, explores artificial intelligence (AI) algorithms for phenotypic data integration and analysis, and assesses recent progress in phenotypic genomics. The practical application of HTP technology to forage remains constrained by several challenges, including establishing uniform data collection standards, designing effective algorithms to handle complex genetic and environmental interactions, deepening the joint exploration of phenomics and genomics, resolving the ill-posed inversion of forage phenotypic growth monitoring models, and developing low-cost forage phenotyping equipment. Resolving these challenges will unlock the full potential of HTP, enabling precise identification of superior forage traits, accelerating the breeding of superior varieties, and ultimately improving forage yield.