Journal of Imaging: Latest Articles

Machine Learning-Based Approaches for Breast Density Estimation from Mammograms: A Comprehensive Review.
IF 2.7
Journal of Imaging Pub Date: 2025-01-26 DOI: 10.3390/jimaging11020038
Khaldoon Alhusari, Salam Dhou
{"title":"Machine Learning-Based Approaches for Breast Density Estimation from Mammograms: A Comprehensive Review.","authors":"Khaldoon Alhusari, Salam Dhou","doi":"10.3390/jimaging11020038","DOIUrl":"10.3390/jimaging11020038","url":null,"abstract":"<p><p>Breast cancer, as of 2022, is the most prevalent type of cancer in women. Breast density-a measure of the non-fatty tissue in the breast-is a strong risk factor for breast cancer that can be estimated from mammograms. The importance of studying breast density is twofold. First, high breast density can be a factor in lowering mammogram sensitivity, as dense tissue can mask tumors. Second, higher breast density is associated with an increased risk of breast cancer, making accurate assessments vital. This paper presents a comprehensive review of the mammographic density estimation literature, with an emphasis on machine-learning-based approaches. The approaches reviewed can be classified as visual, software-, machine learning-, and segmentation-based. Machine learning methods can be further broken down into two categories: traditional machine learning and deep learning approaches. The most commonly utilized models are support vector machines (SVMs) and convolutional neural networks (CNNs), with classification accuracies ranging from 76.70% to 98.75%. Major limitations of the current works include subjectivity and cost-inefficiency. Future work can focus on addressing these limitations, potentially through the use of unsupervised segmentation and state-of-the-art deep learning models such as transformers. By addressing the current limitations, future research can pave the way for more reliable breast density estimation methods, ultimately improving early detection and diagnosis.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856162/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
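The traditional pipelines the review surveys pair hand-crafted mammographic features with an SVM classifier. The following is a minimal sketch of that pattern in scikit-learn, not code from any reviewed paper; the feature matrix and BI-RADS-style labels are synthetic stand-ins.

```python
# Minimal sketch: SVM classification of breast density categories from
# hand-crafted per-image features (synthetic stand-ins used here).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))      # stand-in features (e.g., mean intensity, texture stats)
y = rng.integers(0, 4, size=200)   # BI-RADS density classes A-D encoded as 0-3

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())  # 5-fold CV accuracy
```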
iForal: Automated Handwritten Text Transcription for Historical Medieval Manuscripts.
IF 2.7
Journal of Imaging Pub Date: 2025-01-25 DOI: 10.3390/jimaging11020036
Alexandre Matos, Pedro Almeida, Paulo L Correia, Osvaldo Pacheco
{"title":"iForal: Automated Handwritten Text Transcription for Historical Medieval Manuscripts.","authors":"Alexandre Matos, Pedro Almeida, Paulo L Correia, Osvaldo Pacheco","doi":"10.3390/jimaging11020036","DOIUrl":"10.3390/jimaging11020036","url":null,"abstract":"<p><p>The transcription of historical manuscripts aims at making our cultural heritage more accessible to experts and also to the larger public, but it is a challenging and time-intensive task. This paper contributes an automated solution for text layout recognition, segmentation, and recognition to speed up the transcription process of historical manuscripts. The focus is on transcribing Portuguese municipal documents from the Middle Ages in the context of the iForal project, including the contribution of an annotated dataset containing Portuguese medieval documents, notably a corpus of 67 Portuguese royal charter data. The proposed system can accurately identify document layouts, isolate the text, segment, and transcribe it. Results for the layout recognition model achieved 0.98 mAP@0.50 and 0.98 precision, while the text segmentation model achieved 0.91 mAP@0.50, detecting 95% of the lines. The text recognition model achieved 8.1% character error rate (CER) and 25.5% word error rate (WER) on the test set. These results can then be validated by palaeographers with less effort, contributing to achieving high-quality transcriptions faster. Moreover, the automatic models developed can be utilized as a basis for the creation of models that perform well for other historical handwriting styles, notably using transfer learning techniques. The contributed dataset has been made available on the HTR United catalogue, which includes training datasets to be used for automatic transcription or segmentation models. The models developed can be used, for instance, on the eSriptorium platform, which is used by a vast community of experts.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856379/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
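The CER and WER figures quoted above are standard edit-distance metrics. Below is a minimal sketch of how they are conventionally computed (the usual Levenshtein formulation, not the authors' exact evaluation code): edit distance between prediction and ground truth, divided by the ground-truth length, over characters for CER and over word tokens for WER.

```python
# Minimal sketch of CER/WER via a rolling one-row Levenshtein DP.
def edit_distance(ref, hyp):
    d = list(range(len(hyp) + 1))          # DP row for the empty reference
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i               # prev holds d[i-1][j-1]
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,        # deletion
                                   d[j - 1] + 1,    # insertion
                                   prev + (r != h)) # substitution / match
    return d[len(hyp)]

def cer(ref, hyp):
    return edit_distance(ref, hyp) / max(len(ref), 1)

def wer(ref, hyp):
    ref_words = ref.split()
    return edit_distance(ref_words, hyp.split()) / max(len(ref_words), 1)

print(cer("carta regia", "carta regla"))   # one substitution -> ~0.0909
```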
Design of an Optimal Convolutional Neural Network Architecture for MRI Brain Tumor Classification by Exploiting Particle Swarm Optimization.
IF 2.7
Journal of Imaging Pub Date: 2025-01-24 DOI: 10.3390/jimaging11020031
Sofia El Amoury, Youssef Smili, Youssef Fakhri
{"title":"Design of an Optimal Convolutional Neural Network Architecture for MRI Brain Tumor Classification by Exploiting Particle Swarm Optimization.","authors":"Sofia El Amoury, Youssef Smili, Youssef Fakhri","doi":"10.3390/jimaging11020031","DOIUrl":"10.3390/jimaging11020031","url":null,"abstract":"<p><p>The classification of brain tumors using MRI scans is critical for accurate diagnosis and effective treatment planning, though it poses significant challenges due to the complex and varied characteristics of tumors, including irregular shapes, diverse sizes, and subtle textural differences. Traditional convolutional neural network (CNN) models, whether handcrafted or pretrained, frequently fall short in capturing these intricate details comprehensively. To address this complexity, an automated approach employing Particle Swarm Optimization (PSO) has been applied to create a CNN architecture specifically adapted for MRI-based brain tumor classification. PSO systematically searches for an optimal configuration of architectural parameters-such as the types and numbers of layers, filter quantities and sizes, and neuron numbers in fully connected layers-with the objective of enhancing classification accuracy. This performance-driven method avoids the inefficiencies of manual design and iterative trial and error. Experimental results indicate that the PSO-optimized CNN achieves a classification accuracy of 99.19%, demonstrating significant potential for improving diagnostic precision in complex medical imaging applications and underscoring the value of automated architecture search in advancing critical healthcare technology.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11857081/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
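As a rough illustration of the search loop described above (not the paper's implementation: the fitness surrogate, bounds, and PSO coefficients here are all stand-in assumptions), a canonical PSO over two architecture hyperparameters looks like this; in the real system, the fitness call would train and validate a candidate CNN.

```python
# Minimal canonical PSO over two CNN hyperparameters (filters, dense units).
import numpy as np

def fitness(p):  # stand-in for 1 - validation_accuracy(build_and_train_cnn(p))
    return np.sum((p - np.array([64.0, 128.0])) ** 2)

rng = np.random.default_rng(1)
lo, hi = np.array([8.0, 16.0]), np.array([256.0, 512.0])
x = rng.uniform(lo, hi, size=(20, 2))            # particle positions
v = np.zeros_like(x)                             # velocities
pbest = x.copy()
pbest_f = np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmin()]

for _ in range(50):
    r1, r2 = rng.random((2, 20, 2))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)                   # keep particles inside bounds
    f = np.array([fitness(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()]

print(np.round(gbest))                           # converges toward [64, 128]
```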
Investigating the Potential of Latent Space for the Classification of Paint Defects.
IF 2.7
Journal of Imaging Pub Date: 2025-01-24 DOI: 10.3390/jimaging11020033
Doaa Almhaithawi, Alessandro Bellini, Georgios C Chasparis, Tania Cerquitelli
{"title":"Investigating the Potential of Latent Space for the Classification of Paint Defects.","authors":"Doaa Almhaithawi, Alessandro Bellini, Georgios C Chasparis, Tania Cerquitelli","doi":"10.3390/jimaging11020033","DOIUrl":"10.3390/jimaging11020033","url":null,"abstract":"<p><p>Defect detection methods have greatly assisted human operators in various fields, from textiles to surfaces and mechanical components, by facilitating decision-making processes and reducing visual fatigue. This area of research is widely recognized as a cross-industry concern, particularly in the manufacturing sector. Nevertheless, each specific application brings unique challenges that require tailored solutions. This paper presents a novel framework for leveraging latent space representations in defect detection tasks, focusing on improving explainability while maintaining accuracy. This work delves into how latent spaces can be utilized by integrating unsupervised and supervised analyses. We propose a hybrid methodology that not only identifies known defects but also provides a mechanism for detecting anomalies and dynamically adapting to new defect types. This dual approach supports human operators, reducing manual workload and enhancing interpretability.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856999/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
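A minimal sketch of the hybrid latent-space idea, assuming PCA as a stand-in for a learned encoder and a nearest-neighbor distance threshold for anomaly flagging (all shapes, class counts, and thresholds below are illustrative, not the paper's): known defects are classified supervised in the latent space, while samples far from the training manifold are flagged as possible new defect types.

```python
# Minimal sketch: supervised classification of known defects in a latent
# space plus unsupervised anomaly flagging by distance to training samples.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 64 * 64))   # flattened image patches (stand-in)
y_train = rng.integers(0, 3, size=300)      # known defect classes

enc = PCA(n_components=16).fit(X_train)     # stand-in for a trained encoder
Z = enc.transform(X_train)
clf = KNeighborsClassifier(n_neighbors=5).fit(Z, y_train)

def predict(x, tau=3.0):                    # tau: illustrative distance threshold
    z = enc.transform(x.reshape(1, -1))
    d = np.linalg.norm(Z - z, axis=1).min() # distance to nearest known sample
    return "anomaly (possible new defect)" if d > tau else clf.predict(z)[0]

print(predict(rng.normal(size=64 * 64)))
```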
Revealing Gender Bias from Prompt to Image in Stable Diffusion.
IF 2.7
Journal of Imaging Pub Date: 2025-01-24 DOI: 10.3390/jimaging11020035
Yankun Wu, Yuta Nakashima, Noa Garcia
{"title":"Revealing Gender Bias from Prompt to Image in Stable Diffusion.","authors":"Yankun Wu, Yuta Nakashima, Noa Garcia","doi":"10.3390/jimaging11020035","DOIUrl":"10.3390/jimaging11020035","url":null,"abstract":"<p><p>Social biases in generative models have gained increasing attention. This paper proposes an automatic evaluation protocol for text-to-image generation, examining how gender bias originates and perpetuates in the generation process of Stable Diffusion. Using triplet prompts that vary by gender indicators, we trace presentations at several stages of the generation process and explore dependencies between prompts and images. Our findings reveal the bias persists throughout all internal stages of the generating process and manifests in the entire images. For instance, differences in object presence, such as different instruments and outfit preferences, are observed across genders and extend to overall image layouts. Moreover, our experiments demonstrate that neutral prompts tend to produce images more closely aligned with those from masculine prompts than with their female counterparts. We also investigate prompt-image dependencies to further understand how bias is embedded in the generated content. Finally, we offer recommendations for developers and users to mitigate this effect in text-to-image generation.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856082/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143494003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
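The triplet-prompt protocol can be sketched as follows; this is an assumption about the setup, not the authors' released code. Each context is rendered with feminine, masculine, and neutral indicators, and the resulting prompt variants would then be fed to a text-to-image pipeline (e.g., diffusers' StableDiffusionPipeline) for comparison.

```python
# Minimal sketch: building gender-indicator prompt triplets for evaluation.
contexts = ["a {} playing a musical instrument", "a {} working in an office"]
indicators = {"feminine": "woman", "masculine": "man", "neutral": "person"}

triplets = [
    {label: ctx.format(word) for label, word in indicators.items()}
    for ctx in contexts
]
for t in triplets:
    print(t)
# Generated images per triplet can then be compared for object presence,
# outfits, and overall layout across the three gender variants.
```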
GCNet: A Deep Learning Framework for Enhanced Grape Cluster Segmentation and Yield Estimation Incorporating Occluded Grape Detection with a Correction Factor for Indoor Experimentation.
IF 2.7
Journal of Imaging Pub Date: 2025-01-24 DOI: 10.3390/jimaging11020034
Rubi Quiñones, Syeda Mariah Banu, Eren Gultepe
{"title":"GCNet: A Deep Learning Framework for Enhanced Grape Cluster Segmentation and Yield Estimation Incorporating Occluded Grape Detection with a Correction Factor for Indoor Experimentation.","authors":"Rubi Quiñones, Syeda Mariah Banu, Eren Gultepe","doi":"10.3390/jimaging11020034","DOIUrl":"10.3390/jimaging11020034","url":null,"abstract":"<p><p>Object segmentation algorithms have heavily relied on deep learning techniques to estimate the count of grapes which is a strong indicator for the yield success of grapes. The issue with using object segmentation algorithms for grape analytics is that they are limited to counting only the visible grapes, thus omitting hidden grapes, which affect the true estimate of grape yield. Many grapes are occluded because of either the compactness of the grape bunch cluster or due to canopy interference. This introduces the need for models to be able to estimate the unseen berries to give a more accurate estimate of the grape yield by improving grape cluster segmentation. We propose the Grape Counting Network (GCNet), a novel framework for grape cluster segmentation, integrating deep learning techniques with correction factors to address challenges in indoor yield estimation. GCNet incorporates occlusion adjustments, enhancing segmentation accuracy even under conditions of foliage and cluster compactness, and setting new standards in agricultural indoor imaging analysis. This approach improves yield estimation accuracy, achieving a R² of 0.96 and reducing mean absolute error (MAE) by 10% compared to previous methods. We also propose a new dataset called GrapeSet which contains visible imagery of grape clusters imaged indoors, along with their ground truth mask, total grape count, and weight in grams. The proposed framework aims to encourage future research in determining which features of grapes can be leveraged to estimate the correct grape yield count, equip grape harvesters with the knowledge of early yield estimation, and produce accurate results in object segmentation algorithms for grape analytics.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856392/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
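The correction-factor idea amounts to scaling the visible berry count by an estimated occlusion rate. A minimal sketch follows; the rate value is a made-up illustration, not the paper's fitted factor.

```python
# Minimal sketch: scaling a visible-berry count by an assumed occlusion rate.
def corrected_count(visible_count: int, occlusion_rate: float) -> float:
    """Estimate total berries when a fraction `occlusion_rate` is hidden."""
    assert 0.0 <= occlusion_rate < 1.0
    return visible_count / (1.0 - occlusion_rate)

print(corrected_count(42, 0.25))  # 42 visible, 25% assumed hidden -> 56.0
```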
Optimizing Deep Learning Models for Climate-Related Natural Disaster Detection from UAV Images and Remote Sensing Data.
IF 2.7
Journal of Imaging Pub Date: 2025-01-24 DOI: 10.3390/jimaging11020032
Kim VanExel, Samendra Sherchan, Siyan Liu
{"title":"Optimizing Deep Learning Models for Climate-Related Natural Disaster Detection from UAV Images and Remote Sensing Data.","authors":"Kim VanExel, Samendra Sherchan, Siyan Liu","doi":"10.3390/jimaging11020032","DOIUrl":"10.3390/jimaging11020032","url":null,"abstract":"<p><p>This research study utilized artificial intelligence (AI) to detect natural disasters from aerial images. Flooding and desertification were two natural disasters taken into consideration. The Climate Change Dataset was created by compiling various open-access data sources. This dataset contains 6334 aerial images from UAV (unmanned aerial vehicles) images and satellite images. The Climate Change Dataset was then used to train Deep Learning (DL) models to identify natural disasters. Four different Machine Learning (ML) models were used: convolutional neural network (CNN), DenseNet201, VGG16, and ResNet50. These ML models were trained on our Climate Change Dataset so that their performance could be compared. DenseNet201 was chosen for optimization. All four ML models performed well. DenseNet201 and ResNet50 achieved the highest testing accuracies of 99.37% and 99.21%, respectively. This research project demonstrates the potential of AI to address environmental challenges, such as climate change-related natural disasters. This study's approach is novel by creating a new dataset, optimizing an ML model, cross-validating, and presenting desertification as one of our natural disasters for DL detection. Three categories were used (Flooded, Desert, Neither). Our study relates to AI for Climate Change and Environmental Sustainability. Drone emergency response would be a practical application for our research project.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856490/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
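A minimal transfer-learning sketch in the spirit of the study: a frozen, ImageNet-pretrained DenseNet201 backbone with a new three-way head for the Flooded/Desert/Neither categories. The input size, head design, and training settings are assumptions, not the paper's exact configuration.

```python
# Minimal sketch: DenseNet201 backbone + 3-way classification head in Keras.
import tensorflow as tf

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg"
)
base.trainable = False  # freeze the backbone; optionally fine-tune later

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),  # Flooded, Desert, Neither
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```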
A Method for Estimating Fluorescence Emission Spectra from the Image Data of Plant Grain and Leaves Without a Spectrometer.
IF 2.7
Journal of Imaging Pub Date: 2025-01-21 DOI: 10.3390/jimaging11020030
Shoji Tominaga, Shogo Nishi, Ryo Ohtera, Hideaki Sakai
{"title":"A Method for Estimating Fluorescence Emission Spectra from the Image Data of Plant Grain and Leaves Without a Spectrometer.","authors":"Shoji Tominaga, Shogo Nishi, Ryo Ohtera, Hideaki Sakai","doi":"10.3390/jimaging11020030","DOIUrl":"10.3390/jimaging11020030","url":null,"abstract":"<p><p>This study proposes a method for estimating the spectral images of fluorescence spectral distributions emitted from plant grains and leaves without using a spectrometer. We construct two types of multiband imaging systems with six channels, using ordinary off-the-shelf cameras and a UV light. A mobile phone camera is used to detect the fluorescence emission in the blue wavelength region of rice grains. For plant leaves, a small monochrome camera is used with additional optical filters to detect chlorophyll fluorescence in the red-to-far-red wavelength region. A ridge regression approach is used to obtain a reliable estimate of the spectral distribution of the fluorescence emission at each pixel point from the acquired image data. The spectral distributions can be estimated by optimally selecting the ridge parameter without statistically analyzing the fluorescence spectra. An algorithm for optimal parameter selection is developed using a cross-validation technique. In experiments using real rice grains and green leaves, the estimated fluorescence emission spectral distributions by the proposed method are compared to the direct measurements obtained with a spectroradiometer and the estimates obtained using the minimum norm estimation method. The estimated images of fluorescence emissions are presented for rice grains and green leaves. The reliability of the proposed estimation method is demonstrated.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856269/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
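The per-pixel estimation step can be sketched as an underdetermined ridge solution, recovering a spectrum s from a six-channel response c = As, with the ridge parameter chosen by held-out validation. The sensitivity matrix, wavelength count, and parameter grid below are illustrative assumptions, not the authors' calibration.

```python
# Minimal sketch: ridge recovery of a spectrum from 6 camera channels,
# with the ridge parameter lam picked by leave-one-channel-out validation.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_wavelengths = 6, 61
A = rng.uniform(size=(n_channels, n_wavelengths))  # stand-in spectral sensitivities
s_true = np.exp(-0.5 * ((np.arange(n_wavelengths) - 30) / 6.0) ** 2)  # emission peak
c = A @ s_true + rng.normal(scale=0.01, size=n_channels)              # noisy response

def ridge_estimate(A, c, lam):
    # s_hat = A^T (A A^T + lam I)^{-1} c : underdetermined ridge solution
    return A.T @ np.linalg.solve(A @ A.T + lam * np.eye(A.shape[0]), c)

def loo_score(lam):
    err = 0.0
    for k in range(n_channels):                    # hold out one channel at a time
        keep = np.arange(n_channels) != k
        s_hat = ridge_estimate(A[keep], c[keep], lam)
        err += (A[k] @ s_hat - c[k]) ** 2
    return err

lams = np.logspace(-4, 1, 20)
best = min(lams, key=loo_score)
s_hat = ridge_estimate(A, c, best)
print(f"lam={best:.4g}, corr with true spectrum={np.corrcoef(s_hat, s_true)[0, 1]:.3f}")
```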
Remote Sensing Target Tracking Method Based on Super-Resolution Reconstruction and Hybrid Networks.
IF 2.7
Journal of Imaging Pub Date: 2025-01-21 DOI: 10.3390/jimaging11020029
Hongqing Wan, Sha Xu, Yali Yang, Yongfang Li
{"title":"Remote Sensing Target Tracking Method Based on Super-Resolution Reconstruction and Hybrid Networks.","authors":"Hongqing Wan, Sha Xu, Yali Yang, Yongfang Li","doi":"10.3390/jimaging11020029","DOIUrl":"10.3390/jimaging11020029","url":null,"abstract":"<p><p>Remote sensing images have the characteristics of high complexity, being easily distorted, and having large-scale variations. Moreover, the motion of remote sensing targets usually has nonlinear features, and existing target tracking methods based on remote sensing data cannot accurately track remote sensing targets. And obtaining high-resolution images by optimizing algorithms will save a lot of costs. Aiming at the problem of large tracking errors in remote sensing target tracking by current tracking algorithms, this paper proposes a target tracking method combined with a super-resolution hybrid network. Firstly, this method utilizes the super-resolution reconstruction network to improve the resolution of remote sensing images. Then, the hybrid neural network is used to estimate the target motion after target detection. Finally, identity matching is completed through the Hungarian algorithm. The experimental results show that the tracking accuracy of this method is 67.8%, and the recognition identification F-measure (IDF1) value is 0.636. Its performance indicators are better than those of traditional target tracking algorithms, and it can meet the requirements for accurate tracking of remote sensing targets.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856348/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
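The identity-matching step maps onto a standard assignment problem. The following is a minimal sketch using scipy's Hungarian solver on an IoU-based cost matrix between existing tracks and new detections; the box coordinates are illustrative, and the cost design is a common convention rather than the paper's exact formulation.

```python
# Minimal sketch: Hungarian matching of tracks to detections via 1 - IoU cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):  # boxes as (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

tracks = [(10, 10, 50, 50), (80, 80, 120, 120)]
detections = [(82, 78, 121, 119), (12, 11, 51, 52)]

cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
rows, cols = linear_sum_assignment(cost)   # minimizes total (1 - IoU)
print([(int(r), int(c)) for r, c in zip(rows, cols)])  # -> [(0, 1), (1, 0)]
```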
Plant Detection in RGB Images from Unmanned Aerial Vehicles Using Segmentation by Deep Learning and an Impact of Model Accuracy on Downstream Analysis.
IF 2.7
Journal of Imaging Pub Date: 2025-01-20 DOI: 10.3390/jimaging11010028
Mikhail V Kozhekin, Mikhail A Genaev, Evgenii G Komyshev, Zakhar A Zavyalov, Dmitry A Afonnikov
{"title":"Plant Detection in RGB Images from Unmanned Aerial Vehicles Using Segmentation by Deep Learning and an Impact of Model Accuracy on Downstream Analysis.","authors":"Mikhail V Kozhekin, Mikhail A Genaev, Evgenii G Komyshev, Zakhar A Zavyalov, Dmitry A Afonnikov","doi":"10.3390/jimaging11010028","DOIUrl":"10.3390/jimaging11010028","url":null,"abstract":"<p><p>Crop field monitoring using unmanned aerial vehicles (UAVs) is one of the most important technologies for plant growth control in modern precision agriculture. One of the important and widely used tasks in field monitoring is plant stand counting. The accurate identification of plants in field images provides estimates of plant number per unit area, detects missing seedlings, and predicts crop yield. Current methods are based on the detection of plants in images obtained from UAVs by means of computer vision algorithms and deep learning neural networks. These approaches depend on image spatial resolution and the quality of plant markup. The performance of automatic plant detection may affect the efficiency of downstream analysis of a field cropping pattern. In the present work, a method is presented for detecting the plants of five species in images acquired via a UAV on the basis of image segmentation by deep learning algorithms (convolutional neural networks). Twelve orthomosaics were collected and marked at several sites in Russia to train and test the neural network algorithms. Additionally, 17 existing datasets of various spatial resolutions and markup quality levels from the Roboflow service were used to extend training image sets. Finally, we compared several texture features between manually evaluated and neural-network-estimated plant masks. It was demonstrated that adding images to the training sample (even those of lower resolution and markup quality) improves plant stand counting significantly. The work indicates how the accuracy of plant detection in field images may affect their cropping pattern evaluation by means of texture characteristics. For some of the characteristics (GLCM mean, GLRM long run, GLRM run ratio) the estimates between images marked manually and automatically are close. For others, the differences are large and may lead to erroneous conclusions about the properties of field cropping patterns. Nonetheless, overall, plant detection algorithms with a higher accuracy show better agreement with the estimates of texture parameters obtained from manually marked images.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11766541/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143034562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
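GLCM texture statistics like those compared in the study can be computed with scikit-image; a minimal sketch on a synthetic patch follows. The run-length (GLRM) features also named above are not provided by scikit-image, so only GLCM-family properties are shown, and the patch here is a random stand-in for a masked crop region.

```python
# Minimal sketch: GLCM texture statistics from an image patch via scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in crop patch

glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "correlation"):
    print(prop, graycoprops(glcm, prop)[0, 0])
```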