Proceedings. International Conference on Image Processing (Latest Articles)

A PHYSICS-GUIDED SMOOTHING METHOD FOR MATERIAL MODELING WITH DIGITAL IMAGE CORRELATION (DIC) MEASUREMENTS.
Proceedings. International Conference on Image Processing. Pub Date: 2025-09-01. Epub Date: 2025-08-18. DOI: 10.1109/icip55913.2025.11084372
Jihong Wang, Chung-Hao Lee, William Richardson, Yue Yu
Abstract: In this work, we present a novel approach to processing the DIC measurements of multiple biaxial stretching protocols. In particular, we develop an optimization-based approach that calculates smoothed nodal displacements using a moving least-squares algorithm subject to positive strain constraints, so that physically consistent displacement and strain fields are obtained. We then deploy a data-driven workflow for heterogeneous material modeling from these physically consistent DIC measurements, estimating a nonlocal constitutive law together with the material microstructure. To demonstrate the applicability of our approach, we apply it to learning a material model and fiber orientation field from DIC measurements of a porcine tricuspid valve anterior leaflet. Our results demonstrate that the proposed DIC data processing approach can significantly improve the accuracy of modeling biological materials.
Volume 2025, pages 2654-2659. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12381652/pdf/
Citations: 0
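As a rough illustration of the moving least-squares smoothing step described in this abstract, the sketch below fits a Gaussian-weighted local linear model around each node, in plain NumPy. This is our own minimal version: the function name is hypothetical, and the paper's positive-strain constraints and nonlocal constitutive learning are omitted.

```python
import numpy as np

def mls_smooth(coords, values, radius=1.5):
    """Moving least-squares smoothing of scattered nodal data: at each
    node, fit a local linear model u(x) = a + b.(x - x0) to all nodes,
    weighted by a Gaussian kernel, and keep the constant term a."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    smoothed = np.empty_like(values)
    for i, x0 in enumerate(coords):
        w = np.exp(-np.sum((coords - x0) ** 2, axis=1) / radius ** 2)
        sw = np.sqrt(w)
        # Weighted least squares with linear basis [1, x - x0, y - y0];
        # a linear basis reproduces constant and linear fields exactly.
        A = np.hstack([np.ones((len(coords), 1)), coords - x0])
        beta, *_ = np.linalg.lstsq(A * sw[:, None], values * sw, rcond=None)
        smoothed[i] = beta[0]
    return smoothed
```

Enforcing the positive strain constraints of the paper would replace the unconstrained least-squares solve with a constrained optimization over the nodal displacements.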
Learning From PU Data Using Disentangled Representations.
Proceedings. International Conference on Image Processing. Pub Date: 2025-09-01. Epub Date: 2025-08-18. DOI: 10.1109/icip55913.2025.11084723
Omar Zamzam, Haleh Akrami, Mahdi Soltanolkotabi, Richard Leahy
Abstract: We address the problem of learning a binary classifier given partially labeled data where all labeled samples come from only one of the classes, commonly known as Positive Unlabeled (PU) learning. Classical methods such as clustering, out-of-distribution detection, and positive density estimation, while effective in low-dimensional settings, lose their efficacy as the dimensionality of the data increases because of the growing complexity. This has led to methods that address the problem in high-dimensional spaces; however, many of these are also impacted by the complexity inherent in high-dimensional data. The contribution of this paper is a neural network-based data representation, learned with a loss function that projects unlabeled data into two distinct clusters (positive and negative), so that they can be identified with basic clustering techniques, mirroring the simplicity of the problem in low-dimensional settings. We further enhance this separation of the unlabeled data clusters with a vector quantization strategy. Our experimental results on benchmark PU datasets validate the superiority of our method over existing state-of-the-art techniques. Additionally, we provide theoretical justification for our cluster-based approach and algorithmic choices.
Volume 2025, pages 1624-1629. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12503129/pdf/
Citations: 0
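The clustering idea in this abstract can be illustrated on a low-dimensional toy problem, where the final step (splitting the unlabeled pool into two clusters and calling the cluster nearer the labeled positives "positive") already works without any learned representation. The sketch below is our own simplification, not the paper's method; all function names are hypothetical.

```python
import numpy as np

def two_means(X, iters=50):
    """Minimal 2-means clustering; centres are seeded at the two
    extreme points along the coordinate sum (cheap deterministic init)."""
    X = np.asarray(X, float)
    s = X.sum(axis=1)
    centers = np.stack([X[s.argmin()], X[s.argmax()]]).copy()
    for _ in range(iters):
        # Assign each point to its nearest centre, then recompute centres.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels, centers

def pu_cluster_classify(X_pos, X_unlabeled):
    """Split the unlabeled pool into two clusters and label the cluster
    whose centre is closer to the labeled positives as positive (1)."""
    labels, centers = two_means(X_unlabeled)
    pos_cluster = np.linalg.norm(
        centers - np.asarray(X_pos, float).mean(axis=0), axis=1).argmin()
    return (labels == pos_cluster).astype(int)
```

In high dimensions this naive split fails, which is exactly the gap the paper's learned, disentangled representation is meant to close before the clustering step.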
DISCO: A DIFFUSION MODEL FOR SPATIAL TRANSCRIPTOMICS DATA COMPLETION.
Proceedings. International Conference on Image Processing. Pub Date: 2025-09-01. Epub Date: 2025-08-18. DOI: 10.1109/icip55913.2025.11084277
Ziheng Duan, Xi Li, Zhuoyang Zhang, James Song, Jing Zhang
Abstract: Spatial transcriptomics enables the study of gene expression within the spatial context of tissues, offering valuable insights into tissue organization and function. However, technical limitations can result in large missing regions of data, which hinder accurate downstream analyses and biological interpretation. To address these challenges, we propose DISCO (DIffusion model for Spatial transcriptomics data COmpletion), a framework with three key features. First, DISCO employs a graph neural network-based region encoder to integrate spatial and gene expression information from observed regions, generating latent representations that guide the prediction of missing regions. Second, it uses two diffusion modules: a position diffusion module to predict the spatial layout of missing regions, and a gene expression diffusion module to generate gene expression profiles conditioned on the predicted coordinates. Third, DISCO incorporates neighboring region information during inference to guide the denoising process, ensuring smooth transitions and biologically coherent results. We validate DISCO across multiple sequencing platforms, species, and datasets, demonstrating its effectiveness in reconstructing large missing regions. DISCO is implemented as open-source software, providing researchers with a powerful tool to enhance data completeness and advance spatial transcriptomics research.
Volume 2025, pages 19-24. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12407492/pdf/
Citations: 0
LOCALIZING MOMENTS OF ACTIONS IN UNTRIMMED VIDEOS OF INFANTS WITH AUTISM SPECTRUM DISORDER.
Proceedings. International Conference on Image Processing. Pub Date: 2024-10-01. Epub Date: 2024-09-27. DOI: 10.1109/icip51287.2024.10648046
Halil Ismail Helvaci, Sen-Ching Samson Cheung, Chen-Nee Chuah, Sally Ozonoff
Abstract: Autism Spectrum Disorder (ASD) presents significant challenges in early diagnosis and intervention, impacting children and their families. With prevalence rates rising, there is a critical need for accessible and efficient screening tools. Leveraging machine learning (ML) techniques, in particular Temporal Action Localization (TAL), holds promise for automating ASD screening. This paper introduces a self-attention based TAL model designed to identify ASD-related behaviors in infant videos. Unlike existing methods, our approach simplifies complex modeling and emphasizes efficiency, which is essential for practical deployment in real-world scenarios. Importantly, this work underscores the importance of developing computer vision methods capable of operating in naturalistic environments with little equipment control, addressing key challenges in ASD screening. This study is the first to conduct end-to-end temporal action localization in untrimmed videos of infants with ASD, offering promising avenues for early intervention and support. We report baseline results of behavior detection using our TAL model, achieving 70% accuracy for "look face", 79% for "look object", 72% for "smile", and 65% for "vocalization".
Volume 2024, pages 3841-3847. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12539604/pdf/
Citations: 0
DEEP ACTIVE LEARNING FOR CRYO-ELECTRON TOMOGRAPHY CLASSIFICATION.
Proceedings. International Conference on Image Processing. Pub Date: 2022-10-01. DOI: 10.1109/icip46576.2022.9898002
Tianyang Wang, Bo Li, Jing Zhang, Xiangrui Zeng, Mostofa Rafid Uddin, Wei Wu, Min Xu
Abstract: Cryo-Electron Tomography (cryo-ET) is an emerging 3D imaging technique that shows great potential in structural biology research. One of the main challenges is the classification of macromolecules captured by cryo-ET. Recent efforts exploit deep learning to address this challenge. However, training reliable deep models usually requires a huge amount of labeled data in a supervised fashion, and annotating cryo-ET data is very expensive. Deep Active Learning (DAL) can be used to reduce labeling cost without sacrificing much task performance. Nevertheless, most existing methods resort to auxiliary models or complex schemes (e.g. adversarial learning) for uncertainty estimation, the core of DAL. These models need to be highly customized for cryo-ET tasks, which require 3D networks, and extra effort is indispensable for tuning them, making deployment on cryo-ET tasks difficult. To address these challenges, we propose a novel metric for data selection in DAL, which can also be leveraged as a regularizer of the empirical loss, further boosting the task model. We demonstrate the superiority of our method via extensive experiments on both simulated and real cryo-ET datasets. Our source Code and Appendix can be found at this URL.
Volume 2022, pages 1611-1615. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10072314/pdf/nihms-1882159.pdf
Citations: 0
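The paper's selection metric is not given in the abstract; a common baseline for uncertainty-based data selection in DAL is predictive entropy over the task model's class posteriors, sketched below. The function names are ours, not from the paper.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of per-sample class posteriors; higher values
    mean the model is less certain about that sample."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_batch(probs, k):
    """Pick the k most uncertain unlabeled samples for annotation."""
    return np.argsort(-predictive_entropy(probs))[:k]
```

Unlike adversarial or auxiliary-model approaches, a metric computed directly from the task model's outputs needs no extra networks, which is the deployment advantage the abstract emphasizes.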
JOINT MOTION CORRECTION AND 3D SEGMENTATION WITH GRAPH-ASSISTED NEURAL NETWORKS FOR RETINAL OCT.
Proceedings. International Conference on Image Processing. Pub Date: 2022-10-01. DOI: 10.1109/icip46576.2022.9898072
Yiqian Wang, Carlo Galang, William R Freeman, Truong Q Nguyen, Cheolhong An
Abstract: Optical Coherence Tomography (OCT) is a widely used non-invasive high-resolution 3D imaging technique for biological tissues and plays an important role in ophthalmology. OCT retinal layer segmentation is a fundamental image processing step for OCT-Angiography projection and disease analysis. A major problem in retinal imaging is the motion artifacts introduced by involuntary eye movements. In this paper, we propose neural networks that jointly correct eye motion and segment retinal layers utilizing 3D OCT information, so that the segmentation among neighboring B-scans is consistent. The experimental results show both visual and quantitative improvements from combining motion correction and 3D OCT layer segmentation, compared to conventional and deep-learning-based 2D OCT layer segmentation.
Volume 2022, pages 766-770. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280808/pdf/nihms-1908719.pdf
Citations: 1
UNSUPERVISED VIDEO SEGMENTATION ALGORITHMS BASED ON FLEXIBLY REGULARIZED MIXTURE MODELS.
Proceedings. International Conference on Image Processing. Pub Date: 2022-10-01. Epub Date: 2022-10-18. DOI: 10.1109/icip46576.2022.9897691
Claire Launay, Jonathan Vacher, Ruben Coen-Cagli
Abstract: We propose a family of probabilistic segmentation algorithms for videos that rely on a generative model capturing static and dynamic natural image statistics. Our framework adopts flexibly regularized mixture models (FlexMM) [1], an efficient method to combine mixture distributions across different data sources. FlexMMs of Student-t distributions successfully segment static natural images through uncertainty-based information sharing between hidden layers of CNNs. We extend this approach to videos and exploit FlexMM to propagate segment labels across space and time. We show that temporal propagation improves the temporal consistency of segmentation, qualitatively reproducing a key aspect of human perceptual grouping. Moreover, Student-t distributions can capture the statistics of optical flow in natural movies, which represents apparent motion in the video. Integrating these motion cues into our temporal FlexMM further enhances the segmentation of each frame of natural movies. Our probabilistic dynamic segmentation algorithms thus provide a new framework to study uncertainty in human dynamic perceptual segmentation.
Pages 4073-4077. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9670685/pdf/nihms-1849113.pdf
Citations: 0
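As a much-simplified stand-in for the Student-t FlexMM, a two-component Gaussian mixture fitted with EM already illustrates probabilistic pixel grouping over intensities, with per-pixel posteriors playing the role of soft segment labels. This is our own sketch; the paper's Student-t distributions, CNN hidden layers, and spatio-temporal label propagation are all omitted.

```python
import numpy as np

def em_gmm_segment(pixels, iters=50):
    """EM for a two-component 1-D Gaussian mixture over pixel
    intensities; returns the per-pixel posterior of component 1
    (the brighter component, given the min/max initialization)."""
    x = np.asarray(pixels, float)
    mu = np.array([x.min(), x.max()])          # one centre per extreme
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each pixel.
        lik = pi / np.sqrt(2 * np.pi * var) * np.exp(
            -(x[:, None] - mu) ** 2 / (2 * var))
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted means, variances, proportions.
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
        pi = n / len(x)
    return r[:, 1]
```

Heavier-tailed Student-t components, as used in the paper, make such mixtures far more robust to the outliers common in natural image and optical-flow statistics.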
LEARNING TO CORRECT AXIAL MOTION IN OCT FOR 3D RETINAL IMAGING.
Proceedings. International Conference on Image Processing. Pub Date: 2021-09-01. Epub Date: 2021-08-23. DOI: 10.1109/icip42928.2021.9506620
Yiqian Wang, Alexandra Warter, Melina Cavichini-Cordeiro, William R Freeman, Dirk-Uwe G Bartsch, Truong Q Nguyen, Cheolhong An
Abstract: Optical Coherence Tomography (OCT) is a powerful technique for non-invasive 3D imaging of biological tissues at high resolution that has revolutionized retinal imaging. A major challenge in OCT imaging is the motion artifacts introduced by involuntary eye movements. In this paper, we propose a convolutional neural network that learns to correct axial motion in OCT based on a single volumetric scan. The proposed method is able to correct large motion, while preserving the overall curvature of the retina. The experimental results show significant improvements in visual quality as well as overall error compared to the conventional methods in both normal and disease cases.
Pages 126-130. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9359411/pdf/nihms-1823145.pdf
Citations: 0
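A classical baseline for the axial motion problem described here aligns each A-scan (image column) to its neighbor by the integer shift that maximizes cross-correlation; the paper replaces such heuristics with a learned CNN that also preserves retinal curvature. The sketch below is our own illustration of the baseline, with hypothetical names.

```python
import numpy as np

def axial_shift(ref, col, max_shift=10):
    """Integer axial shift (in pixels) that best aligns `col` to `ref`,
    chosen by maximizing the cross-correlation over candidate shifts."""
    best, best_s = -np.inf, 0
    for s in range(-max_shift, max_shift + 1):
        score = float(np.dot(ref, np.roll(col, s)))
        if score > best:
            best, best_s = score, s
    return best_s

def correct_axial_motion(bscan, max_shift=10):
    """Align each A-scan (column) of a B-scan to the previously
    corrected column, propagating the alignment left to right."""
    out = np.asarray(bscan, float).copy()
    for j in range(1, out.shape[1]):
        s = axial_shift(out[:, j - 1], out[:, j], max_shift)
        out[:, j] = np.roll(out[:, j], s)
    return out
```

Such pairwise alignment flattens genuine anatomical curvature along with the motion, which is one of the shortcomings the learned approach addresses.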
A DEEP LEARNING-BASED CAD SYSTEM FOR RENAL ALLOGRAFT ASSESSMENT: DIFFUSION, BOLD, AND CLINICAL BIOMARKERS.
Proceedings. International Conference on Image Processing. Pub Date: 2020-10-01. Epub Date: 2020-09-30. DOI: 10.1109/ICIP40778.2020.9190818
Mohamed Shehata, Mohammed Ghazal, Hadil Abu Khalifeh, Ashraf Khalil, Ahmed Shalaby, Amy C Dwyer, Ashraf M Bakr, Robert Keynton, Ayman El-Baz
Abstract: Recently, non-invasive renal transplant evaluation has been explored as a way to control allograft rejection. In this paper, a computer-aided diagnostic system, called RT-CAD, is developed for early-stage renal transplant status assessment. The system integrates multiple sources for a more accurate diagnosis: two image-based sources and two clinical sources. The image-based sources are apparent diffusion coefficients (ADCs) and the amount of deoxygenated hemoglobin (R2*). The ADCs were extracted from 47 diffusion-weighted magnetic resonance imaging (DW-MRI) scans at 11 different b-values (b0, b50, b100, …, b1000 s/mm²), while the R2* values were extracted from 30 blood oxygen level-dependent MRI (BOLD-MRI) scans at 5 different echo times (2 ms, 7 ms, 12 ms, 17 ms, and 22 ms). The clinical sources are serum creatinine (SCr) and creatinine clearance (CrCl). First, the kidney is segmented by the RT-CAD system using a geometric deformable model (a level-set method). Second, ADCs and R2* are estimated for the patients common to both cohorts (N = 30) and integrated with the corresponding SCr and CrCl. Last, these integrated biomarkers serve as discriminatory features for training and testing deep learning-based classifiers such as stacked auto-encoders (SAEs). We used a k-fold cross-validation criterion to evaluate the diagnostic performance of the RT-CAD system, which achieved 93.3% accuracy, 90.0% sensitivity, and 95.0% specificity in differentiating between acute renal rejection (AR) and non-rejection (NR). The reliability of the RT-CAD system is further supported by an area under the curve of 0.92. These results indicate that the presented RT-CAD system can diagnose renal transplant status non-invasively with high reliability.
Volume 2020, pages 355-359.
Citations: 3
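The reported accuracy, sensitivity, and specificity follow directly from the confusion matrix of the AR/NR classifier. A minimal helper (names ours, not from the paper) computes them for a binary labeling where 1 = rejection:

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate), and specificity
    (true-negative rate) from binary ground truth and predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```

In a k-fold protocol like the paper's, these counts would be accumulated over the held-out folds before computing the three ratios.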
RTIP: A FULLY AUTOMATED ROOT TIP TRACKER FOR MEASURING PLANT GROWTH WITH INTERMITTENT PERTURBATIONS.
Proceedings. International Conference on Image Processing. Pub Date: 2020-10-01. Epub Date: 2020-09-30. DOI: 10.1109/icip40778.2020.9191008
Deniz Kavzak Ufuktepe, Kannappan Palaniappan, Melissa Elmali, Tobias I Baskin
Abstract: RTip is a tool to quantify plant root growth velocity from high-resolution microscopy image sequences at sub-pixel accuracy. The fully automated RTip tracker is designed for high-throughput analysis of plant phenotyping experiments with episodic perturbations. RTip is able to automatically skip past these manual-intervention perturbations, i.e. intervals when the root tip is not under the microscope or the image is distorted or blurred. RTip provides the most accurate root growth velocity results with the lowest variance (i.e. localization jitter) compared to six tracking algorithms, including the top-performing unsupervised Discriminative Correlation Filter Tracker and the Deeper and Wider Siamese Network. RTip is the only tracker able to automatically detect and recover from (occlusion-like) perturbation events of varying duration.
Volume 2020, pages 2516-2520. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8033648/pdf/nihms-1685784.pdf
Citations: 0