Proceedings. IEEE International Conference on Computer Vision: Latest Publications

Enhancing Modality-Agnostic Representations via Meta-learning for Brain Tumor Segmentation.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2023-10-01 DOI: 10.1109/iccv51070.2023.01958
Aishik Konwer, Xiaoling Hu, Joseph Bae, Xuan Xu, Chao Chen, Prateek Prasanna
{"title":"Enhancing Modality-Agnostic Representations via Meta-learning for Brain Tumor Segmentation.","authors":"Aishik Konwer, Xiaoling Hu, Joseph Bae, Xuan Xu, Chao Chen, Prateek Prasanna","doi":"10.1109/iccv51070.2023.01958","DOIUrl":"10.1109/iccv51070.2023.01958","url":null,"abstract":"<p><p>In medical vision, different imaging modalities provide complementary information. However, in practice, not all modalities may be available during inference or even training. Previous approaches, e.g., knowledge distillation or image synthesis, often assume the availability of full modalities for all subjects during training; this is unrealistic and impractical due to the variability in data collection across sites. We propose a novel approach to learn enhanced modality-agnostic representations by employing a meta-learning strategy in training, even when only limited full modality samples are available. Meta-learning enhances partial modality representations to full modality representations by meta-training on partial modality data and meta-testing on limited full modality samples. Additionally, we co-supervise this feature enrichment by introducing an auxiliary adversarial learning branch. More specifically, a missing modality detector is used as a discriminator to mimic the full modality setting. Our segmentation framework significantly outperforms state-of-the-art brain tumor segmentation techniques in missing modality scenarios.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2023 ","pages":"21358-21368"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11087061/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140913360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SimpleClick: Interactive Image Segmentation with Simple Vision Transformers.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2023-10-01 DOI: 10.1109/iccv51070.2023.02037
Qin Liu, Zhenlin Xu, Gedas Bertasius, Marc Niethammer
{"title":"SimpleClick: Interactive Image Segmentation with Simple Vision Transformers.","authors":"Qin Liu, Zhenlin Xu, Gedas Bertasius, Marc Niethammer","doi":"10.1109/iccv51070.2023.02037","DOIUrl":"10.1109/iccv51070.2023.02037","url":null,"abstract":"<p><p>Click-based interactive image segmentation aims at extracting objects with a limited user clicking. A hierarchical backbone is the <i>de-facto</i> architecture for current methods. Recently, the plain, non-hierarchical Vision Transformer (ViT) has emerged as a competitive backbone for dense prediction tasks. This design allows the original ViT to be a foundation model that can be finetuned for downstream tasks without redesigning a hierarchical backbone for pretraining. Although this design is simple and has been proven effective, it has not yet been explored for interactive image segmentation. To fill this gap, we propose SimpleClick, the first interactive segmentation method that leverages a plain backbone. Based on the plain backbone, we introduce a symmetric patch embedding layer that encodes clicks into the backbone with minor modifications to the backbone itself. With the plain backbone pretrained as a masked autoencoder (MAE), SimpleClick achieves state-of-the-art performance. Remarkably, our method achieves <b>4.15</b> NoC@90 on SBD, improving <b>21.8%</b> over the previous best result. Extensive evaluation on medical images demonstrates the generalizability of our method. We provide a detailed computational analysis, highlighting the suitability of our method as a practical annotation tool.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2023 ","pages":"22233-22243"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11378330/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PGFed: Personalize Each Client's Global Objective for Federated Learning.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2023-10-01 Epub Date: 2024-01-15 DOI: 10.1109/iccv51070.2023.00365
Jun Luo, Matias Mendieta, Chen Chen, Shandong Wu
{"title":"PGFed: Personalize Each Client's Global Objective for Federated Learning.","authors":"Jun Luo, Matias Mendieta, Chen Chen, Shandong Wu","doi":"10.1109/iccv51070.2023.00365","DOIUrl":"https://doi.org/10.1109/iccv51070.2023.00365","url":null,"abstract":"<p><p>Personalized federated learning has received an upsurge of attention due to the mediocre performance of conventional federated learning (FL) over heterogeneous data. Unlike conventional FL which trains a single global consensus model, personalized FL allows different models for different clients. However, existing personalized FL algorithms only <b>implicitly</b> transfer the collaborative knowledge across the federation by embedding the knowledge into the aggregated model or regularization. We observed that this implicit knowledge transfer fails to maximize the potential of each client's empirical risk toward other clients. Based on our observation, in this work, we propose <b>P</b>ersonalized <b>G</b>lobal <b>Fed</b>erated Learning (PGFed), a novel personalized FL framework that enables each client to <b>personalize</b> its own <b>global</b> objective by <b>explicitly</b> and adaptively aggregating the empirical risks of itself and other clients. To avoid massive <math><mrow><mrow><mo>(</mo><mrow><mi>O</mi><mrow><mo>(</mo><mrow><msup><mi>N</mi><mn>2</mn></msup></mrow><mo>)</mo></mrow></mrow><mo>)</mo></mrow></mrow></math> communication overhead and potential privacy leakage while achieving this, each client's risk is estimated through a first-order approximation for other clients' adaptive risk aggregation. On top of PGFed, we develop a momentum upgrade, dubbed PGFedMo, to more efficiently utilize clients' empirical risks. Our extensive experiments on four datasets under different federated settings show consistent improvements of PGFed over previous state-of-the-art methods. The code is publicly available at https://github.com/ljaiverson/pgfed.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2023 ","pages":"3923-3933"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11024864/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140853842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Devil is in the Upsampling: Architectural Decisions Made Simpler for Denoising with Deep Image Prior.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2023-10-01 Epub Date: 2024-01-15 DOI: 10.1109/ICCV51070.2023.01140
Yilin Liu, Jiang Li, Yunkui Pang, Dong Nie, Pew-Thian Yap
{"title":"The Devil is in the Upsampling: Architectural Decisions Made Simpler for Denoising with Deep Image Prior.","authors":"Yilin Liu, Jiang Li, Yunkui Pang, Dong Nie, Pew-Thian Yap","doi":"10.1109/ICCV51070.2023.01140","DOIUrl":"10.1109/ICCV51070.2023.01140","url":null,"abstract":"<p><p>Deep Image Prior (DIP) shows that some network architectures inherently tend towards generating smooth images while resisting noise, a phenomenon known as spectral bias. Image denoising is a natural application of this property. Although denoising with DIP mitigates the need for large training sets, two often intertwined practical challenges need to be overcome: architectural design and noise fitting. Existing methods either handcraft or search for suitable architectures from a vast design space, due to the limited understanding of how architectural choices affect the denoising outcome. In this study, we demonstrate from a frequency perspective that unlearnt upsampling is the main driving force behind the denoising phenomenon with DIP. This finding leads to straightforward strategies for identifying a suitable architecture for every image without laborious search. Extensive experiments show that the estimated architectures achieve superior denoising results than existing methods with up to 95% fewer parameters. Thanks to this under-parameterization, the resulting architectures are less prone to noise-fitting.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2023 ","pages":"12374-12383"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11078028/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140900571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving Representation Learning for Histopathologic Images with Cluster Constraints.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2023-01-01 Epub Date: 2024-01-15 DOI: 10.1109/iccv51070.2023.01957
Weiyi Wu, Chongyang Gao, Joseph DiPalma, Soroush Vosoughi, Saeed Hassanpour
{"title":"Improving Representation Learning for Histopathologic Images with Cluster Constraints.","authors":"Weiyi Wu, Chongyang Gao, Joseph DiPalma, Soroush Vosoughi, Saeed Hassanpour","doi":"10.1109/iccv51070.2023.01957","DOIUrl":"10.1109/iccv51070.2023.01957","url":null,"abstract":"<p><p>Recent advances in whole-slide image (WSI) scanners and computational capabilities have significantly propelled the application of artificial intelligence in histopathology slide analysis. While these strides are promising, current supervised learning approaches for WSI analysis come with the challenge of exhaustively labeling high-resolution slides-a process that is both labor-intensive and timeconsuming. In contrast, self-supervised learning (SSL) pretraining strategies are emerging as a viable alternative, given that they don't rely on explicit data annotations. These SSL strategies are quickly bridging the performance disparity with their supervised counterparts. In this context, we introduce an SSL framework. This framework aims for transferable representation learning and semantically meaningful clustering by synergizing invariance loss and clustering loss in WSI analysis. Notably, our approach outperforms common SSL methods in downstream classification and clustering tasks, as evidenced by tests on the Camelyon16 and a pancreatic cancer dataset. The code and additional details are accessible at https://github.com/wwyi1828/CluSiam.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2023 ","pages":"21347-21357"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11062482/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140872369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning Fixed Points in Generative Adversarial Networks: From Image-to-Image Translation to Disease Detection and Localization.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2019-11-01 Epub Date: 2020-02-27 DOI: 10.1109/iccv.2019.00028
Md Mahfuzur Rahman Siddiquee, Zongwei Zhou, Nima Tajbakhsh, Ruibin Feng, Michael B Gotway, Yoshua Bengio, Jianming Liang
{"title":"Learning Fixed Points in Generative Adversarial Networks: From Image-to-Image Translation to Disease Detection and Localization.","authors":"Md Mahfuzur Rahman Siddiquee,&nbsp;Zongwei Zhou,&nbsp;Nima Tajbakhsh,&nbsp;Ruibin Feng,&nbsp;Michael B Gotway,&nbsp;Yoshua Bengio,&nbsp;Jianming Liang","doi":"10.1109/iccv.2019.00028","DOIUrl":"https://doi.org/10.1109/iccv.2019.00028","url":null,"abstract":"<p><p>Generative adversarial networks (GANs) have ushered in a revolution in image-to-image translation. The development and proliferation of GANs raises an interesting question: can we train a GAN to remove an object, if present, from an image while otherwise preserving the image? Specifically, can a GAN \"virtually heal\" anyone by turning his medical image, with an unknown health status (diseased or healthy), into a healthy one, so that diseased regions could be revealed by subtracting those two images? Such a task requires a GAN to identify a minimal subset of target pixels for domain translation, an ability that we call fixed-point translation, which no GAN is equipped with yet. Therefore, we propose a new GAN, called Fixed-Point GAN, trained by (1) supervising same-domain translation through a conditional identity loss, and (2) regularizing cross-domain translation through revised adversarial, domain classification, and cycle consistency loss. Based on fixed-point translation, we further derive a novel framework for disease detection and localization using only image-level annotation. Qualitative and quantitative evaluations demonstrate that the proposed method outperforms the state of the art in multi-domain image-to-image translation and that it surpasses predominant weakly-supervised localization methods in both disease detection and localization. Implementation is available at https://github.com/jlianglab/Fixed-Point-GAN.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2019 ","pages":"191-200"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/iccv.2019.00028","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38108077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 74
Dilated Convolutional Neural Networks for Sequential Manifold-valued Data.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2019-10-01 Epub Date: 2020-02-27 DOI: 10.1109/iccv.2019.01072
Xingjian Zhen, Rudrasis Chakraborty, Nicholas Vogt, Barbara B Bendlin, Vikas Singh
{"title":"Dilated Convolutional Neural Networks for Sequential Manifold-valued Data.","authors":"Xingjian Zhen, Rudrasis Chakraborty, Nicholas Vogt, Barbara B Bendlin, Vikas Singh","doi":"10.1109/iccv.2019.01072","DOIUrl":"10.1109/iccv.2019.01072","url":null,"abstract":"<p><p>Efforts are underway to study ways via which the power of deep neural networks can be extended to non-standard data types such as structured data (e.g., graphs) or manifold-valued data (e.g., unit vectors or special matrices). Often, sizable empirical improvements are possible when the geometry of such data spaces are incorporated into the design of the model, architecture, and the algorithms. Motivated by neuroimaging applications, we study formulations where the data are sequential manifold-valued measurements. This case is common in brain imaging, where the samples correspond to symmetric positive definite matrices or orientation distribution functions. Instead of a recurrent model which poses computational/technical issues, and inspired by recent results showing the viability of dilated convolutional models for sequence prediction, we develop a dilated convolutional neural network architecture for this task. On the technical side, we show how the modules needed in our network can be derived while explicitly taking the Riemannian manifold structure into account. We show how the operations needed can leverage known results for calculating the weighted Fréchet Mean (wFM). Finally, we present scientific results for group difference analysis in Alzheimer's disease (AD) where the groups are derived using AD pathology load: here the model finds several brain fiber bundles that are related to AD even when the subjects are all still cognitively healthy.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2019 ","pages":"10620-10630"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7220031/pdf/nihms-1058367.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37932355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DUAL-GLOW: Conditional Flow-Based Generative Model for Modality Transfer.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2019-10-01 Epub Date: 2020-02-27 DOI: 10.1109/iccv.2019.01071
Haoliang Sun, Ronak Mehta, Hao H Zhou, Zhichun Huang, Sterling C Johnson, Vivek Prabhakaran, Vikas Singh
{"title":"DUAL-GLOW: Conditional Flow-Based Generative Model for Modality Transfer.","authors":"Haoliang Sun,&nbsp;Ronak Mehta,&nbsp;Hao H Zhou,&nbsp;Zhichun Huang,&nbsp;Sterling C Johnson,&nbsp;Vivek Prabhakaran,&nbsp;Vikas Singh","doi":"10.1109/iccv.2019.01071","DOIUrl":"https://doi.org/10.1109/iccv.2019.01071","url":null,"abstract":"<p><p>Positron emission tomography (PET) imaging is an imaging modality for diagnosing a number of neurological diseases. In contrast to Magnetic Resonance Imaging (MRI), PET is costly and involves injecting a radioactive substance into the patient. Motivated by developments in modality transfer in vision, we study the generation of certain types of PET images from MRI data. We derive new flow-based generative models which we show perform well in this small sample size regime (much smaller than dataset sizes available in standard vision tasks). Our formulation, DUAL-GLOW, is based on two invertible networks and a relation network that maps the latent spaces to each other. We discuss how given the prior distribution, learning the conditional distribution of PET given the MRI image reduces to obtaining the conditional distribution between the two latent codes w.r.t. the two image types. We also extend our framework to leverage \"side\" information (or attributes) when available. By controlling the PET generation through \"conditioning\" on age, our model is also able to capture brain FDG-PET (hypometabolism) changes, as a function of age. We present experiments on the Alzheimers Disease Neuroimaging Initiative (ADNI) dataset with 826 subjects, and obtain good performance in PET image synthesis, qualitatively and quantitatively better than recent works.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2019 ","pages":"10610-10619"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/iccv.2019.01071","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39893370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 40
Scene Graph Prediction with Limited Labels.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2019-10-01 Epub Date: 2020-02-27 DOI: 10.1109/iccv.2019.00267
Vincent S Chen, Paroma Varma, Ranjay Krishna, Michael Bernstein, Christopher Ré, Li Fei-Fei
{"title":"Scene Graph Prediction with Limited Labels.","authors":"Vincent S Chen,&nbsp;Paroma Varma,&nbsp;Ranjay Krishna,&nbsp;Michael Bernstein,&nbsp;Christopher Ré,&nbsp;Li Fei-Fei","doi":"10.1109/iccv.2019.00267","DOIUrl":"https://doi.org/10.1109/iccv.2019.00267","url":null,"abstract":"<p><p>Visual knowledge bases such as Visual Genome power numerous applications in computer vision, including visual question answering and captioning, but suffer from sparse, incomplete relationships. All scene graph models to date are limited to training on a small set of visual relationships that have thousands of training labels each. Hiring human annotators is expensive, and using textual knowledge base completion methods are incompatible with visual data. In this paper, we introduce a semi-supervised method that assigns probabilistic relationship labels to a large number of unlabeled images using few' labeled examples. We analyze visual relationships to suggest two types of image-agnostic features that are used to generate noisy heuristics, whose outputs are aggregated using a factor graph-based generative model. With as few as 10 labeled examples per relationship, the generative model creates enough training data to train any existing state-of-the-art scene graph model. We demonstrate that our method outperforms all baseline approaches on scene graph prediction by 5.16 recall@ 100 for PREDCLS. In our limited label setting, we define a complexity metric for relationships that serves as an indicator (R<sup>2</sup> = 0.778) for conditions under which our method succeeds over transfer learning, the de-facto approach for training with limited labels.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2019 ","pages":"2580-2590"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/iccv.2019.00267","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37776489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Conditional Recurrent Flow: Conditional Generation of Longitudinal Samples with Applications to Neuroimaging.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2019-10-01 Epub Date: 2020-02-27 DOI: 10.1109/iccv.2019.01079
Seong Jae Hwang, Zirui Tao, Won Hwa Kim, Vikas Singh
{"title":"Conditional Recurrent Flow: Conditional Generation of Longitudinal Samples with Applications to Neuroimaging.","authors":"Seong Jae Hwang, Zirui Tao, Won Hwa Kim, Vikas Singh","doi":"10.1109/iccv.2019.01079","DOIUrl":"10.1109/iccv.2019.01079","url":null,"abstract":"<p><p>We develop a conditional generative model for longitudinal image datasets based on sequential invertible neural networks. Longitudinal image acquisitions are common in various scientific and biomedical studies where often each image sequence sample may also come together with various secondary (fixed or temporally dependent) measurements. The key goal is not only to estimate the parameters of a deep generative model for the given longitudinal data, but also to enable evaluation of how the temporal course of the generated longitudinal samples are influenced as a function of induced changes in the (secondary) temporal measurements (or events). Our proposed formulation incorporates recurrent subnetworks and temporal context gating, which provide a smooth transition in a temporal sequence of generated data that can be easily informed or modulated by secondary temporal conditioning variables. We show that the formulation works well despite the smaller sample sizes common in these applications. Our model is validated on two video datasets and a longitudinal Alzheimer's disease (AD) dataset for both quantitative and qualitative evaluations of the generated samples. Further, using our generated longitudinal image samples, we show that we can capture the pathological progressions in the brain that turn out to be consistent with the existing literature, and could facilitate various types of downstream statistical analysis.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2019 ","pages":"10691-10700"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7220239/pdf/nihms-1058360.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37932354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0