Proceedings. International Conference on Image Processing: Latest Publications

DEEP ACTIVE LEARNING FOR CRYO-ELECTRON TOMOGRAPHY CLASSIFICATION.
Proceedings. International Conference on Image Processing. Pub Date: 2022-10-01. DOI: 10.1109/icip46576.2022.9898002
Authors: Tianyang Wang, Bo Li, Jing Zhang, Xiangrui Zeng, Mostofa Rafid Uddin, Wei Wu, Min Xu
Abstract: Cryo-Electron Tomography (cryo-ET) is an emerging 3D imaging technique that shows great potential in structural biology research. One of the main challenges is classifying the macromolecules captured by cryo-ET. Recent efforts exploit deep learning to address this challenge, but training reliable deep models in a supervised fashion usually requires a large amount of labeled data, and annotating cryo-ET data is very expensive. Deep Active Learning (DAL) can reduce the labeling cost without sacrificing much task performance. Nevertheless, most existing methods resort to auxiliary models or complex schemes (e.g., adversarial learning) for uncertainty estimation, the core of DAL. These models must be heavily customized for cryo-ET tasks, which require 3D networks, and tuning them takes extra effort, making them difficult to deploy on cryo-ET tasks. To address these challenges, we propose a novel metric for data selection in DAL, which can also be leveraged as a regularizer of the empirical loss, further boosting the task model. We demonstrate the superiority of our method via extensive experiments on both simulated and real cryo-ET datasets. Our source Code and Appendix can be found at this URL.
Pages: 1611-1615. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10072314/pdf/nihms-1882159.pdf
Citations: 0
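
The abstract does not spell out the proposed selection metric, so the sketch below only illustrates the generic DAL loop it plugs into, using predictive entropy as a stand-in uncertainty score; the dual use of the score as a regularizer of the empirical loss mirrors the idea described above. `model`, `unlabeled_loader`, and `lam` are illustrative names, not the paper's.

```python
# Hypothetical DAL sketch: entropy-based sample selection, with the same
# quantity reused as a regularizer of the task loss. This is a generic
# baseline, not the paper's proposed metric. Assumes a PyTorch classifier.
import torch
import torch.nn.functional as F

def predictive_entropy(logits):
    # H(p) = -sum_c p_c log p_c, per sample, from softmax probabilities.
    log_p = F.log_softmax(logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1)

def select_for_labeling(model, unlabeled_loader, budget, device="cpu"):
    # Rank unlabeled subtomograms by predictive entropy; query the top `budget`.
    model.eval()
    scores = []
    with torch.no_grad():
        for x in unlabeled_loader:
            scores.append(predictive_entropy(model(x.to(device))))
    return torch.cat(scores).topk(budget).indices

def regularized_loss(logits, labels, lam=0.1):
    # Empirical loss plus an entropy penalty that sharpens predictions.
    return F.cross_entropy(logits, labels) + lam * predictive_entropy(logits).mean()
```
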
JOINT MOTION CORRECTION AND 3D SEGMENTATION WITH GRAPH-ASSISTED NEURAL NETWORKS FOR RETINAL OCT.
Proceedings. International Conference on Image Processing. Pub Date: 2022-10-01. DOI: 10.1109/icip46576.2022.9898072
Authors: Yiqian Wang, Carlo Galang, William R Freeman, Truong Q Nguyen, Cheolhong An
Abstract: Optical Coherence Tomography (OCT) is a widely used, non-invasive, high-resolution 3D imaging technique for biological tissues that plays an important role in ophthalmology. OCT retinal layer segmentation is a fundamental image processing step for OCT-Angiography projection and disease analysis. A major problem in retinal imaging is the motion artifacts introduced by involuntary eye movements. In this paper, we propose neural networks that jointly correct eye motion and segment retinal layers using 3D OCT information, so that the segmentation is consistent across neighboring B-scans. The experimental results show both visual and quantitative improvements from combining motion correction with 3D OCT layer segmentation, compared to conventional and deep-learning-based 2D OCT layer segmentation.
Pages: 766-770. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280808/pdf/nihms-1908719.pdf
Citations: 1
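
The graph-assisted network itself is not detailed in the abstract. As background, a classic graph-based ingredient of OCT layer segmentation is extracting a smooth per-B-scan boundary from a pixel-wise probability map with dynamic programming, sketched below; this is the kind of conventional 2D baseline the paper compares against, not its method.

```python
# Illustrative baseline, not the paper's method: extract one retinal layer
# boundary from a per-pixel boundary-probability map by dynamic programming,
# enforcing smoothness between neighboring A-scans (columns).
import numpy as np

def extract_boundary(prob, max_jump=2):
    # prob: (rows, cols) boundary probability for one layer in one B-scan.
    cost = -np.log(prob + 1e-8)          # low cost where a boundary is likely
    rows, cols = cost.shape
    acc = cost.copy()
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev[k]
            back[r, c] = lo + k
    # Trace the minimum-cost path back from the last column.
    boundary = np.empty(cols, dtype=int)
    boundary[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        boundary[c - 1] = back[boundary[c], c]
    return boundary  # row index of the layer per column
```
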
UNSUPERVISED VIDEO SEGMENTATION ALGORITHMS BASED ON FLEXIBLY REGULARIZED MIXTURE MODELS.
Proceedings. International Conference on Image Processing. Pub Date: 2022-10-01. Epub Date: 2022-10-18. DOI: 10.1109/icip46576.2022.9897691
Authors: Claire Launay, Jonathan Vacher, Ruben Coen-Cagli
Abstract: We propose a family of probabilistic segmentation algorithms for videos that rely on a generative model capturing static and dynamic natural image statistics. Our framework adopts flexibly regularized mixture models (FlexMM) [1], an efficient method to combine mixture distributions across different data sources. FlexMMs of Student-t distributions successfully segment static natural images through uncertainty-based information sharing between hidden layers of CNNs. We extend this approach to videos and exploit FlexMM to propagate segment labels across space and time. We show that temporal propagation improves the temporal consistency of segmentation, qualitatively reproducing a key aspect of human perceptual grouping. Moreover, Student-t distributions can capture the statistics of optical flow in natural movies, which represents apparent motion in the video. Integrating these motion cues into our temporal FlexMM further enhances the segmentation of each frame of natural movies. Our probabilistic dynamic segmentation algorithms thus provide a new framework to study uncertainty in human dynamic perceptual segmentation.
Pages: 4073-4077. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9670685/pdf/nihms-1849113.pdf
Citations: 0
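
For readers unfamiliar with the building block, a minimal E-step for a Student-t mixture is sketched below (responsibilities only; FlexMM's flexible regularization and cross-source coupling are in the paper and not reproduced here). Assumes SciPy >= 1.6 for `scipy.stats.multivariate_t`; variable names are illustrative.

```python
# Minimal sketch of the E-step of a Student-t mixture model, the basic
# component that FlexMM regularizes and shares across data sources.
import numpy as np
from scipy.stats import multivariate_t

def responsibilities(x, weights, locs, shapes, dfs):
    # x: (n, d) features (e.g., CNN activations or optical-flow vectors).
    # weights/locs/shapes/dfs: per-component mixture parameters.
    # Returns (n, k) posterior probability of each component per sample.
    lik = np.stack(
        [w * multivariate_t.pdf(x, loc=m, shape=s, df=v)
         for w, m, s, v in zip(weights, locs, shapes, dfs)],
        axis=1,
    )
    return lik / lik.sum(axis=1, keepdims=True)
```
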
LEARNING TO CORRECT AXIAL MOTION IN OCT FOR 3D RETINAL IMAGING.
Proceedings. International Conference on Image Processing. Pub Date: 2021-09-01. Epub Date: 2021-08-23. DOI: 10.1109/icip42928.2021.9506620
Authors: Yiqian Wang, Alexandra Warter, Melina Cavichini-Cordeiro, William R Freeman, Dirk-Uwe G Bartsch, Truong Q Nguyen, Cheolhong An
Abstract: Optical Coherence Tomography (OCT) is a powerful technique for non-invasive, high-resolution 3D imaging of biological tissues that has revolutionized retinal imaging. A major challenge in OCT imaging is the motion artifacts introduced by involuntary eye movements. In this paper, we propose a convolutional neural network that learns to correct axial motion in OCT based on a single volumetric scan. The proposed method can correct large motion while preserving the overall curvature of the retina. The experimental results show significant improvements in visual quality and overall error compared to conventional methods in both normal and disease cases.
Pages: 126-130. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9359411/pdf/nihms-1823145.pdf
Citations: 0
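
As context for the learning-based approach, the sketch below shows the kind of simple non-learned heuristic it replaces: estimating a per-A-scan axial shift by cross-correlation with the neighboring A-scan and undoing it. Such heuristics tend to flatten the retina, losing the curvature that the proposed CNN is designed to preserve. All names here are illustrative.

```python
# Crude axial-motion baseline (not the paper's method): each A-scan (column)
# is shifted vertically by eye motion; align each to its left neighbor by
# maximizing cross-correlation over integer shifts.
import numpy as np

def axial_shifts(bscan, max_shift=20):
    # bscan: (depth, n_ascans). Returns cumulative shift per A-scan.
    shifts = [0]
    for j in range(1, bscan.shape[1]):
        ref, cur = bscan[:, j - 1], bscan[:, j]
        corr = [np.dot(np.roll(cur, s), ref) for s in range(-max_shift, max_shift + 1)]
        shifts.append(shifts[-1] + (int(np.argmax(corr)) - max_shift))
    return np.asarray(shifts)

def correct(bscan, shifts):
    out = np.empty_like(bscan)
    for j, s in enumerate(shifts):
        out[:, j] = np.roll(bscan[:, j], -s)  # undo shift (wrap-around ignored)
    return out
```
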
A DEEP LEARNING-BASED CAD SYSTEM FOR RENAL ALLOGRAFT ASSESSMENT: DIFFUSION, BOLD, AND CLINICAL BIOMARKERS.
Proceedings. International Conference on Image Processing. Pub Date: 2020-10-01. Epub Date: 2020-09-30. DOI: 10.1109/ICIP40778.2020.9190818
Authors: Mohamed Shehata, Mohammed Ghazal, Hadil Abu Khalifeh, Ashraf Khalil, Ahmed Shalaby, Amy C Dwyer, Ashraf M Bakr, Robert Keynton, Ayman El-Baz
Abstract: Recently, non-invasive approaches to renal transplant evaluation have been explored to control allograft rejection. In this paper, a computer-aided diagnostic system, called RT-CAD, has been developed for early-stage assessment of renal transplant status. The system integrates multiple sources for a more accurate diagnosis: two image-based sources and two clinical sources. The image-based sources are apparent diffusion coefficients (ADCs) and the amount of deoxygenated hemoglobin (R2*). Specifically, the ADCs were extracted from 47 diffusion-weighted magnetic resonance imaging (DW-MRI) scans at 11 different b-values (b0, b50, b100, …, b1000 s/mm²), while the R2* values were extracted from 30 blood oxygen level-dependent MRI (BOLD-MRI) scans at 5 different echo times (2 ms, 7 ms, 12 ms, 17 ms, and 22 ms). The clinical sources are serum creatinine (SCr) and creatinine clearance (CrCl). First, the kidney was segmented in the RT-CAD system using a geometric deformable model, the level-set method. Second, ADCs and R2* were estimated for the patients common to both cohorts (N = 30) and integrated with the corresponding SCr and CrCl. Last, these integrated biomarkers served as discriminatory features for training and testing deep learning-based classifiers, namely stacked auto-encoders (SAEs). Using k-fold cross-validation to evaluate diagnostic performance, the RT-CAD system achieved 93.3% accuracy, 90.0% sensitivity, and 95.0% specificity in differentiating between acute renal rejection (AR) and non-rejection (NR). The reliability of the RT-CAD system was further supported by an area-under-the-curve score of 0.92. These results indicate that the RT-CAD system can reliably diagnose renal transplant status in a non-invasive way.
Pages: 355-359.
Citations: 3
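
Both image-based biomarkers come from mono-exponential signal models, S(b) = S0·exp(-b·ADC) for DW-MRI and S(TE) = S0·exp(-TE·R2*) for BOLD-MRI, so each can be estimated per voxel with a log-linear least-squares fit, as sketched below. The paper's exact fitting procedure is not specified in the abstract; this is a standard approach.

```python
# Per-voxel mono-exponential decay fit, usable for both ADC (vs. b-value)
# and R2* (vs. echo time). Assumes strictly positive signals.
import numpy as np

def monoexp_rate(signals, x):
    # signals: (n_measurements, n_voxels) magnitudes; x: b-values (s/mm^2)
    # or echo times (ms). log S = log S0 - rate * x, so the fitted slope of
    # log-signal against x gives -rate.
    slope = np.polyfit(np.asarray(x, dtype=float), np.log(signals), deg=1)[0]
    return -slope  # ADC in mm^2/s, or R2* in 1/ms

# Echo times as reported in the abstract; the 11 b-values are elided there
# ("b0, b50, b100, …, b1000"), so they are left as a caller-supplied array.
echo_times = np.array([2.0, 7.0, 12.0, 17.0, 22.0])  # ms
```
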
RTIP: A FULLY AUTOMATED ROOT TIP TRACKER FOR MEASURING PLANT GROWTH WITH INTERMITTENT PERTURBATIONS.
Proceedings. International Conference on Image Processing. Pub Date: 2020-10-01. Epub Date: 2020-09-30. DOI: 10.1109/icip40778.2020.9191008
Authors: Deniz Kavzak Ufuktepe, Kannappan Palaniappan, Melissa Elmali, Tobias I Baskin
Abstract: RTip is a tool to quantify plant root growth velocity at sub-pixel accuracy using high-resolution microscopy image sequences. The fully automated RTip tracker is designed for high-throughput analysis of plant phenotyping experiments with episodic perturbations. RTip is able to auto-skip past manual-intervention perturbations, i.e., when the root tip is not under the microscope or the image is distorted or blurred. RTip provides the most accurate root growth velocity results with the lowest variance (i.e., localization jitter) compared to six tracking algorithms, including the top-performing unsupervised Discriminative Correlation Filter Tracker and the Deeper and Wider Siamese Network. RTip is the only tracker able to automatically detect and recover from (occlusion-like) perturbation events of varying duration.
Pages: 2516-2520. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8033648/pdf/nihms-1685784.pdf
Citations: 0
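
RTip's tracker and perturbation handling are more sophisticated than the abstract can convey; the sketch below captures only the flavor of the idea, a normalized cross-correlation tracker that skips frames whose best match score falls below a threshold. The threshold value and all names are assumptions.

```python
# Toy tip tracker with perturbation auto-skip (illustrative only, not RTip):
# track by normalized cross-correlation; a weak peak flags a perturbed frame.
import numpy as np
from skimage.feature import match_template

def track(frames, template, min_score=0.6):
    # frames: iterable of 2-D grayscale images; template: 2-D root-tip patch.
    positions = []
    for frame in frames:
        corr = match_template(frame, template)
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        if corr[peak] < min_score:
            positions.append(None)   # perturbation frame: auto-skip
        else:
            positions.append(peak)   # top-left corner of the best match
    return positions
```
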
QUANTIFYING ACTIN FILAMENTS IN MICROSCOPIC IMAGES USING KEYPOINT DETECTION TECHNIQUES AND A FAST MARCHING ALGORITHM.
Proceedings. International Conference on Image Processing. Pub Date: 2020-10-01. Epub Date: 2020-09-30. DOI: 10.1109/ICIP40778.2020.9191337
Authors: Yi Liu, Alexander Nedo, Kody Seward, Jeffrey Caplan, Chandra Kambhamettu
Abstract: Actin filaments play a fundamental role in numerous cellular processes such as cell growth, proliferation, migration, division, and locomotion. The actin cytoskeleton is highly dynamic and can polymerize and depolymerize in a very short time under different stimuli. To study the mechanics of actin filaments, quantifying the length and number of filaments in each time frame of microscopic images is fundamental. In this paper, we first adopt a Convolutional Neural Network (CNN) to segment actin filaments and then utilize a modified ResNet to detect filament junctions and endpoints. With the binary segmentation and detected keypoints, we apply a fast marching algorithm to obtain the number and length of each actin filament in microscopic images. We have also collected a dataset of 10 microscopic images of actin filaments to test our method. Our experiments show that our approach outperforms existing approaches to this problem in both accuracy and inference time.
Pages: 2506-2510. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7983297/pdf/nihms-1673276.pdf
Citations: 0
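
The final measurement step can be pictured as follows: propagate a front from a detected endpoint through the binary segmentation and read off the arrival time; its maximum along the filament is the filament length. The sketch below uses scikit-image's minimum-cost-path routine as a discrete stand-in for the paper's fast marching algorithm, and assumes unit pixel spacing and a connected filament mask.

```python
# Geodesic filament length from one detected endpoint, restricted to the
# binary segmentation (discrete approximation of fast marching).
import numpy as np
from skimage.graph import MCP_Geometric

def filament_length(mask, endpoint):
    # mask: binary (H, W) segmentation of one filament; endpoint: (row, col).
    costs = np.where(mask, 1.0, np.inf)      # travel only inside the filament
    mcp = MCP_Geometric(costs)
    cum_costs, _ = mcp.find_costs([endpoint])
    # Farthest reachable point inside the mask gives the length in pixels
    # (assumes the mask is a single connected component).
    return np.nanmax(np.where(mask, cum_costs, np.nan))
```
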
A REAL-TIME MEDICAL ULTRASOUND SIMULATOR BASED ON A GENERATIVE ADVERSARIAL NETWORK MODEL.
Proceedings. International Conference on Image Processing. Pub Date: 2019-09-01. Epub Date: 2019-08-26. DOI: 10.1109/icip.2019.8803570
Authors: Bo Peng, Xing Huang, Shiyuan Wang, Jingfeng Jiang
Abstract: This paper presents an artificial intelligence-based ultrasound simulator suitable for medical simulation and clinical training. Specifically, we propose a machine learning approach that realistically simulates ultrasound images using generative adversarial networks (GANs). Using B-mode ultrasound images simulated by a known ultrasound simulator, Field II, an "image-to-image" ultrasound simulator was trained. Through evaluations, we found that the GAN-based simulator can generate B-mode images that follow Rayleigh scattering. Our preliminary study demonstrated that synthesizing ultrasound B-mode images from anatomies inferred from magnetic resonance imaging (MRI) data is feasible. While some image blurring was observed, the B-mode images obtained were both visually and quantitatively comparable to those produced by the Field II simulator. The GAN-based ultrasound simulator is also computationally efficient, achieving a frame rate of 15 frames/second on a regular laptop computer. In the future, the proposed GAN-based simulator will be used to synthesize more realistic-looking ultrasound images with artifacts such as shadowing.
Pages: 4629-4633.
Citations: 12
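
The Rayleigh-scattering observation suggests a simple plausibility check one could run on generated images: fit a Rayleigh distribution to the speckle amplitudes of a homogeneous region and test the fit. The sketch below is illustrative and is not the evaluation protocol used in the paper.

```python
# Check whether speckle amplitudes are consistent with Rayleigh statistics:
# fit a Rayleigh distribution (location fixed at 0) and run a KS test.
import numpy as np
from scipy import stats

def rayleigh_fit_test(envelope):
    # envelope: pre-log-compression speckle amplitudes sampled from a
    # homogeneous region of a generated B-mode image.
    samples = np.ravel(envelope)
    loc, scale = stats.rayleigh.fit(samples, floc=0)
    return stats.kstest(samples, "rayleigh", args=(loc, scale))
```

A large KS p-value would be consistent with (though not proof of) fully developed Rayleigh speckle in the generated images.
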
EARLY ASSESSMENT OF RENAL TRANSPLANTS USING BOLD-MRI: PROMISING RESULTS.
Proceedings. International Conference on Image Processing. Pub Date: 2019-09-01. Epub Date: 2019-08-26. DOI: 10.1109/ICIP.2019.8803042
Authors: M Shehata, A Shalaby, M Ghazal, M Abou El-Ghar, M A Badawy, G Beache, A Dwyer, M El-Melegy, G Giridharan, R Keynton, A El-Baz
Abstract: Non-invasive evaluation of renal transplant function is essential to minimize and manage renal rejection. A computer-assisted diagnostic (CAD) system was developed to evaluate kidney function post-transplantation. The developed CAD system estimates renal function from the amount of blood oxygenation extracted from 3D (2D + time) blood oxygen level-dependent magnetic resonance imaging (BOLD-MRI). BOLD-MRI scans were acquired at five different echo times (2, 7, 12, 17, and 22 ms) from 15 transplant patients. The CAD system first segments the kidneys using the level-set method, then estimates the amount of deoxyhemoglobin, also known as the apparent relaxation rate (R2*). These R2* estimates were used as discriminatory features, both global (mean R2*) and local (pixel-wise R2*), to train and test state-of-the-art machine learning classifiers that differentiate between non-rejection (NR) and acute renal rejection (AR). Using a leave-one-out cross-validation approach with an artificial neural network (ANN) classifier, the CAD system demonstrated 93.3% accuracy, 100% sensitivity, and 90% specificity in distinguishing AR from NR. These preliminary results demonstrate the efficacy of the CAD system for assessing renal allograft status non-invasively.
Pages: 1395-1399.
Citations: 4
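
The reported evaluation protocol, leave-one-out cross-validation of an ANN on R2* features, can be reproduced in outline as below. The feature layout, network size, and training settings are assumptions; the abstract does not specify them.

```python
# Leave-one-out evaluation of an ANN classifier on per-patient R2* features,
# reporting the same metrics quoted in the abstract.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPClassifier

def loo_metrics(X, y):
    # X: (n_patients, n_features) R2* features; y: 1 = AR, 0 = NR.
    preds = np.empty_like(y)
    for train, test in LeaveOneOut().split(X):
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                            random_state=0)
        preds[test] = clf.fit(X[train], y[train]).predict(X[test])
    tp = np.sum((preds == 1) & (y == 1)); tn = np.sum((preds == 0) & (y == 0))
    fp = np.sum((preds == 1) & (y == 0)); fn = np.sum((preds == 0) & (y == 1))
    return {"accuracy": (tp + tn) / len(y),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}
```
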
DEEP MR IMAGE SUPER-RESOLUTION USING STRUCTURAL PRIORS.
Proceedings. International Conference on Image Processing. Pub Date: 2018-10-01. Epub Date: 2018-09-06. DOI: 10.1109/ICIP.2018.8451496
Authors: Venkateswararao Cherukuri, Tiantong Guo, Steven J Schiff, Vishal Monga
Abstract: High-resolution magnetic resonance (MR) images are desired for accurate diagnostics. In practice, image resolution is restricted by factors such as hardware, cost, and processing constraints. Recently, deep learning methods have been shown to produce compelling state-of-the-art results for image super-resolution. Paying particular attention to the desired high-resolution MR image structure, we propose a new regularized network that exploits image priors, namely a low-rank structure and a sharpness prior, to enhance deep MR image super-resolution. Our contribution is to incorporate these priors in an analytically tractable fashion into the learning of a convolutional neural network (CNN) that accomplishes the super-resolution task. This is particularly challenging for the low-rank prior, since the rank is not a differentiable function of the image matrix (and hence of the network parameters); we address this issue by pursuing differentiable approximations of the rank. Sharpness is emphasized by the variance of the Laplacian, which we show can be implemented by a fixed feedback layer at the output of the network. Experiments performed on two publicly available MR brain image databases exhibit promising results, particularly when training imagery is limited.
Pages: 410-414.
Citations: 5
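
The two priors named in the abstract can be sketched as loss terms in PyTorch. Sharpness is the variance of the Laplacian, which, as the abstract notes, amounts to a fixed convolution applied to the network output; for the rank, the nuclear norm is shown here as one common differentiable surrogate, while the paper pursues its own approximations. The weights `alpha` and `beta` are placeholders, not values from the paper.

```python
# Hedged sketch of the structural priors as loss terms on the SR output.
import torch
import torch.nn.functional as F

LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)

def sharpness_prior(img):
    # img: (B, 1, H, W) super-resolved output. Higher variance of the
    # Laplacian response means sharper edges, hence the negative sign
    # (the term is minimized, so sharpness is rewarded).
    lap = F.conv2d(img, LAPLACIAN.to(img.device), padding=1)
    return -lap.var(dim=(1, 2, 3)).mean()

def low_rank_prior(img):
    # Nuclear norm (sum of singular values) as a convex surrogate for the
    # rank of each output slice; a stand-in for the paper's approximations.
    return torch.linalg.svdvals(img.squeeze(1)).sum(dim=1).mean()

def total_loss(sr, hr, alpha=1e-3, beta=1e-4):
    return F.mse_loss(sr, hr) + alpha * sharpness_prior(sr) + beta * low_rank_prior(sr)
```
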