2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017): Latest Publications

Automatic detection of aortic dissection in contrast-enhanced CT
2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017) | Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950582
E. Dehghan, Hongzhi Wang, T. Syeda-Mahmood
{"title":"Automatic detection of aortic dissection in contrast-enhanced CT","authors":"E. Dehghan, Hongzhi Wang, T. Syeda-Mahmood","doi":"10.1109/ISBI.2017.7950582","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950582","url":null,"abstract":"Aortic dissection is a condition in which a tear in the inner wall of the aorta allows blood to flow between two layers of the aortic wall. Aortic dissection is associated with severe chest pain and can be deadly. Contrast-enhanced CT is the main modality for detection of aortic dissection. Aortic dissection is one of the target abnormalities during evaluation of a triple rule-out CT in emergency cases. In this paper, we present a method for automatic patient-level detection of aortic dissection. Our algorithm starts by an atlas-based segmentation of the aorta which is used to produce cross-sectional images of the organ. Segmentation refinement, flap detection and shape analysis are employed to detect aortic dissection in these cross-sectional slices. Then, the slice-level results are aggregated to render a patient-level detection result. We tested our algorithm on a data set of 37 contrast-enhanced CT volumes, with 13 cases of aortic dissection. We achieved an accuracy of 83.8%, a sensitivity of 84.6% and a specificity of 83.3%.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87070594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 20
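The aggregation of slice-level decisions into a patient-level result described above can be illustrated with a minimal Python sketch. The run-length rule, threshold values and function name are assumptions made for illustration only, not the decision rule reported in the paper.

    # A minimal sketch, assuming per-slice dissection scores are already available;
    # the run-length rule and thresholds are illustrative, not the paper's rule.
    import numpy as np

    def patient_level_detection(slice_scores, score_threshold=0.5, min_consecutive=3):
        """Flag a patient as positive if enough consecutive aortic cross-sections
        look dissected, which suppresses isolated slice-level false positives."""
        positive = np.asarray(slice_scores) > score_threshold
        run = best = 0
        for flag in positive:
            run = run + 1 if flag else 0
            best = max(best, run)
        return best >= min_consecutive

    # Example: scores for 10 cross-sectional slices along the segmented aorta.
    print(patient_level_detection([0.1, 0.2, 0.7, 0.8, 0.9, 0.6, 0.2, 0.1, 0.3, 0.2]))  # True
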
Automated vesicle fusion detection using Convolutional Neural Networks
2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017) | Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950497
Haohan Li, Zhaozheng Yin, Yingke Xu
{"title":"Automated vesicle fusion detection using Convolutional Neural Networks","authors":"Haohan Li, Zhaozheng Yin, Yingke Xu","doi":"10.1109/ISBI.2017.7950497","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950497","url":null,"abstract":"Quantitative analysis of vesicle-plasma membrane fusion events in the fluorescence microscopy, has been proven to be important in the vesicle exocytosis study. In this paper, we present a framework to automatically detect fusion events. First, an iterative searching algorithm is developed to extract image patch sequences containing potential events. Then, we propose an event image to integrate the critical image patches of a candidate event into a single-image joint representation as the input to Convolutional Neural Networks (CNNs). According to the duration of candidate events, we design three CNN architectures to automatically learn features for the fusion event classification. Compared on 9 challenging datasets, our proposed method showed very competitive performance and outperformed two state-of-the-arts.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86373864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
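The "event image" idea above, which folds the patch sequence of one candidate event into a single image for a CNN, can be sketched as follows. The patch selection and side-by-side tiling used here are illustrative assumptions; the paper defines its own construction and three duration-dependent CNN architectures.

    # A minimal sketch, assuming the joint representation tiles a few evenly spaced
    # patches horizontally; make_event_image and n_keep are illustrative names/choices.
    import numpy as np

    def make_event_image(patches, n_keep=5):
        """Pick n_keep evenly spaced patches from a candidate-event sequence and
        tile them horizontally into one 2-D array for CNN input."""
        idx = np.linspace(0, len(patches) - 1, n_keep).round().astype(int)
        return np.concatenate([patches[i] for i in idx], axis=1)

    # Example: a 12-frame candidate event of 16x16 fluorescence patches.
    sequence = [np.random.rand(16, 16) for _ in range(12)]
    print(make_event_image(sequence).shape)  # (16, 80)
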
A simple respiratory motion analysis method for chest tomosynthesis
2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017) | Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950569
Hua Zhang, X. Tao, G. Qin, Jianhua Ma, Qianjin Feng, Wufan Chen
{"title":"A simple respiratory motion analysis method for chest tomosynthesis","authors":"Hua Zhang, X. Tao, G. Qin, Jianhua Ma, Qianjin Feng, Wufan Chen","doi":"10.1109/ISBI.2017.7950569","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950569","url":null,"abstract":"Chest tomosynthesis (CTS) is a newly developed imaging technique which provides pseudo-3D volume anatomical information of thorax from limited angle projections and therefore improves the visibility of anatomy without so much increase on radiation dose compared to the chest radiography (CXR). However, one of the relatively common problems in CTS is the respiratory motion of patient during image acquisition, which negatively impacts the detectability. In this paper, we propose a sin-quadratic model to analyze the respiratory motion during CTS scanning, which is a real time method that generates the respiratory signal by directly extracting the motion of diaphragm during data acquisition. According to the extracted respiratory signal, physicians could re-scan the patient immediately or conduct motion free CTS image reconstruction for patients that could not hold their breath perfectly during the scan time. The effectiveness of the proposed model was demonstrated with both the simulated phantom data and the real patient data.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83678352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
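A sketch of fitting such a model to a diaphragm-position trace is given below. Reading "sin-quadratic" as a sinusoid superimposed on a quadratic drift is an assumption made here for illustration only; the paper defines the actual model form, and the trace below is synthetic.

    # A minimal sketch, assuming a sinusoid-plus-quadratic-drift form; the function
    # name, parameters and synthetic trace are illustrative only.
    import numpy as np
    from scipy.optimize import curve_fit

    def sin_quadratic(t, a, f, phi, c0, c1, c2):
        return a * np.sin(2 * np.pi * f * t + phi) + c0 + c1 * t + c2 * t ** 2

    t = np.linspace(0, 10, 200)                              # scan time in seconds
    trace = 5 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * t ** 2  # synthetic diaphragm position
    trace += np.random.normal(0, 0.3, t.size)

    p0 = [5, 0.25, 0, 0, 0, 0]                               # rough initial guess
    params, _ = curve_fit(sin_quadratic, t, trace, p0=p0)
    print("estimated breathing frequency (Hz):", params[1])
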
Enhancement of 250-MHz quantitative acoustic-microscopy data using a single-image super-resolution method
2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017) | Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950645
A. Basarab, D. Rohrbach, Ningning Zhao, J. Tourneret, D. Kouamé, J. Mamou
{"title":"Enhancement of 250-MHz quantitative acoustic-microscopy data using a single-image super-resolution method","authors":"A. Basarab, D. Rohrbach, Ningning Zhao, J. Tourneret, D. Kouamé, J. Mamou","doi":"10.1109/ISBI.2017.7950645","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950645","url":null,"abstract":"Scanning acoustic microscopy (SAM) is a well-accepted imaging modality for forming quantitative, two-dimensional maps of acoustic properties of soft tissues at microscopic scales. The quantitative maps formed using our custom SAM system using a 250-MHz single-element transducer have a nominal resolution of 7 µm, which is insufficient for some investigations. To enhance spatial resolution, a SAM system operating at even higher frequencies could be designed, but associated costs and experimental difficulties are challenging. Therefore, the objective of this study is to evaluate the potential of super-resolution (SR) image processing to enhance the spatial resolution of quantitative maps in SAM. To the best of our knowledge, this is the first attempt at using post-processing, image-enhancement techniques in SAM. Results of realistic simulations and experimental data acquired from a standard resolution test pattern confirm the improved spatial resolution and the potential value of using SR in SAM.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79459530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
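To make concrete what post-processing resolution enhancement operates on, here is a deliberately generic single-image upsampling baseline (cubic interpolation plus unsharp masking). It is a stand-in for illustration only and is not the SR method evaluated in the paper.

    # A generic baseline sketch, not the paper's SR model; array sizes and the
    # sharpening parameters are arbitrary illustrative choices.
    import numpy as np
    from scipy.ndimage import zoom, gaussian_filter

    def naive_super_resolve(qam_map, factor=2, amount=1.0, sigma=1.0):
        """Upsample a quantitative acoustic-microscopy map and re-sharpen edges."""
        up = zoom(qam_map, factor, order=3)       # cubic-spline interpolation
        blurred = gaussian_filter(up, sigma)
        return up + amount * (up - blurred)       # unsharp masking

    lowres = np.random.rand(64, 64)               # stand-in for a 7-um-resolution map
    print(naive_super_resolve(lowres).shape)      # (128, 128)
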
The structural disconnectome: A pathology-sensitive extension of the structural connectome
2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017) | Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950539
C. Langen, M. Vernooij, L. Cremers, Wyke Huizinga, M. Groot, M. Ikram, T. White, W. Niessen
{"title":"The structural disconnectome: A pathology-sensitive extension of the structural connectome","authors":"C. Langen, M. Vernooij, L. Cremers, Wyke Huizinga, M. Groot, M. Ikram, T. White, W. Niessen","doi":"10.1109/ISBI.2017.7950539","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950539","url":null,"abstract":"Brain connectivity is increasingly being studied using connectomes. Typical structural connectome definitions do not directly take white matter pathology into account. Presumably, pathology impedes signal transmission along fibres, leading to a reduction in function. In order to directly study disconnection and localize pathology within the connectome, we present the disconnectome, which only considers fibres that intersect with white matter pathology. To show the potential of the disconnectome in brain studies, we showed in a cohort of 4199 adults with varying loads of white matter lesions (WMLs) that: (1) Disconnection is not a function of streamline density; (2) Hubs are more affected by WMLs than peripheral nodes; (3) Connections between hubs are more severely and frequently affected by WMLs than other connection types; and (4) Connections between region clusters are often more severely affected than those within clusters.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89297445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
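The core construction, keeping only streamlines that pass through a lesion mask and counting them per region pair, can be sketched as follows. The input formats, endpoint-labelling rule and function name are illustrative assumptions, not the authors' implementation.

    # A minimal sketch, assuming streamlines are given as (n_points, 3) voxel
    # coordinates and the parcellation uses labels 1..n_regions (0 = background).
    import numpy as np

    def disconnectome(streamlines, lesion_mask, parcellation, n_regions):
        conn = np.zeros((n_regions, n_regions), dtype=int)
        for sl in streamlines:
            vox = np.round(sl).astype(int)
            if not lesion_mask[vox[:, 0], vox[:, 1], vox[:, 2]].any():
                continue                                  # fibres missing all lesions are ignored
            a = parcellation[tuple(vox[0])]               # region label at one endpoint
            b = parcellation[tuple(vox[-1])]              # region label at the other endpoint
            if a > 0 and b > 0:
                conn[a - 1, b - 1] += 1
                conn[b - 1, a - 1] += 1
        return conn
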
An easy-to-use image labeling platform for automatic magnetic resonance image quality assessment
2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017) | Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950628
Thomas Kustner, Philipp Wolf, Martin Schwartz, Annika Liebgott, F. Schick, S. Gatidis, Bin Yang
{"title":"An easy-to-use image labeling platform for automatic magnetic resonance image quality assessment","authors":"Thomas Kustner, Philipp Wolf, Martin Schwartz, Annika Liebgott, F. Schick, S. Gatidis, Bin Yang","doi":"10.1109/ISBI.2017.7950628","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950628","url":null,"abstract":"In medical imaging, images are usually evaluated by a human observer (HO) depending on the underlying diagnostic question which can be a time-demanding and cost-intensive process. Model observers (MO) which mimic the human visual system can help to support the HO during this reading process or can provide feedback to the MR scanner and/or HO about the derived image quality. For this purpose MOs are trained on HO-derived image labels with respect to a certain diagnostic task. We propose a non-reference image quality assessment system based on a machine-learning approach with a deep neural network and active learning to keep the amount of needed labeled training data small. A labeling platform is developed as a web application with accounted data security and confidentiality to facilitate the HO labeling procedure. The platform is made publicly available.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91233132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
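The active-learning loop that keeps the HO labeling effort small can be sketched generically: train a model, score the unlabeled pool by uncertainty, and query the most uncertain cases for labeling. The logistic-regression stand-in, random features and batch sizes below are illustrative assumptions; the paper uses a deep neural network on MR images.

    # A minimal uncertainty-sampling sketch with synthetic data; not the paper's model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_pool = rng.normal(size=(500, 20))                 # stand-in feature vectors
    y_pool = (X_pool[:, 0] > 0).astype(int)             # hidden "HO" labels for the demo

    labeled = list(np.where(y_pool == 0)[0][:10]) + list(np.where(y_pool == 1)[0][:10])
    for _ in range(5):                                  # five labeling rounds
        clf = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
        proba = clf.predict_proba(X_pool)[:, 1]
        uncertainty = -np.abs(proba - 0.5)              # closest to 0.5 = most uncertain
        query = [i for i in np.argsort(uncertainty)[::-1] if i not in labeled][:10]
        labeled += query                                # in practice, the HO labels these
    print("labeled samples used:", len(labeled))
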
Multi-angle TOF MR brain angiography of the common marmoset
2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017) | Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950714
M. Mescam, J. Brossard, N. Vayssiere, C. Fonta
{"title":"Multi-angle TOF MR brain angiography of the common marmoset","authors":"M. Mescam, J. Brossard, N. Vayssiere, C. Fonta","doi":"10.1109/ISBI.2017.7950714","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950714","url":null,"abstract":"The relation between normal and pathological aging and the cerebrovascular component is still unclear. In this context, the common marmoset, which has the advantage of enabling longitudinal studies over a reasonable timeframe, appears as a good pre-clinical model. However, there is still a lack of quantitative information on the macrovascular structure of the marmoset brain. In this paper, we investigate the potentiality of multi-angle TOF MR angiography using a 3T MRI scanner to perform morphometric analysis of the marmoset brain vasculature. Our image processing pipeline greatly relies on the use of multiscale vesselness enhancement filters to help extract the 3D macrovasculature and perform subsequent morphometric calculations. Although multi-angle acquisition does not improve morphometric analysis significantly as compared to single-angle acquisition, it improves the network extraction by increasing the robustness of image processing algorithms.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89877684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
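Multiscale vesselness enhancement of the kind used in such pipelines can be illustrated with scikit-image's Frangi filter. The scales, threshold and synthetic volume below are illustrative assumptions, not the parameters of the paper's pipeline.

    # A minimal sketch: multiscale Frangi vesselness on a (synthetic) 3-D angiogram,
    # followed by a crude threshold to extract bright tubular structures.
    import numpy as np
    from skimage.filters import frangi

    volume = np.random.rand(64, 64, 64)                 # stand-in for a TOF MRA volume
    vesselness = frangi(volume, sigmas=(1, 2, 3), black_ridges=False)
    vessel_mask = vesselness > 0.5 * vesselness.max()   # rough macrovasculature mask
    print(vessel_mask.sum(), "voxels kept")
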
HEp-2 cell classification based on a Deep Autoencoding-Classification convolutional neural network
2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017) | Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950689
Jingxin Liu, Bolei Xu, L. Shen, J. Garibaldi, G. Qiu
{"title":"HEp-2 cell classification based on a Deep Autoencoding-Classification convolutional neural network","authors":"Jingxin Liu, Bolei Xu, L. Shen, J. Garibaldi, G. Qiu","doi":"10.1109/ISBI.2017.7950689","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950689","url":null,"abstract":"In this paper, we present a novel deep learning model termed Deep Autoencoding-Classification Network (DACN) for HEp-2 cell classification. The DACN consists of an autoencoder and a normal classification convolutional neural network (CNN), while the two architectures shares the same encoding pipeline. The DACN model is jointly optimized for the classification error and the image reconstruction error based on a multi-task learning procedure. We evaluate the proposed model using the publicly available ICPR2012 benchmark dataset. We show that this architecture is particularly effective when the training dataset is small which is often the case in medical imaging applications. We present experimental results to show that the proposed approach outperforms all known state of the art HEp-2 cell classification methods.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77896441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 15
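The shared-encoder, joint-loss structure can be sketched in a few lines of PyTorch: one encoder feeds both a classifier head and a decoder, and the cross-entropy and reconstruction losses are summed. Layer sizes, the loss weight and the class count are illustrative assumptions, not the DACN configuration from the paper.

    # A minimal sketch of an autoencoding-classification network; all sizes are illustrative.
    import torch
    import torch.nn as nn

    class TinyDACN(nn.Module):
        def __init__(self, n_classes=6):
            super().__init__()
            self.encoder = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                                         nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU())
            self.decoder = nn.Sequential(nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
                                         nn.ConvTranspose2d(8, 1, 2, stride=2))
            self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16 * 16, n_classes))

        def forward(self, x):
            z = self.encoder(x)                          # shared encoding pipeline
            return self.classifier(z), self.decoder(z)

    model = TinyDACN()
    x = torch.randn(4, 1, 64, 64)                        # a batch of 64x64 cell images
    y = torch.randint(0, 6, (4,))
    logits, recon = model(x)
    loss = nn.CrossEntropyLoss()(logits, y) + 0.5 * nn.MSELoss()(recon, x)  # joint objective
    loss.backward()
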
Neuron reconstruction from fluorescence microscopy images using sequential Monte Carlo estimation
2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017) | Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950462
M. Radojević, E. Meijering
{"title":"Neuron reconstruction from fluorescence microscopy images using sequential Monte Carlo estimation","authors":"M. Radojević, E. Meijering","doi":"10.1109/ISBI.2017.7950462","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950462","url":null,"abstract":"Microscopic analysis of neuronal cell morphology is required in many studies in neurobiology. The development of computational methods for this purpose is an ongoing challenge and includes solving some of the fundamental computer vision problems such as detecting and grouping sometimes very noisy line-like image structures. Advancements in the field are impeded by the complexity and immense diversity of neuronal cell shapes across species and brain regions, as well as by the high variability in image quality across labs and experimental setups. Here we present a novel method for fully automatic neuron reconstruction based on sequential Monte Carlo estimation. It uses newly designed models for predicting and updating branch node estimates as well as novel initialization and final tree construction strategies. The proposed method was evaluated on 3D fluorescence microscopy images containing single neurons and neuronal networks for which manual annotations were available as gold-standard references. The results indicate that our method performs favorably compared to state-of-the-art alternative methods.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72954974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
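A generic bootstrap particle-filter step of the kind underlying sequential Monte Carlo tracing is sketched below: propagate particles along their heading with random turns, weight them by image intensity, and resample. The prediction and update models here are generic 2-D placeholders, not the models designed in the paper.

    # A minimal 2-D predict-weight-resample sketch with generic models; illustrative only.
    import numpy as np

    def smc_step(positions, directions, image, step=1.0, turn_sigma=0.2, rng=np.random):
        angles = np.arctan2(directions[:, 1], directions[:, 0])
        angles += rng.normal(0, turn_sigma, len(angles))           # perturb each heading
        directions = np.stack([np.cos(angles), np.sin(angles)], axis=1)
        positions = positions + step * directions                  # predict new node positions
        ij = np.clip(np.round(positions).astype(int), 0, np.array(image.shape) - 1)
        weights = image[ij[:, 0], ij[:, 1]] + 1e-12                # brighter pixels are more neurite-like
        weights /= weights.sum()
        keep = rng.choice(len(positions), len(positions), p=weights)  # multinomial resampling
        return positions[keep], directions[keep]

    image = np.random.rand(128, 128)                               # stand-in fluorescence image
    pos = np.full((100, 2), 64.0)                                  # 100 particles seeded at one point
    dirs = np.tile([1.0, 0.0], (100, 1))
    pos, dirs = smc_step(pos, dirs, image)
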
Feature selection and thyroid nodule classification using transfer learning
2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017) | Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950707
Tianjiao Liu, Shuaining Xie, Yukang Zhang, Jing Yu, Lijuan Niu, Weidong Sun
{"title":"Feature selection and thyroid nodule classification using transfer learning","authors":"Tianjiao Liu, Shuaining Xie, Yukang Zhang, Jing Yu, Lijuan Niu, Weidong Sun","doi":"10.1109/ISBI.2017.7950707","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950707","url":null,"abstract":"Ultrasonography is a valuable diagnosis method for thyroid nodules. Automatically discriminating benign and malignant nodules in the ultrasound images can provide aided diagnosis suggestions, or increase the diagnosis accuracy when lack of experts. The core problem in this issue is how to capture appropriate features for this specific task. Here, we propose a feature extraction method for ultrasound images based on the convolution neural networks (CNNs), try to introduce more meaningful and specific features to the classification. A CNN model trained with ImageNet data is transferred to the ultrasound image domain, to generate semantic deep features under small sample condition. Then, we combine those deep features with conventional features such as Histogram of Oriented Gradient (HOG) and Scale Invariant Feature Transform (SIFT) together to form a hybrid feature space. Furthermore, to make the general deep features more pertinent to our problem, a feature subset selection process is employed for the hybrid nodule classification, followed by a detailed discussion on the influence of feature number and feature composition method. Experimental results on 1037 images show that the accuracy of our proposed method is 0.929, which outperforms other relative methods by over 10%.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72801861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 23
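Building a hybrid feature space and selecting a subset before classification can be sketched as follows. The random "deep" feature vectors stand in for transferred CNN activations, and the HOG settings, selector and classifier are illustrative choices, not the paper's pipeline.

    # A minimal sketch with synthetic data: hybrid features, univariate selection, SVM.
    import numpy as np
    from skimage.feature import hog
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    images = rng.random((60, 64, 64))                    # stand-in ultrasound patches
    labels = rng.integers(0, 2, 60)                      # benign (0) / malignant (1)

    hog_feats = np.array([hog(im, pixels_per_cell=(16, 16), cells_per_block=(2, 2)) for im in images])
    deep_feats = rng.normal(size=(60, 256))              # placeholder for transferred CNN features
    hybrid = np.hstack([deep_feats, hog_feats])          # hybrid feature space

    selector = SelectKBest(f_classif, k=100).fit(hybrid, labels)
    clf = SVC().fit(selector.transform(hybrid), labels)
    print("training accuracy:", clf.score(selector.transform(hybrid), labels))
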