{"title":"An Automatic Image Processing System for Glaucoma Screening.","authors":"Ahmed Almazroa, Sami Alodhayb, Kaamran Raahemifar, Vasudevan Lakshminarayanan","doi":"10.1155/2017/4826385","DOIUrl":"https://doi.org/10.1155/2017/4826385","url":null,"abstract":"<p><p>Horizontal and vertical cup-to-disc ratios are the most crucial parameters used clinically to detect glaucoma or monitor its progress and are manually evaluated from retinal fundus images of the optic nerve head. Due to the scarcity of glaucoma experts and the growing glaucoma population, automatically calculated horizontal and vertical cup-to-disc ratios (HCDR and VCDR, respectively) can be useful for glaucoma screening. We report on two algorithms to calculate the HCDR and VCDR. In the algorithms, level set and inpainting techniques were developed for segmenting the disc, while thresholding using a Type-II fuzzy approach was developed for segmenting the cup. The results from the algorithms were verified against the manual markings of images from a dataset of glaucomatous images (retinal fundus images for glaucoma analysis (RIGA dataset)) by six ophthalmologists. The algorithm's accuracy for HCDR and VCDR combined was 74.2%. Only the manual markings of one ophthalmologist were more accurate than the algorithm's results. 
The algorithm agreed best with the markings of ophthalmologist number 1, in 230 (41.8%) of the tested images.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2017 ","pages":"4826385"},"PeriodicalIF":7.6,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2017/4826385","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35545367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
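Once binary disc and cup masks have been segmented as the abstract above describes, the HCDR and VCDR reduce to bounding-box width and height ratios. The following is an illustrative sketch (not the paper's code); the mask layout and helper names are assumptions.

```python
import numpy as np

def extent(mask):
    """Height and width of the bounding box of a binary mask."""
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    if rows.size == 0:
        return 0, 0
    return rows[-1] - rows[0] + 1, cols[-1] - cols[0] + 1

def cup_to_disc_ratios(cup_mask, disc_mask):
    """Return (HCDR, VCDR): horizontal and vertical cup-to-disc ratios."""
    cup_h, cup_w = extent(cup_mask)
    disc_h, disc_w = extent(disc_mask)
    return cup_w / disc_w, cup_h / disc_h

# Toy example: a 40x60 disc containing a 20x30 cup.
disc = np.zeros((100, 100), dtype=bool)
disc[30:70, 20:80] = True
cup = np.zeros((100, 100), dtype=bool)
cup[40:60, 35:65] = True
print(cup_to_disc_ratios(cup, disc))  # -> (0.5, 0.5)
```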
{"title":"Medical Image Fusion Based on Feature Extraction and Sparse Representation.","authors":"Yin Fei, Gao Wei, Song Zongxi","doi":"10.1155/2017/3020461","DOIUrl":"https://doi.org/10.1155/2017/3020461","url":null,"abstract":"<p><p>As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, standard sparse representation takes neither intrinsic structure nor time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and a decision map is proposed to deal with these problems simultaneously. Three decision maps are designed, including a structure information map (SM), an energy information map (EM), and a combined structure and energy map (SEM), to make the results preserve more energy and edge information. SM contains the local structure feature captured by the Laplacian of a Gaussian (LoG), and EM contains the energy and energy-distribution feature detected by the mean square deviation. The decision map is added to the standard sparse-representation-based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. 
The experimental results on 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2017 ","pages":"3020461"},"PeriodicalIF":7.6,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2017/3020461","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34837712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
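The SM and EM maps in the abstract above can be sketched in a few lines of NumPy: a Laplacian response stands in for the LoG at small sigma, and local variance implements the mean-square-deviation energy. The window size and the way SM and EM are combined into a SEM-style decision are assumptions for illustration.

```python
import numpy as np

def laplacian(img):
    """4-neighbour discrete Laplacian with edge padding."""
    p = np.pad(img.astype(float), 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * p[1:-1, 1:-1])

def local_mean(img, k=1):
    """Mean over a (2k+1)x(2k+1) window, edge-padded."""
    p = np.pad(img.astype(float), k, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += p[k + dy:p.shape[0] - k + dy, k + dx:p.shape[1] - k + dx]
    return out / (2 * k + 1) ** 2

def structure_map(img):
    return np.abs(laplacian(img))            # SM: local structure response

def energy_map(img, k=1):
    m = local_mean(img, k)                   # EM: local variance
    return np.maximum(local_mean(img * img, k) - m * m, 0.0)

def sem_decision(img_a, img_b):
    """Boolean map: True where source A carries more structure + energy."""
    score_a = structure_map(img_a) + energy_map(img_a)
    score_b = structure_map(img_b) + energy_map(img_b)
    return score_a >= score_b

# An edge-bearing source should win everywhere against a flat source.
img_a = np.zeros((8, 8)); img_a[:, 4:] = 1.0
img_b = np.full((8, 8), 0.5)
```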
{"title":"A Novel Histogram Region Merging Based Multithreshold Segmentation Algorithm for MR Brain Images.","authors":"Siyan Liu, Xuanjing Shen, Yuncong Feng, Haipeng Chen","doi":"10.1155/2017/9759414","DOIUrl":"https://doi.org/10.1155/2017/9759414","url":null,"abstract":"<p><p>Multithreshold segmentation algorithms are time-consuming, and their time complexity increases exponentially with the number of thresholds. In order to reduce the time complexity, a novel multithreshold segmentation algorithm is proposed in this paper. First, all gray levels are used as thresholds, so the histogram of the original image is divided into 256 small regions, each corresponding to one gray level. Then, two adjacent regions are merged in each iteration by a newly designed scheme, and a threshold is removed each time. To improve the accuracy of the merging operation, variance and probability are used as the energy measure. No matter how many thresholds there are, the time complexity of the algorithm remains stable at <i>O</i>(<i>L</i>). Finally, experiments are conducted on many MR brain images to verify the performance of the proposed algorithm. The experimental results show that our method reduces the running time effectively and obtains segmentation results with high accuracy.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2017 ","pages":"9759414"},"PeriodicalIF":7.6,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2017/9759414","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34912736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
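The greedy region-merging idea in the abstract above can be sketched as follows: start with one region per gray level and repeatedly merge the adjacent pair whose merged region has the smallest probability-weighted variance, until the desired number of thresholds remains. The exact energy is an assumption (the paper combines variance and probability similarly), and this naive rescan of all pairs is O(L^2) for clarity, whereas the paper's bookkeeping achieves O(L).

```python
import numpy as np

def region_energy(hist, a, b):
    """Probability-weighted variance of gray levels a..b (inclusive)."""
    levels = np.arange(a, b + 1)
    p = hist[a:b + 1]
    w = p.sum()
    if w == 0:
        return 0.0
    mean = (levels * p).sum() / w
    var = ((levels - mean) ** 2 * p).sum() / w
    return w * var

def merge_thresholds(hist, n_thresholds):
    hist = np.asarray(hist, float)
    hist = hist / hist.sum()                       # gray-level probabilities
    regions = [(g, g) for g in range(len(hist))]   # one region per gray level
    while len(regions) > n_thresholds + 1:
        # cost of merging each adjacent pair; merge the cheapest
        costs = [region_energy(hist, regions[i][0], regions[i + 1][1])
                 for i in range(len(regions) - 1)]
        i = int(np.argmin(costs))
        regions[i:i + 2] = [(regions[i][0], regions[i + 1][1])]
    # thresholds are the boundaries between the surviving regions
    return [r[1] for r in regions[:-1]]

# Bimodal toy histogram over 8 gray levels: one threshold between the modes.
print(merge_thresholds([5, 5, 0, 0, 0, 0, 5, 5], 1))  # -> [5]
```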
{"title":"Monte Carlo Simulation for Polychromatic X-Ray Fluorescence Computed Tomography with Sheet-Beam Geometry.","authors":"Shanghai Jiang, Peng He, Luzhen Deng, Mianyi Chen, Biao Wei","doi":"10.1155/2017/7916260","DOIUrl":"https://doi.org/10.1155/2017/7916260","url":null,"abstract":"<p><p>X-ray fluorescence computed tomography (XFCT) based on a sheet beam can save a huge amount of time in obtaining a whole set of projections using a synchrotron. However, synchrotron sources are clearly impractical for most biomedical research laboratories. In this paper, polychromatic X-ray fluorescence computed tomography with sheet-beam geometry is tested by Monte Carlo simulation. First, two phantoms (<i>A</i> and <i>B</i>) filled with PMMA are used to simulate the imaging process in GEANT4. Phantom <i>A</i> contains several GNP-loaded regions with the same size (10 mm in height and diameter) but different Au weight concentrations ranging from 0.3% to 1.8%. Phantom <i>B</i> contains twelve GNP-loaded regions with the same Au weight concentration (1.6%) but different diameters ranging from 1 mm to 9 mm. Second, a discretized representation of the imaging model is established to reconstruct more accurate XFCT images. Third, XFCT images of phantoms <i>A</i> and <i>B</i> are reconstructed by filtered back-projection (FBP) and maximum likelihood expectation maximization (MLEM), with and without correction, respectively. The contrast-to-noise ratio (CNR) is calculated to evaluate all the reconstructed images. 
Our results show that a sheet-beam XFCT system based on a polychromatic X-ray source is feasible and that the discretized imaging model can be used to reconstruct more accurate images.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2017 ","pages":"7916260"},"PeriodicalIF":7.6,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2017/7916260","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35047052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
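The CNR metric used above has several conventions in the literature; the abstract does not give the paper's exact formula, so this sketch assumes a common one: the absolute difference of ROI and background means divided by the background standard deviation.

```python
import numpy as np

def cnr(image, roi_mask, bg_mask):
    """Contrast-to-noise ratio: |mean(ROI) - mean(BG)| / std(BG)."""
    roi, bg = image[roi_mask], image[bg_mask]
    return abs(roi.mean() - bg.mean()) / bg.std()

# Toy example: a bright 2x2 insert next to a noisy background.
img = np.array([[10.0, 10.0, 1.0, 3.0],
                [10.0, 10.0, 3.0, 1.0]])
roi = np.zeros_like(img, dtype=bool)
roi[:, :2] = True
bg = ~roi
print(cnr(img, roi, bg))  # -> 8.0
```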
{"title":"Intraoperative Imaging Modalities and Compensation for Brain Shift in Tumor Resection Surgery.","authors":"Siming Bayer, Andreas Maier, Martin Ostermeier, Rebecca Fahrig","doi":"10.1155/2017/6028645","DOIUrl":"10.1155/2017/6028645","url":null,"abstract":"<p><p>Intraoperative brain shift during neurosurgical procedures is a well-known phenomenon caused by gravity, tissue manipulation, tumor size, loss of cerebrospinal fluid (CSF), and use of medication. For image-guided systems, this phenomenon greatly affects the accuracy of the guidance. During the last several decades, researchers have investigated how to overcome this problem. The purpose of this paper is to present a review of publications concerning different aspects of intraoperative brain shift in tumor resection surgery, such as intraoperative imaging systems, quantification, measurement, modeling, and registration techniques. Clinical experience with intraoperative imaging modalities and details about registration and modeling methods in connection with brain shift in tumor resection surgery are the focus of this review. In total, 126 papers on this topic are analyzed in a comprehensive summary and categorized according to fourteen criteria. The result of the categorization is presented in an interactive web tool. 
Conclusions from the categorization and future trends are discussed at the end of this work.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2017 ","pages":"6028645"},"PeriodicalIF":7.6,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2017/6028645","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35142099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Space and Time Resolved Detection of Platelet Activation and von Willebrand Factor Conformational Changes in Deep Suspensions.","authors":"Jacopo Biasetti, Kaushik Sampath, Angel Cortez, Alaleh Azhir, Assaf A Gilad, Thomas S Kickler, Tobias Obser, Zaverio M Ruggeri, Joseph Katz","doi":"10.1155/2017/8318906","DOIUrl":"https://doi.org/10.1155/2017/8318906","url":null,"abstract":"<p><p>Tracking the phenotypic changes of cells and proteins in deep suspensions is critical for the direct imaging of blood-related phenomena in <i>in vitro</i> replicas of cardiovascular systems and blood-handling devices. This paper introduces fluorescence imaging techniques for space- and time-resolved detection of platelet activation, von Willebrand factor (VWF) conformational changes, and VWF-platelet interaction in deep suspensions. Labeled VWF, platelets, and VWF-platelet strands are suspended in deep cuvettes, illuminated, and imaged with a high-sensitivity EM-CCD camera, allowing detection with an exposure time of 1 ms. In-house postprocessing algorithms identify and track the moving signals. Recombinant VWF-eGFP (rVWF-eGFP) and VWF labeled with an FITC-conjugated polyclonal antibody are employed. Anti-P-Selectin FITC-conjugated antibodies and the calcium-sensitive probe Indo-1 are used to detect activated platelets. A positive correlation between the mean number of platelets detected per image and the percentage of activated platelets determined through flow cytometry is obtained, validating the technique. An increase in the number of rVWF-eGFP signals upon exposure to shear stress demonstrates the technique's ability to detect the breakup of self-aggregates. VWF globular and unfolded conformations and self-aggregation are also observed. 
The ability to track the size and shape of VWF-platelet strands in space and time provides a means of detecting pro- and antithrombotic processes.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2017 ","pages":"8318906"},"PeriodicalIF":7.6,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2017/8318906","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35650754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Phase Segmentation Methods for an Automatic Surgical Workflow Analysis.","authors":"Dinh Tuan Tran, Ryuhei Sakurai, Hirotake Yamazoe, Joo-Ho Lee","doi":"10.1155/2017/1985796","DOIUrl":"https://doi.org/10.1155/2017/1985796","url":null,"abstract":"In this paper, we present robust methods for automatically segmenting phases in a specified surgical workflow by using latent Dirichlet allocation (LDA) and hidden Markov model (HMM) approaches. More specifically, our goal is to output an appropriate phase label for each given time point of a surgical workflow in an operating room. The fundamental idea behind our work lies in constructing an HMM based on observed values obtained via an LDA topic model covering optical flow motion features of general working contexts, including medical staff, equipment, and materials. We obtain awareness of such working contexts by using multiple synchronized cameras to capture the surgical workflow. Further, we validate the robustness of our methods by conducting experiments involving up to 12 phases of surgical workflows, with the average length of each surgical workflow being 12.8 minutes. The maximum average accuracy achieved after applying leave-one-out cross-validation was 84.4%, which we found to be a very promising result.","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2017 ","pages":"1985796"},"PeriodicalIF":7.6,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2017/1985796","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34912735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
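The decoding step described above — per-frame observations from an LDA topic model fed into an HMM that outputs one phase label per time point — can be sketched with a small Viterbi decoder. The two-phase parameters below are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Most likely state path for observation indices `obs`.
    start: (S,), trans: (S, S), emit: (S, O) probabilities."""
    logp = np.log(start) + np.log(emit[:, obs[0]])
    backptr = []
    for o in obs[1:]:
        cand = logp[:, None] + np.log(trans)    # cand[i, j]: state i -> j
        backptr.append(np.argmax(cand, axis=0))
        logp = cand.max(axis=0) + np.log(emit[:, o])
    path = [int(np.argmax(logp))]
    for bp in reversed(backptr):                # backtrack the best path
        path.append(int(bp[path[-1]]))
    return path[::-1]

# "Sticky" phases: staying in a phase is far more likely than switching,
# so a single outlier topic does not flip the phase label.
start = np.array([0.5, 0.5])
trans = np.array([[0.9, 0.1], [0.1, 0.9]])
emit = np.array([[0.8, 0.2], [0.2, 0.8]])       # P(topic | phase)
print(viterbi([0, 0, 1, 0, 1, 1], start, trans, emit))
```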
{"title":"Image Retrieval Method for Multiscale Objects from Optical Colonoscopy Images.","authors":"Hirokazu Nosato, Hidenori Sakanashi, Eiichi Takahashi, Masahiro Murakawa, Hiroshi Aoki, Ken Takeuchi, Yasuo Suzuki","doi":"10.1155/2017/7089213","DOIUrl":"https://doi.org/10.1155/2017/7089213","url":null,"abstract":"<p><p>Optical colonoscopy is the most common approach to diagnosing bowel diseases through direct inspection of the colon and rectum. Periodic optical colonoscopy examinations are particularly important for detecting cancers at early stages while they are still treatable. However, diagnostic accuracy is highly dependent on both the experience and knowledge of the medical doctor. Moreover, it is extremely difficult, even for specialist doctors, to detect the early stages of cancer when obscured by inflammation of the colonic mucosa due to intractable inflammatory bowel diseases, such as ulcerative colitis (UC). Thus, to assist UC diagnosis, it is necessary to develop a new technology that can retrieve past cases similar to a diagnostic target image from a store of diagnosed images showing various symptoms of the colonic mucosa. In order to assist diagnoses with optical colonoscopy, this paper proposes a retrieval method for colonoscopy images that can cope with multiscale objects. The proposed method can retrieve similar colonoscopy images despite varying visible sizes of the target objects. 
Through three experiments conducted with real clinical colonoscopy images, we demonstrate that the method is able to retrieve objects of any visible size and location with a high level of accuracy.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2017 ","pages":"7089213"},"PeriodicalIF":7.6,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2017/7089213","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34778449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multitemporal Volume Registration for the Analysis of Rheumatoid Arthritis Evolution in the Wrist.","authors":"Roberta Ferretti, Silvana G Dellepiane","doi":"10.1155/2017/7232751","DOIUrl":"10.1155/2017/7232751","url":null,"abstract":"<p><p>This paper describes a method based on an automatic segmentation process to coregister carpal bones of the same patient imaged at different time points. A rigid registration was chosen to avoid artificial bone deformations and to allow any differences in bone shape due to erosion, disease regression, or other pathological signs to be found. The actual registration step is performed on the basis of the principal inertial axes of each carpal bone volume, as estimated from the inertia matrix. In contrast to already published approaches, the proposed method splits the 3D rotation into successive rotations about one axis at a time (the so-called basic or elemental rotations). In this way, the singularity and ambiguity drawbacks affecting other classical methods, for instance, the Euler angles method, are addressed. The proposed method was quantitatively evaluated using a set of real magnetic resonance imaging (MRI) sequences acquired at two different times from healthy wrists, with a direct volumetric comparison chosen as the cost function. 
Neither the segmentation step nor the registration step is based on a priori models, and they are therefore able to obtain good results even in pathological cases, as shown by visual evaluation of actual pathological cases.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2017 ","pages":"7232751"},"PeriodicalIF":7.6,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5672126/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35216080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
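The registration core described above has two parts that are easy to sketch: principal inertial axes from the inertia matrix of a segmented (binary) bone volume, and decomposition of a rotation into successive elemental rotations about one axis at a time. The R = Rz·Ry·Rx convention below is an assumption; the paper's axis order may differ.

```python
import numpy as np

def principal_axes(volume):
    """Columns are the principal inertial axes, ordered by ascending moment."""
    pts = np.argwhere(volume).astype(float)
    pts -= pts.mean(axis=0)                     # moments about the centroid
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    inertia = np.array([
        [np.sum(y * y + z * z), -np.sum(x * y),         -np.sum(x * z)],
        [-np.sum(x * y),         np.sum(x * x + z * z), -np.sum(y * z)],
        [-np.sum(x * z),        -np.sum(y * z),          np.sum(x * x + y * y)]])
    _, vecs = np.linalg.eigh(inertia)           # eigh: ascending eigenvalues
    return vecs

def elemental_angles(R):
    """Angles (rx, ry, rz) such that R = Rz(rz) @ Ry(ry) @ Rx(rx)."""
    ry = np.arcsin(-R[2, 0])
    rx = np.arctan2(R[2, 1], R[2, 2])
    rz = np.arctan2(R[1, 0], R[0, 0])
    return rx, ry, rz
```

For an elongated binary volume, the first column of `principal_axes` (smallest moment of inertia) points along the long axis, which is what makes the axes usable as a bone-specific reference frame.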
{"title":"Fast Compressed Sensing MRI Based on Complex Double-Density Dual-Tree Discrete Wavelet Transform.","authors":"Shanshan Chen, Bensheng Qiu, Feng Zhao, Chao Li, Hongwei Du","doi":"10.1155/2017/9604178","DOIUrl":"https://doi.org/10.1155/2017/9604178","url":null,"abstract":"<p><p>Compressed sensing (CS) has been applied to accelerate magnetic resonance imaging (MRI) for many years. Due to the lack of translation invariance of the wavelet basis, undersampled MRI reconstruction based on the discrete wavelet transform may produce serious artifacts. In this paper, we propose a CS-based reconstruction scheme that combines the complex double-density dual-tree discrete wavelet transform (CDDDT-DWT) with the fast iterative shrinkage/soft thresholding algorithm (FISTA) to efficiently reduce such visual artifacts. The CDDDT-DWT has the characteristics of shift invariance and a high degree of directional selectivity. In addition, FISTA has an excellent convergence rate, and its design is simple. Compared with conventional CS-based reconstruction methods, the experimental results demonstrate that this novel approach achieves a higher peak signal-to-noise ratio (PSNR), higher signal-to-noise ratio (SNR), better structural similarity index (SSIM), and lower relative error.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2017 ","pages":"9604178"},"PeriodicalIF":7.6,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2017/9604178","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34980329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
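The FISTA iteration named above can be sketched generically on an l1-regularized least-squares problem. In the paper, the forward operator is the undersampled Fourier transform and sparsity is enforced in the CDDDT-DWT domain; the small dense matrix here only stands in for that operator to show the soft-thresholding step and momentum extrapolation.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=500):
    """Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_prev = x
        x = soft(y - A.T @ (A @ y - b) / L, lam / L)   # gradient + prox step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + (t - 1.0) / t_next * (x - x_prev)      # momentum extrapolation
        t = t_next
    return x

# Recover a sparse vector from noiseless random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.0, -0.5
x_hat = fista(A, A @ x_true, lam=0.01)
```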