Nguyen P Nguyen, Stephanie Lopez, Catherine L Smith, Teresa E Lever, Nicole L Nichols, Filiz Bunyak
"Axon and Myelin Sheath Segmentation in Electron Microscopy Images using Meta Learning."
IEEE Applied Imagery Pattern Recognition Workshop (AIPR), 2022. Published 2022-10-01. DOI: https://doi.org/10.1109/aipr57179.2022.10092238. PMC PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10197949/pdf/nihms-1895752.pdf

Abstract: Various neurological diseases affect the morphology of myelinated axons. Quantitative analysis of these structures, and of the changes occurring due to neurodegeneration or neuroregeneration, is of great importance for characterizing disease state and treatment response. This paper proposes a robust, meta-learning-based pipeline for segmenting axons and their surrounding myelin sheaths in electron microscopy images, a first step towards computing electron-microscopy-based biomarkers of hypoglossal nerve degeneration/regeneration. The segmentation task is challenging due to large variations in the morphology and texture of myelinated axons at different levels of degeneration, and to the very limited availability of annotated data. To overcome these difficulties, the proposed pipeline uses a meta-learning-based training strategy and a U-Net-like encoder-decoder deep neural network. Experiments on unseen test data collected at different magnification levels (i.e., trained on 500X and 1200X images, tested on 250X and 2500X images) showed segmentation performance improved by 5% to 7% over a regularly trained, comparable deep learning network.
Deniz Kavzak Ufuktepe, Feng Yang, Yasmin M Kassim, Hang Yu, Richard J Maude, Kannappan Palaniappan, Stefan Jaeger
"Deep Learning-Based Cell Detection and Extraction in Thin Blood Smears for Malaria Diagnosis."
IEEE Applied Imagery Pattern Recognition Workshop (AIPR), 2021, p. 9762109. Published 2021-04-26. DOI: https://doi.org/10.1109/AIPR52630.2021.9762109. PMC PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7613898/pdf/

Abstract: Malaria is a major health threat caused by Plasmodium parasites that infect the red blood cells. Two predominant types of Plasmodium parasites are Plasmodium vivax (P. vivax) and Plasmodium falciparum (P. falciparum). Diagnosis of malaria typically involves visual microscopy examination of blood smears for malaria parasites. This is a tedious, error-prone visual inspection task requiring microscopy expertise, which is often lacking in resource-poor settings. To address these problems, attempts have been made in recent years to automate malaria diagnosis using machine learning approaches. Several challenges need to be met for a machine learning approach to succeed in malaria diagnosis. Microscopy images acquired at different sites often vary in color, contrast, and consistency owing to different smear preparation and staining methods. Moreover, touching and overlapping cells complicate red blood cell detection, which can lead to inaccurate blood cell counts and thus incorrect parasitemia calculations. In this work, we propose a red blood cell detection and extraction framework to enable processing and analysis of single cells for follow-up processes such as counting infected cells or identifying parasite species in thin blood smears. This framework consists of two modules: a cell detection module and a cell extraction module. The cell detection module trains a modified Channel-wise Feature Pyramid Network for Medicine (CFPNet-M) deep learning network that takes the green channel of the image and the color-deconvolution-processed image as inputs and learns a truncated distance transform image of the cell annotations. CFPNet-M is chosen for its low resource requirements, while the distance transform allows more accurate cell counts for dense cells. Once the cells are detected by the network, the cell extraction module extracts single cells from the original image and counts the number of cells. Our preliminary results based on 193 patients (148 P. falciparum-infected and 45 uninfected) show that our framework achieves a cell count accuracy of 92.2%.
Yasmin M Kassim, Olga V Glinskii, Vladislav V Glinsky, Virginia H Huxley, Kannappan Palaniappan
"Patch-Based Semantic Segmentation for Detecting Arterioles and Venules in Epifluorescence Imagery."
IEEE Applied Imagery Pattern Recognition Workshop (AIPR), 2018. Published 2018-10-01. DOI: https://doi.org/10.1109/aipr.2018.8707387

Abstract: Segmentation and quantification of microvasculature structures are the main steps toward studying microvasculature remodeling. The proposed patch-based semantic architecture enables accurate segmentation of challenging epifluorescence microscopy images. Our fast, pixel-based semantic network is trained on random patches from different epifluorescence images to learn to discriminate vessel from non-vessel pixels. The proposed semantic vessel network (SVNet) relies on understanding the morphological structure of the thin vessels within patches, rather than taking the whole image as input, which speeds up training and maintains the clarity of thin structures. Experimental results on epifluorescence microscopy images of dura mater from ovariectomized (OVX; ovary-removed) mice show promising performance on both arterioles and venules. We compared our results with different segmentation methods, including local and global thresholding, matched-filter-based approaches, and related state-of-the-art deep learning networks. Our overall accuracy (>98%) outperforms all of these methods, including our previous work (VNet) [1].
Ziv Yaniv, Jessica Faruque, Sally Howe, Kathel Dunn, David Sharlip, Andrew Bond, Pablo Perillan, Olivier Bodenreider, Michael J Ackerman, Terry S Yoo
"The National Library of Medicine Pill Image Recognition Challenge: An Initial Report."
IEEE Applied Imagery Pattern Recognition Workshop (AIPR), 2016. Published 2016-10-01. DOI: https://doi.org/10.1109/AIPR.2016.8010584

Abstract: In January 2016 the U.S. National Library of Medicine announced a challenge competition calling for the development and discovery of high-quality algorithms and software that rank how well consumer images of prescription pills match reference images of pills in its authoritative RxIMAGE collection. This challenge was motivated by the need for both healthcare personnel and the general public to easily identify unknown prescription pills. Potential benefits of this capability include confirmation of a pill in settings where the documentation and medication have been separated, such as in a disaster or emergency, and confirmation of a pill when the prescribed medication changes from brand to generic, or when the shape and color of the pill change for any other reason. The data for the competition consisted of two types of images: high-quality macro photographs serving as reference images, and consumer-quality photographs of the quality we expect users of a proposed application to acquire. A training dataset consisting of 2000 reference images and 5000 corresponding consumer-quality images acquired from 1000 pills was provided to challenge participants. A second dataset acquired from 1000 pills with similar distributions of shape and color was reserved as a segregated testing set. Challenge submissions were required to produce a ranking of the reference images given a consumer-quality image as input. The winning teams were determined using the mean average precision metric, with the three winners obtaining scores of 0.27, 0.09, and 0.08. In the retrieval results, the correct image was among the top five ranked images 43%, 12%, and 11% of the time, respectively, out of 5000 query/consumer images. This is a promising initial step towards development of an NLM software system and application programming interface facilitating pill identification. The training dataset will continue to be freely available online at: http://pir.nlm.nih.gov/challenge/submission.html.
Yasmin M Kassim, V B Surya Prasath, Olga V Glinskii, Vladislav V Glinsky, Virginia H Huxley, Kannappan Palaniappan
"Confocal Vessel Structure Segmentation with Optimized Feature Bank and Random Forests."
IEEE Applied Imagery Pattern Recognition Workshop (AIPR), 2016. Published 2016-10-01. DOI: https://doi.org/10.1109/AIPR.2016.8010580

Abstract: In this paper, we consider confocal microscopy based vessel segmentation with optimized features and random forest classification. We utilize multi-scale, vessel-specific features tuned to capture curvilinear structures: the Frobenius norm of the Hessian eigenvalues, the Laplacian of Gaussian (LoG), oriented second derivatives, a line detector, and intensity masked with a LoG scale map. With these features we obtain better segmentation results in challenging imaging conditions. We obtain binary segmentations using a random forest classifier trained on physiologist-marked ground truth. Experimental results on mice dura mater confocal microscopy vessel segmentations indicate that we obtain better results compared to global segmentation approaches.