"Retinal Image Segmentation with Small Datasets." Nchongmaje Ndipenoch, A. Miron, Zidong Wang, Yongmin Li. Bioimaging (Bristol. Print), 2023-03-09. DOI: 10.5220/0011779200003414.
Abstract: Many eye diseases, such as Diabetic Macular Edema (DME), Age-related Macular Degeneration (AMD), and Glaucoma, manifest in the retina and can cause irreversible blindness or severely impair central vision. Optical Coherence Tomography (OCT), a 3D scan of the retina carrying rich qualitative information about retinal morphology, can be used to diagnose and monitor changes in retinal anatomy. Many Deep Learning (DL) methods have been successful in developing automated tools to monitor pathological changes in the retina. However, their success depends mainly on large datasets. To address the challenge of very small and limited datasets, we propose a DL architecture termed CoNet (Coherent Network) for joint segmentation of layers and fluids in retinal OCT images on very small datasets (fewer than a hundred training samples). The proposed model was evaluated on the publicly available Duke DME dataset, consisting of 110 B-scans from 10 patients suffering from DME. Experimental results show that the model outperformed both the human experts' annotations and the current state-of-the-art architectures by a clear margin, with a mean Dice score of 88% when trained on 55 images without any data augmentation.
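The Dice score used to evaluate CoNet above is the standard overlap metric for segmentation masks. A minimal sketch (not the authors' implementation; `dice_score` is a hypothetical helper name):

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient between two binary masks:
    Dice = 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement by convention
    return 2.0 * intersection / total
```

The same formula extends directly to 3D volumes, since it only counts voxel memberships.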
"Improving Mitosis Detection Via UNet-based Adversarial Domain Homogenizer." Tirupati Saketh Chandra, S. Nasser, N. Kurian, A. Sethi. Bioimaging (Bristol. Print), 2022-09-15. DOI: 10.48550/arXiv.2209.09193.
Abstract: Effective localization of mitoses is a critical precursory task for deciding tumor prognosis and grade. Automated mitosis detection through deep learning-oriented image analysis often fails on unseen patient data due to inherent domain biases. This paper proposes a domain homogenizer for mitosis detection that attempts to alleviate domain differences in histology images via adversarial reconstruction of the input images. The proposed homogenizer is based on a U-Net architecture and can effectively reduce domain differences commonly seen in histology imaging data. We demonstrate the homogenizer's effectiveness by observing the reduction in domain differences between the preprocessed images. Using this homogenizer, followed by a RetinaNet object detector, we outperform the baselines of the 2021 MIDOG challenge in terms of average precision of the detected mitotic figures.
"EGFR Mutation Prediction of Lung Biopsy Images using Deep Learning." R. Gupta, Shivani Nandgaonkar, N. Kurian, Tripti Bameta, S. Yadav, R. Kaushal, S. Rane, A. Sethi. Bioimaging (Bristol. Print), 2022-08-26. DOI: 10.48550/arXiv.2208.12506.
Abstract: The standard diagnostic procedure for targeted therapies in lung cancer treatment involves histological subtyping and subsequent detection of key driver mutations, such as EGFR. Even though molecular profiling can uncover the driver mutation, the process is often expensive and time-consuming. Deep learning-oriented image analysis offers a more economical alternative for discovering driver mutations directly from whole slide images (WSIs). In this work, we used customized deep learning pipelines with weak supervision to identify the morphological correlates of EGFR mutation from hematoxylin and eosin-stained WSIs, in addition to detecting tumor and histologically subtyping it. We demonstrate the effectiveness of our pipeline through rigorous experiments and ablation studies on two lung cancer datasets: TCGA and a private dataset from India. With our pipeline, we achieved an average area under the curve (AUC) of 0.964 for tumor detection and 0.942 for histological subtyping between adenocarcinoma and squamous cell carcinoma on the TCGA dataset. For EGFR detection, we achieved an average AUC of 0.864 on the TCGA dataset and 0.783 on the dataset from India. Our key learning points are the following. First, there is no particular advantage to using feature extractor layers pre-trained on histology if one is going to fine-tune the feature extractor on the target dataset. Second, selecting patches with high cellularity, presumably capturing tumor regions, is not always helpful, as the signature of a disease class may be present in the tumor-adjacent stroma.
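The AUC values reported above can be understood through the rank interpretation of ROC AUC: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A self-contained sketch via the Mann-Whitney U statistic (illustrative, not the authors' evaluation code):

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC AUC computed as the fraction of (positive, negative) pairs
    where the positive outscores the negative; ties count half."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive ranked above negative
    ties = (pos[:, None] == neg[None, :]).sum()  # equal scores get half credit
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

For example, `roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` evaluates 4 pairs, of which 3 are correctly ordered, giving 0.75.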
"Weakly supervised deep learning-based intracranial hemorrhage localization." Jakub Nemček, Tomáš Vičar, Roman Jakubícek. Bioimaging (Bristol. Print), 2021-05-03. DOI: 10.5220/0010825000003123.
Abstract: Intracranial hemorrhage is a life-threatening condition that requires fast medical intervention. Owing to the time required for data annotation, head CT images are usually available only with slice-level labels. This paper presents a weakly supervised method for precise hemorrhage localization in axial slices using only position-free labels, based on multiple instance learning. An algorithm is introduced that generates hemorrhage likelihood maps and finds the coordinates of the bleeding. A Dice coefficient of 58.08% is achieved on data from a publicly available dataset.
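The final step described in that abstract, reading bleeding coordinates off a likelihood map, can be sketched as a simple peak search. This is a hypothetical illustration of the idea, not the authors' algorithm (function name and threshold are assumptions):

```python
import numpy as np

def localize_from_likelihood_map(likelihood_map, threshold=0.5):
    """Return the (row, col) coordinates of the strongest peak in a
    hemorrhage likelihood map, or None if nothing exceeds the threshold."""
    likelihood_map = np.asarray(likelihood_map, dtype=float)
    if likelihood_map.max() < threshold:
        return None  # slice judged hemorrhage-free
    flat_peak = likelihood_map.argmax()
    return tuple(int(i) for i in np.unravel_index(flat_peak, likelihood_map.shape))
```

In a multiple-instance-learning setting, such a map typically comes from per-patch scores, with the slice-level label driven by the maximum score.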
"Roughness Index and Roughness Distance for Benchmarking Medical Segmentation." V. Rathour, Kashu Yamakazi, T. Hoàng, Ngan T. H. Le. Bioimaging (Bristol. Print), 2021-03-23. DOI: 10.5220/0010335500820093.
Abstract: Medical image segmentation is one of the most challenging tasks in medical image analysis and has been widely developed for many clinical applications. Most existing metrics were first designed for natural images and then extended to medical images. While the object surface plays an important role in medical segmentation and quantitative analysis (e.g., analyzing brain tumor surfaces or measuring gray matter volume), most existing metrics are limited when it comes to analyzing the object surface, especially in characterizing the smoothness or roughness of a given volumetric object or in analyzing topological errors. In this paper, we first analyze the pros and cons of all existing medical image segmentation metrics, especially on volumetric data. We then propose a roughness index and a roughness distance for medical image segmentation analysis and evaluation. Our proposed method addresses two kinds of segmentation errors: (i) topological errors on the boundary/surface and (ii) irregularities on the boundary/surface. The contribution of this work is four-fold: (i) detect irregular spikes/holes on a surface, (ii) propose a roughness index to measure the surface roughness of a given object, (iii) propose a roughness distance to measure the distance between two boundaries/surfaces by utilizing the proposed roughness index, and (iv) suggest an algorithm that removes the irregular spikes/holes to smooth the surface. Our proposed roughness index and roughness distance are built upon the solid-surface roughness parameter that has been successfully developed in civil engineering.
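The civil-engineering roughness parameter mentioned above is in the family of arithmetic mean roughness (Ra): the average absolute deviation of a profile from its mean line. The sketch below shows that textbook parameter only as background for the kind of index the paper builds on; it is not the authors' proposed roughness index:

```python
import numpy as np

def arithmetic_roughness(profile):
    """Arithmetic mean roughness Ra of a 1-D boundary profile:
    Ra = mean |height - mean height|."""
    profile = np.asarray(profile, dtype=float)
    deviations = profile - profile.mean()  # deviations from the mean line
    return float(np.abs(deviations).mean())
```

A perfectly flat boundary gives Ra = 0; spikes and holes on a segmentation boundary raise Ra, which is why such a parameter can discriminate smooth from ragged surfaces.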
"Computer-aided abnormality detection in chest radiographs in a clinical setting via domain-adaptation." A. Dubey, M. T. Young, Christopher Stanley, D. Lunga, Jacob D. Hinkle. Bioimaging (Bristol. Print), 2020-12-19. DOI: 10.5220/0010302500650072.
Abstract: Deep learning (DL) models are being deployed at medical centers to aid radiologists in diagnosing lung conditions from chest radiographs. Such models are often trained on a large volume of publicly available labeled radiographs. These pre-trained DL models generalize poorly in clinical settings because of changes in data distribution between publicly available and privately held radiographs. In chest radiographs, this heterogeneity arises from the diverse X-ray equipment and configurations used to generate the images. In the machine learning community, the challenge posed by heterogeneity in the data-generation source is known as domain shift, a mode shift in the generative model. In this work, we introduce a domain-shift detection and removal method to overcome this problem. Our experimental results show the proposed method's effectiveness in deploying a pre-trained DL model for abnormality detection in chest radiographs in a clinical setting.
"Terahertz Reflection Imaging of Paraffin-embedded Human Breast Cancer Samples: Some First Results." M. Boutaayamou, Delphine Cerica, J. Verly. Bioimaging (Bristol. Print), 2020-04-18. DOI: 10.5220/0009163302000203.
Abstract: Several studies have shown that terahertz (THz) pulsed imaging has the potential to identify the margins of human breast cancer in paraffin-embedded tissue samples. Before this technique can be used to assess cancer margins during breast-conserving surgery, it is important to study the validity and reproducibility of previously published results. In the present paper, we describe some first results in the characterization of paraffin-embedded human breast cancer tissue through THz reflection imaging, based on measurements from a newly acquired THz time-domain spectrometer. First, we measured the THz reflection impulse response of these samples using this spectrometer. Second, for one selected breast cancer tissue sample, we processed the recorded data to generate preliminary images of (1) several maps of parameters extracted in the time and frequency domains, and (2) a map of the absorbance.
"Water-sensitive Gelatin Phantoms for Skin Water Content Imaging." Gennadi Saiko, A. Douplik. Bioimaging (Bristol. Print), 2020-04-18. DOI: 10.5220/0008919501300134.
Abstract: Oxygen supply to tissues can be seriously impaired during wound healing. Edema (accumulation of fluid in the interstitial space) can increase the distance between capillaries, thus decreasing oxygen supply to cells. There is no standard clinical tool for quantifying edema, and early (preferably preclinical) edema detection is a great clinical need. Multispectral imaging can be a helpful clinical tool for characterizing water content in the skin; however, developing and validating this technology requires a reliable water-sensitive preclinical model. The scope of this work is to develop a water-responsive skin model and assess the feasibility of extracting water content using multispectral imaging. Methods: A phantom fabrication protocol was developed. The phantoms are based on gelatin crosslinked with glutaraldehyde, with TiO2 nanoparticles added to mimic the optical properties of skin. To emulate various water contents, phantoms were dipped in water for various durations. The phantoms were imaged using the Multi-Spectral Imaging Device (MSID) (Swift Medical Inc, Toronto), a multispectral imaging system for visualizing tissue chromophores in surface tissues. It uses a 12-bit scientific-grade NIR-enhanced monochrome camera (Basler, Germany) and a ten-wavelength light source (600-1000 nm range) to visualize the distribution of oxy- and deoxyhemoglobin, methemoglobin, water, and melanin. The imaging distance is 30 cm; the field of view is 7x7 cm. Results: Initial results show that the developed model mimics the optical scattering properties of skin. MSID was able to extract water content using both the full set (ten wavelengths) and a subset (three wavelengths) of channels. Conclusions: A new water-responsive model for skin moisture imaging has been developed. Initial experiments with multispectral imaging of these phantoms show the feasibility of tissue water content imaging with Si-based cameras.
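Recovering chromophore content (water, hemoglobins, melanin) from multi-wavelength measurements is commonly posed as linear spectral unmixing under a modified Beer-Lambert model: absorbance at each wavelength is a weighted sum of chromophore contributions. The sketch below shows that generic least-squares formulation, not the MSID vendor algorithm (all names are assumptions):

```python
import numpy as np

def unmix_chromophores(absorbance, extinction):
    """Solve absorbance ~= extinction @ concentrations in the least-squares
    sense. `extinction` has one row per wavelength and one column per
    chromophore; path length is folded into the concentrations."""
    concentrations, *_ = np.linalg.lstsq(extinction, absorbance, rcond=None)
    return concentrations
```

With more wavelengths than chromophores (e.g., ten channels for five chromophores), the system is overdetermined, which is what makes a reduced three-wavelength subset for water alone plausible.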
"Glioma Grade Classification via Omics Imaging." L. Maddalena, Ilaria Granata, Ichcha Manipur, M. Manzo, M. Guarracino. Bioimaging (Bristol. Print), 2020-04-18. DOI: 10.5220/0009167700820092.
Abstract: Omics imaging is an emerging interdisciplinary field concerned with integrating data collected from biomedical images and omics experiments. By bringing together information from different sources, it makes it possible to reveal hidden genotype-phenotype relationships, with the aim of better understanding the onset and progression of many diseases and identifying new diagnostic and prognostic biomarkers. In this work, we present an omics imaging approach to classifying different grades of gliomas, which are primary brain tumors arising from glial cells; grading is of critical clinical importance for decisions regarding initial and subsequent treatment strategies. Imaging data come from analyses available in The Cancer Imaging Archive, while omics attributes are extracted by integrating metabolic models with transcriptomic data available from the Genomic Data Commons portal. We investigate the results of feature selection for the two types of data separately, as well as for the integrated data, providing hints on the most distinctive features that can be exploited as biomarkers for glioma grading. Moreover, we show how the integrated data can provide additional clinical information compared to either type of data alone, leading to higher performance. We believe our results can be valuable for clinical tests in practice.
"Food Recognition: Can Deep Learning or Bag-of-Words Match Humans?" P. Furtado. Bioimaging (Bristol. Print), 2020-04-18. DOI: 10.5220/0008893301020108.
Abstract: Automated smartphone-based food recognition is a useful basis for applications targeted at dietary assessment, and dish recognition is a necessary step in that process. One possible approach is deep learning-based recognition; another is bag-of-words-based classification. Deep learning has increasingly become the preferred approach for this and other image classification tasks. Additionally, if humans are better at recognizing the dish, the automated approach is useless: it is less error-prone for the user to identify the dish than to capture a photo. We compare Deep Learning (DL), Bag-of-Words (BoW), and Humans (H). The best deep learner beats humans on few food categories, but loses when it has to learn many more food categories, which is expected in real contexts. We describe the approaches, analyze the results, draw conclusions, and design further work to evaluate and improve the approaches.