{"title":"Curriculum Deep Reinforcement Learning with Different Exploration Strategies: A Feasibility Study on Cardiac Landmark Detection","authors":"P. Astudillo, P. Mortier, M. Beule, F. Wyffels","doi":"10.5220/0008948900370045","DOIUrl":"https://doi.org/10.5220/0008948900370045","url":null,"abstract":"Transcatheter aortic valve implantation (TAVI) is associated with conduction abnormalities and the mechanical interaction between the prosthesis and the atrioventricular (AV) conduction path cause these life-threatening arrhythmias. Pre-operative assessment of the location of the AV conduction path can help to understand the risk of post-TAVI conduction abnormalities. As the AV conduction path is not visible on cardiac CT, the inferior border of the membranous septum can be used as an anatomical landmark. Detecting this border automatically, accurately and efficiently would save operator time and thus benefit pre-operative planning. This preliminary study was performed to identify the feasibility of 3D landmark detection in cardiac CT images with curriculum deep Q-learning. In this study, curriculum learning was used to gradually teach an artificial agent to detect this anatomical landmark from cardiac CT. This agent was equipped with a small field of view and burdened with a large ac tion-space. Moreover, we introduced two novel action-selection strategies: α-decay and action-dropout. We compared these two strategies to the already established e-decay strategy and observed that α-decay yielded the most accurate results. Limited computational resources were used to ensure reproducibility. In order to maximize the amount of patient data, the method was cross-validated with k-folding for all three action-selection strategies. An inter-operator variability study was conducted to assess the accuracy of the method","PeriodicalId":162397,"journal":{"name":"Bioimaging (Bristol. Print)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132744197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Object Tracking using CSRT Tracker and RCNN","authors":"Khurshedjon Farkhodov, Suk-Hwan Lee, Ki-Ryong Kwon","doi":"10.5220/0009183802090212","DOIUrl":"https://doi.org/10.5220/0009183802090212","url":null,"abstract":"Nowadays, Object tracking is one of the trendy and under investigation topic of Computer Vision that challenges with several issues that should be considered while creating tracking systems, such as, visual appearance, occlusions, camera motion, and so on. In several tracking algorithms Convolutional Neural Network (CNN) has been applied to take advantage of its powerfulness in feature extraction that convolutional layers can characterize the object from different perspectives and treat tracking process from misclassification. To overcome these problems, we integrated the Region based CNN (Faster RCNN) pre-trained object detection model that the OpenCV based CSRT (Channel and Spatial Reliability Tracking) tracker has a high chance to identifying objects features, classes and locations as well. Basically, CSRT tracker is C++ implementation of the CSR-DCF (Channel and Spatial Reliability of Discriminative Correlation Filter) tracking algorithm in OpenCV library. Experimental results demonstrated that CSRT tracker presents better tracking outcomes with integration of object detection model, rather than using tracking algorithm or filter itself.","PeriodicalId":162397,"journal":{"name":"Bioimaging (Bristol. Print)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132641998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial Neural Networks for Quantitative Microwave Breast Imaging","authors":"M. Ambrosanio, S. Franceschini, F. Baselice, V. Pascazio","doi":"10.5220/0009172802040208","DOIUrl":"https://doi.org/10.5220/0009172802040208","url":null,"abstract":"This paper is focused on the use of artificial neural networks (ANNs) for biomedical microwave imaging of breast tissues in the framework of advanced breast cancer imaging techniques. The proposed scheme processes the scattered field collected at receivers locations of a multiview-multistatic system and aims at providing an estimate of the morphological and dielectric features of the breast tissues, which represents a strongly nonlinear scenario with several challenging aspects. In order to train the network, a simulated data set has been created by implementing the forward problem and an automatic randomly-shaped breast profile generator based on the statistical distribution of complex permittivity of breast biological tissues was developed. Some numerical tests were carried out to evaluate the performance of the proposed method and, in conclusion, we found that the use of ANNs for quantitative biomedical imaging purposes seems to be very promising.","PeriodicalId":162397,"journal":{"name":"Bioimaging (Bristol. Print)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125702319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection and Categorisation of Multilevel High-sensitivity Cardiovascular Biomarkers from Lateral Flow Immunoassay Images via Recurrent Neural Networks","authors":"Min Jing, Donal Mc Laughlin, David Steele, Sara Mc Namee, Brian Mac Namee, P. Cullen, D. Finlay, J. Mclaughlin","doi":"10.5220/0009117901770183","DOIUrl":"https://doi.org/10.5220/0009117901770183","url":null,"abstract":": Lateral Flow Immunoassays (LFA) have the potential to provide low cost, rapid and highly efficacious Point-of-Care (PoC) diagnostic testing in resource limited settings. Traditional LFA testing is semi-quantitative based on the calibration curve, which faces challenges in the detection of multilevel high-sensitivity biomarkers due its low sensitivity. This paper proposes a novel framework in which the LFA images are acquired from a designed CMOS reader system under controlled lighting. Unlike most existing approaches based on image intensity, the proposed system does not require detection of region of interest (ROI), instead each row of the LFA image was considered as time series signals. The Long Short-Term Memory (LSTM) network was deployed to classify the LFA data obtained from cardiovascular biomarker, C-Reactive Protein (CRP), at eight concentration levels (within the range 0-5mg/L) that are aligned with clinically actionable categories. The performance under different arrangements for input dimension and parameters were evaluated. The preliminary results show that the proposed LSTM outperforms other popular classification methods, which demonstrate the capability of the proposed system to detect high-sensitivity CRP and suggests the potential of applications for early risk assessment of cardiovascular diseases (CVD).","PeriodicalId":162397,"journal":{"name":"Bioimaging (Bristol. Print)","volume":"274 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134172556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated 3D Labelling of Fibroblasts and Endothelial Cells in SEM-Imaged Placenta using Deep Learning","authors":"Benita Scout Mackay, Sophie Blundell, O. Etter, Yunhui Xie, M. McDonnel, M. Praeger, J. Grant-Jacob, R. Eason, Rohan M. Lewis, B. Mills","doi":"10.5220/0008949700460053","DOIUrl":"https://doi.org/10.5220/0008949700460053","url":null,"abstract":"Analysis of fibroblasts within placenta is necessary for research into placental growth-factors, which are linked to lifelong health and chronic disease risk. 2D analysis of fibroblasts can be challenging due to the variation and complexity of their structure. 3D imaging can provide important visualisation, but the images produced are extremely labour intensive to construct because of the extensive manual processing required. Machine learning can be used to automate the labelling process for faster 3D analysis. Here, a deep neural network is trained to label a fibroblast from serial block face scanning electron microscopy (SBFSEM) placental imaging.","PeriodicalId":162397,"journal":{"name":"Bioimaging (Bristol. Print)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122338321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D Image Deblur using Point Spread Function Modelling for Optical Projection Tomography","authors":"Xiaoqin Tang, G. Lamers, F. Verbeek","doi":"10.5220/0007237700670075","DOIUrl":"https://doi.org/10.5220/0007237700670075","url":null,"abstract":"Optical projection tomography (OPT) is widely used to produce 3D image for specimens of size between 1mm and 10mm. However, to image large specimens a large depth of field is needed, which normally results in blur in imaging process, i.e. compromises the image quality or resolution. Yet, it is important to obtain the best possible quality of 3D image from OPT, thus deblurring the image is of significance. In this paper we first model the point spread function along optical axis which varies at different depths in OPT imaging system. The magnification is taken into account in the point spread function modelling. Afterward, deconvolution in the coronal plane based on the modelled point spread function is implemented for the image deblur. Experiments with the proposed approach based on 25 3D images including 4 categories of samples, indicate the effectiveness of quality improvement assessed by image blur measures in both spatial and","PeriodicalId":162397,"journal":{"name":"Bioimaging (Bristol. Print)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114470918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploiting bilateral symmetry in brain lesion segmentation","authors":"Kevin Raina, U. Yahorau, T. Schmah","doi":"10.5220/0008912101160122","DOIUrl":"https://doi.org/10.5220/0008912101160122","url":null,"abstract":"Brain lesions, including stroke and tumours, have a high degree of variability in terms of location, size, intensity and form, making automatic segmentation difficult. We propose an improvement to existing segmentation methods by exploiting the bilateral quasi-symmetry of healthy brains, which breaks down when lesions are present. Specifically, we use nonlinear registration of a neuroimage to a reflected version of itself (\"reflective registration\") to determine for each voxel its homologous (corresponding) voxel in the other hemisphere. A patch around the homologous voxel is added as a set of new features to the segmentation algorithm. To evaluate this method, we implemented two different CNN-based multimodal MRI stroke lesion segmentation algorithms, and then augmented them by adding extra symmetry features using the reflective registration method described above. For each architecture, we compared the performance with and without symmetry augmentation, on the SISS Training dataset of the Ischemic Stroke Lesion Segmentation Challenge (ISLES) 2015 challenge. Using affine reflective registration improves performance over baseline, but nonlinear reflective registration gives significantly better results: an improvement in Dice coefficient of 13 percentage points over baseline for one architecture and 9 points for the other. We argue for the broad applicability of adding symmetric features to existing segmentation algorithms, specifically using nonlinear, template-free methods.","PeriodicalId":162397,"journal":{"name":"Bioimaging (Bristol. Print)","volume":"260 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133653487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Radiotherapy Support Tools, the Brazilian Project: SIPRAD","authors":"D. F. Carvalho, José Antonio Camacho Guerrero, Luis Javier Maldonado Zapata, A. Uscamayta, H. Vale, L. F. Borges, A. C. Bruno, H. Oliveira","doi":"10.5220/0007482901370143","DOIUrl":"https://doi.org/10.5220/0007482901370143","url":null,"abstract":"The radiotherapy planning process (teletherapy) is initially performed by the acquisition of Computed Tomography images of the areas of interest to guide a series of health professionals in the work of vector design of regions of interest for protection (risk organs) and radiation (tumors). All these steps are performed using computational tools that extrapolate measurements and scales in the treatment plan. The efficiency of the treatment depends on the recreation of the patient's positioning on the linear accelerator stretcher with the previously acquired tomography images. For this, in this article, we present three modules of the SIPRAD (Information Systems for Radiation Therapy Planning) project. With the name of Radiotherapy Portal it is able to perform a fusion of planar images of the target region, made on the day of treatment, with the digital recreation (DDR - Digital Reconstructed Radiographs) of this radiograph generated from the Tomography of treatment planning, aiming to improve the reproducibility of the positioning that the radiation dose delivered during all the radiotherapy treatment. The second module named by LYRIA PACS RT provides a client/server architecture for storing, distributing and displaying images from any systems using the DICOM RT Struct, Image, Plan and Dose modes. The third module called Contouring is responsible for the training of new radiotherapists.","PeriodicalId":162397,"journal":{"name":"Bioimaging (Bristol. Print)","volume":"48 21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115341178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Analysis of CNN Settings for New Cancer Whole-slide Histological Images Segmentation: The Case of Small Training Sets","authors":"Sonia Mejbri, C. Franchet, I. Reshma, J. Mothe, P. Brousset, Emmanuel Faure","doi":"10.5220/0007406601200128","DOIUrl":"https://doi.org/10.5220/0007406601200128","url":null,"abstract":"Accurate analysis and interpretation of stained biopsy images is a crucial step in the cancer diagnostic routine which is mainly done manually by expert pathologists. The recent progress of digital pathology gives us a challenging opportunity to automatically process these complex image data in order to retrieve essential information and to study tissue elements and structures. This paper addresses the task of tissue-level segmentation in intermediate resolution of histopathological breast cancer images. Firstly, we present a new medical dataset we developed which is composed of hematoxylin and eosin stained whole-slide images wherein all 7 tissues were labeled by hand and validated by expert pathologist. Then, with this unique dataset, we proposed an automatic end-to-end framework using deep neural network for tissue-level segmentation. Moreover, we provide a deep analysis of the framework settings that can be used in similar task by the scientific community.","PeriodicalId":162397,"journal":{"name":"Bioimaging (Bristol. Print)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134580745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shape Recognition in High-level Image Representations: Data Preparation and Framework of Recognition Method","authors":"J. Lazarek, P. Szczepaniak","doi":"10.5220/0007579000570064","DOIUrl":"https://doi.org/10.5220/0007579000570064","url":null,"abstract":"The automatic shape recognition is an important task in various image processing applications, including medical problems. Choosing the right image representation is key to the recognition process. In the paper, we focused on high-level image representation (using line segments), thanks to which the amount of data necessary for processing in subsequent stages is significantly reduced. We present the framework of recognition method with the use of graph grammars.","PeriodicalId":162397,"journal":{"name":"Bioimaging (Bristol. Print)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125920857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}