{"title":"Learning Pre- and Post-contrast Representation for Breast Cancer Segmentation in DCE-MRI","authors":"Hong Wu, Yingwen Huo, Yupeng Pan, Zeyan Xu, Rian Huang, Yu Xie, Chu Han, Zaiyi Liu, Yi Wang","doi":"10.1109/CBMS55023.2022.00070","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00070","url":null,"abstract":"Breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a considerable role in high-risk breast cancer diagnosis and image-based prognostic prediction. Accurate and robust segmentation of cancerous regions is in high clinical demand. However, automatic segmentation remains challenging due to large variations in cancer shape and size and the class-imbalance issue. To tackle these problems, we propose a two-stage framework that leverages both pre- and post-contrast images for the segmentation of breast cancer. Specifically, we first employ a breast segmentation network, which generates the breast region of interest (ROI) and thereby removes confounding information from the thorax region in DCE-MRI. Based on the generated breast ROI, we then propose an attention network that learns both pre- and post-contrast representations to distinguish cancerous regions from normal breast tissue. The efficacy of our framework is evaluated on a collected dataset of 261 patients with biopsy-proven breast cancers. Experimental results demonstrate that our method attains a Dice coefficient of 91.11% for breast cancer segmentation. The proposed framework provides an effective cancer segmentation solution for breast examination using DCE-MRI. The code is publicly available at https://github.com/2313595986/BreastCancerMRI.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125859457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
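The Dice coefficient quoted above is the standard overlap metric for segmentation; a minimal NumPy sketch (generic, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Two 4x4 masks of 8 pixels each, overlapping in 4 pixels -> Dice = 0.5
a = np.zeros((4, 4), dtype=int); a[:2, :] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, :] = 1
print(round(dice_coefficient(a, b), 2))  # 0.5
```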
{"title":"A novel CNN model with dense connectivity and attention mechanism for arrhythmia classification","authors":"Qin Zhan, Peilin Li, Yongle Wu, Jingchun Huang, Xunde Dong","doi":"10.1109/CBMS55023.2022.00016","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00016","url":null,"abstract":"Cardiac arrhythmia is a common cardiovascular disease that can cause sudden death in severe cases. Electrocardiography (ECG) is the most well-known and widely applied method for heart disease detection. Computer-aided ECG diagnosis can help improve physician efficiency and reduce the rate of misdiagnosis. In this paper, we propose a method for arrhythmia classification based on the dense convolutional network (DenseNet) and efficient channel attention (ECA). Evaluation experiments were performed using ECG records from the MIT-BIH database. Accuracy, sensitivity, specificity, and F1 values of 99.69%, 97.55%, 99.81%, and 97.72%, respectively, were achieved for six-class heartbeat classification. The experimental results demonstrate the validity and feasibility of the method, which can be used for ECG screening.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122536642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
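For context, the four figures quoted above can each be derived from one-vs-rest confusion-matrix counts; a generic sketch with made-up counts, not tied to the paper's evaluation code:

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard classification metrics from one-vs-rest confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)               # a.k.a. recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "f1": f1}

# Hypothetical counts for one heartbeat class
m = binary_metrics(tp=90, fp=10, tn=95, fn=5)
print({k: round(v, 3) for k, v in m.items()})
```

In a multi-class setting like the six heartbeat types, such per-class values are typically averaged to produce a single reported number.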
{"title":"Optimizing Nozzle Travel Time in Proton Therapy","authors":"M. Spezialetti, Renata Di Filippo, Ramon Gimenez De Lorenzo, G. Gravina, G. Placidi, Guido Proietti, F. Rossi, S. Smriglio, J. M. R. Tavares, F. Vittorini, F. Mignosi","doi":"10.1109/CBMS55023.2022.00085","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00085","url":null,"abstract":"Proton therapy is a cancer therapy that is more expensive than classical radiotherapy but is considered the gold standard in several situations. Since there is also a limited number of delivery facilities for this technique, it is fundamental to increase the number of patients treated over time. The objective of this work is to offer insight into optimizing the part of a treatment plan's delivery time that relates to the movements of the system. We denote it the Nozzle Travel Time Problem (NTTP), in analogy with the Leaf Travel Time Problem (LTTP) in classical radiotherapy. In particular, this work: (i) describes a mathematical model for the delivery system and formalizes the optimization problem of finding the optimal sequence of movements of the system (nozzle and bed) that covers the prescribed irradiation directions; (ii) provides an optimization pipeline that solves the problem for instances with a number of irradiation directions much greater than that usually employed in clinical practice; (iii) reports preliminary results on the effects of employing two different resolution strategies within the aforementioned pipeline, which rely on an exact Traveling Salesman Problem (TSP) solver, Concorde, and an efficient Vehicle Routing Problem (VRP) heuristic, VROOM.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123178908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
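To illustrate the TSP core of the NTTP, a brute-force toy solver standing in for an exact solver such as Concorde (the distance values are made up; real instances need far more scalable methods):

```python
from itertools import permutations

def shortest_tour(dist):
    """Exact TSP by enumeration: fix node 0 as start/end, try every order of the rest."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Four irradiation directions with symmetric, made-up travel costs
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 8],
     [10, 4, 8, 0]]
length, tour = shortest_tour(d)
print(length, tour)  # optimal length is 23 for this matrix
```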
{"title":"TSEUnet: A 3D neural network with fused Transformer and SE-Attention for brain tumor segmentation","authors":"Yan-Min Chen, Jiajun Wang","doi":"10.1109/CBMS55023.2022.00030","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00030","url":null,"abstract":"Brain tumor segmentation of 3D magnetic resonance (MR) images is of great significance for brain diagnosis. Although the U-Net and its variants have achieved outstanding performance in medical image segmentation, challenges remain, largely because CNN-based models are powerful in extracting local features but weak at capturing global representations. To tackle this problem, we propose a 3D network structure based on the nnUNet, named TSEUnet. In this network, the transformer module is introduced into the encoder in a parallel interactive manner so that both local features and global contexts can be efficiently extracted. Moreover, SE-Attention is incorporated into the decoder to enhance meaningful information and improve the segmentation accuracy for the brain tumor area. In addition, we propose a post-processing method to further improve brain tumor segmentation. Experiments on the BRATS 2018 dataset show that our proposed TSEUnet achieves better performance on brain tumor segmentation compared with state-of-the-art methods.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"324 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132529343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contrastive learning-based Adenoid Hypertrophy Grading Network Using Nasoendoscopic Image","authors":"Siting Zheng, Xuechen Li, Mingmin Bi, Yuxuan Wang, Haiyan Liu, Xia Feng, Yunping Fan, Linlin Shen","doi":"10.1109/CBMS55023.2022.00074","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00074","url":null,"abstract":"Adenoid hypertrophy is a common otolaryngological disease in children. Otolaryngologists usually use nasoendoscopy for adenoid hypertrophy screening, but grading the images is tedious and time-consuming. So far, artificial intelligence technology has not been applied to the grading of nasoendoscopic adenoid images. In this work, we first propose a novel multi-scale grading network, MIB-ANet, for adenoid hypertrophy classification. We further propose a contrastive learning-based network to alleviate the overfitting caused by the lack of nasoendoscopic adenoid images with high-quality annotations. The experimental results show that MIB-ANet achieves the best grading performance among four classic CNNs, i.e., AlexNet, VGG16, ResNet50, and GoogleNet. Taking the $F_{1}$ score as an example, MIB-ANet achieves a 1.38% higher $F_{1}$ score than the best baseline CNN, AlexNet. Because the contrastive pre-training strategy can exploit unannotated data, pre-training with the SimCLR pretext task consistently improves the performance of MIB-ANet across different ratios of labeled training data: the pre-trained MIB-ANet achieves 4.41%, 2.64%, 3.10%, and 1.71% higher $F_{1}$ scores when 25%, 50%, 75%, and 100% of the training data are labeled, respectively.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132361442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
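The SimCLR pretext task mentioned above trains with the NT-Xent contrastive loss over paired augmented views; a simplified NumPy version for illustration (the temperature 0.5 and embedding sizes are made-up values, and real training would use a deep-learning framework):

```python
import numpy as np

def nt_xent(z1: np.ndarray, z2: np.ndarray, tau: float = 0.5) -> float:
    """Simplified NT-Xent loss: z1[i] and z2[i] are embeddings of two views of image i."""
    z = np.concatenate([z1, z2])                      # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity via dot product
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                    # a view is never its own negative
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # index of each positive
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Identical views give a lower loss than unrelated ones
print(nt_xent(z, z) < nt_xent(z, rng.normal(size=(8, 16))))  # True
```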
{"title":"Breast Lesions Segmentation using Dual-level UNet (DL-UNet)","authors":"Yanjiao Zhao, Zhihui Lai, Linlin Shen, Heng Kong","doi":"10.1109/CBMS55023.2022.00067","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00067","url":null,"abstract":"Breast disease is one of the primary diseases endangering women's health. Accurate segmentation of breast lesions can help doctors diagnose breast diseases. However, breast lesions vary in size and morphology, and the intensity of breast tissue is uneven, making it challenging to segment the lesion area accurately. In this paper, we propose a Dual-scale Feature Fusion (DSFF) module and an Edgeloss to segment breast lesions. The DSFF module integrates features at two scales and provides an effective skip-connection scheme to reduce false-positive regions. To address unclear segmentation boundaries, we design the Edgeloss to provide additional supervision on the boundary region and obtain a finer segmentation boundary. The experimental results show that the proposed DL-UNet, with the DSFF module and the new Edgeloss, outperforms several classic networks.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"633 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131651738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ethically Informed Software Process for Smart Health Home","authors":"Xiang Zhang, M. Pike, Nasser Mustafa, V. Brusic","doi":"10.1109/CBMS55023.2022.00040","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00040","url":null,"abstract":"Smart health homes (SHHs) integrate wearable sensors and various interconnected devices using Internet of Things (IoT) technologies. SHHs combine IoT, data communication, and health-related applications to deliver healthcare services at home. The existing regulations and standards for SHH design are insufficient for home health care: technical and device standards are available for guiding SHH design and implementation, but ethical standards are lacking. We identified six ethical requirements important for SHHs: safety/trust, privacy/data security, vulnerable groups, individual autonomy, transparency/explainability/fairness, and social responsibility/morality. We identified a set of questions useful in the software engineering (SE) process for ethically informed software in SHH design, mapped them to the steps of the software process, and mapped related guidelines from relevant professional codes of conduct. These questions can guide an ethically informed software process for SHHs.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123512633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classification of cardiac cohorts based on morphological and hemodynamic features derived from 4D PC-MRI data","authors":"Uli Niemann, Atrayee Neog, B. Behrendt, K. Lawonn, M. Gutberlet, M. Spiliopoulou, B. Preim, M. Meuschke","doi":"10.1109/CBMS55023.2022.00081","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00081","url":null,"abstract":"An accurate assessment of the cardiovascular system and prediction of cardiovascular diseases (CVDs) are crucial. Cardiac blood flow data provide insights into patient-specific hemodynamics. However, there is a lack of machine learning approaches for feature-based classification of heart-healthy people and patients with CVDs. In this paper, we investigate the potential of morphological and hemodynamic features extracted from measured blood flow data in the aorta to classify heart-healthy volunteers (HHV) and patients with bicuspid aortic valve (BAV). Furthermore, we determine features that distinguish male vs. female patients and elderly HHV vs. BAV patients. We propose a data analysis pipeline for cardiac status classification, encompassing feature selection, model training, and hyperparameter tuning. Our results suggest substantial differences in aortic flow features between HHV and BAV patients. The excellent performance of the classifiers separating elderly HHV from BAV patients indicates that aging is not associated with pathological morphology and hemodynamics. Our models represent a first step towards automated diagnosis of CVDs using interpretable machine learning models.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116164233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data Augmentation Methods For Object Detection and Segmentation In Ultrasound Scans: An Empirical Comparative Study","authors":"Sachintha R. Brandigampala, Abdullah F. Al-Battal, Truong Q. Nguyen","doi":"10.1109/CBMS55023.2022.00057","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00057","url":null,"abstract":"In ultrasound imaging, sonographers are tasked with analyzing scans for diagnostic purposes; a challenging task, especially for novice sonographers. Deep learning methods have shown great potential in their ability to infer semantics and key information from scans to assist with these tasks. However, deep learning methods require large training sets to accomplish tasks such as segmentation and object detection. Generating these large datasets is a significant challenge in the medical domain due to the high cost of acquisition and annotation. Therefore, data augmentation is used to increase the size of training datasets and create the variability deep learning models need to generalize. These augmentation methods try to mimic differences among scans that result from noise, tissue movement, acquisition settings, and other factors. In this paper, we empirically analyze and compare general augmentation methods that perform color, rigid, and non-rigid geometric transformations, assessing their ability to improve the performance of three segmentation architectures on three different ultrasound datasets. We observe that non-rigid geometric transformations produce the best performance improvement.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117325041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
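As a concrete example of the rigid and intensity transforms compared above, a NumPy-only sketch (the non-rigid elastic deformations the study favours need an interpolation library, so this covers only the simpler families; all parameter values are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray) -> np.ndarray:
    """Random flip, 90-degree rotation, and additive noise mimicking acquisition variability."""
    if rng.random() < 0.5:
        img = np.fliplr(img)
    img = np.rot90(img, k=int(rng.integers(0, 4)))
    img = img + rng.normal(0.0, 0.05, img.shape)  # made-up noise level
    return np.clip(img, 0.0, 1.0)

scan = rng.random((64, 64))                       # stand-in for an ultrasound frame
batch = np.stack([augment(scan) for _ in range(8)])
print(batch.shape)  # (8, 64, 64)
```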
{"title":"A YOLO-based Object Simplification Approach for Visual Prostheses","authors":"Reham H. Elnabawy, Slim Abdennadher, O. Hellwich, S. Eldawlatly","doi":"10.1109/CBMS55023.2022.00039","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00039","url":null,"abstract":"Visual prostheses have been introduced to partially restore vision to the blind via visual pathway stimulation. Despite their success, some challenges have been reported by implanted patients. One of those challenges is the difficulty of object recognition due to the low resolution of the images perceived through these devices. In this paper, a deep learning-based approach combined with image pre-processing is proposed to allow visual prosthesis users to recognize objects in a given scene. The approach simplifies the objects in the scene by displaying them in clip-art form to enhance object recognition. These clip-art images are generated by first identifying the objects in the scene using the You Only Look Once (YOLO) deep neural network; the clip art corresponding to each identified object is then retrieved via Google Images. Three experiments were conducted to measure the success of the proposed approach using simulated prosthetic vision. Our results reveal a remarkable decrease in recognition time and increases in recognition accuracy and confidence level when using the clip-art representation as opposed to the actual images of the objects. These results demonstrate the utility of object simplification in enhancing the perception of images in prosthetic vision.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116922419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}