Biocybernetics and Biomedical Engineering: Latest Articles

Corrigendum to “Multi-stage fully convolutional network for precise prostate segmentation in ultrasound images” [Biocybern. Biomed. Eng. 43(3) (2023) 586–602]
IF 6.4 · CAS Zone 2 · Medicine
Biocybernetics and Biomedical Engineering Pub Date : 2023-10-01 DOI: 10.1016/j.bbe.2023.10.003
Yujie Feng , Chukwuemeka Clinton Atabansi , Jing Nie , Haijun Liu , Hang Zhou , Huai Zhao , Ruixia Hong , Fang Li , Xichuan Zhou
Vol. 43, Issue 4, Page 776 · Open access
Citations: 0
Automated detection of abnormal respiratory sound from electronic stethoscope and mobile phone using MobileNetV2
IF 6.4 · CAS Zone 2 · Medicine
Biocybernetics and Biomedical Engineering Pub Date : 2023-10-01 DOI: 10.1016/j.bbe.2023.11.001
Ximing Liao , Yin Wu , Nana Jiang , Jiaxing Sun , Wujian Xu , Shaoyong Gao , Jun Wang , Ting Li , Kun Wang , Qiang Li
Auscultation, a traditional clinical examination method that uses a stethoscope to quickly assess airway abnormalities, remains valuable for its real-time, non-invasive, and easy-to-perform nature. Recent advances in computerized respiratory sound analysis (CRSA) provide a quantifiable approach for recording, editing, and comparing respiratory sounds, and enable the training of artificial intelligence models to fully exploit the potential of auscultation. However, existing sound analysis models often require complex computations, leading to long processing times and high compute and memory requirements. Moreover, available databases are limited in diversity and scope, relying mainly on small-sample datasets collected primarily from Caucasian subjects, which limits reproducibility and robustness. To overcome these limitations, we developed a new Chinese adult respiratory sound database, LD-DF RSdb, using an electronic stethoscope and a mobile phone. From 145 enrolled participants, 9,584 high-quality recordings were collected, containing 6,435 normal sounds, 2,782 crackles, 208 wheezes, and 159 combined sounds. We then used a lightweight neural network architecture, MobileNetV2, to automatically categorize the four types of respiratory sounds, achieving an appreciable overall AUC of 0.8923. This study demonstrates the feasibility and potential of using mobile phones, electronic stethoscopes, and MobileNetV2 in CRSA. The proposed method offers a convenient and promising approach to enhance overall respiratory disease management and may help address healthcare resource disparities.
Vol. 43, Issue 4, Pages 763–775 · Open access
Citations: 0
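The reported overall AUC of 0.8923 is a ranking metric; for a four-class task it is typically computed one-vs-rest and macro-averaged. A minimal, framework-free sketch of that computation via the Mann-Whitney pairwise formulation (a generic illustration, not the authors' evaluation code):

```python
def binary_auc(scores, labels):
    # AUC as the fraction of (positive, negative) pairs ranked correctly,
    # ties counted as half a win (Mann-Whitney U formulation)
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return float("nan")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_ovr_auc(prob_rows, labels, n_classes):
    # one-vs-rest AUC per class, then an unweighted (macro) average
    aucs = []
    for c in range(n_classes):
        scores = [row[c] for row in prob_rows]
        binary = [1 if y == c else 0 for y in labels]
        aucs.append(binary_auc(scores, binary))
    return sum(aucs) / len(aucs)
```

For a perfectly separating classifier the macro AUC is exactly 1.0, which makes the formulation easy to sanity-check.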
A dual-stage transformer and MLP-based network for breast ultrasound image segmentation
IF 6.4 · CAS Zone 2 · Medicine
Biocybernetics and Biomedical Engineering Pub Date : 2023-10-01 DOI: 10.1016/j.bbe.2023.09.001
Guidi Lin , Mingzhi Chen , Minsheng Tan , Lingna Chen , Junxi Chen
Automatic segmentation of breast lesions from ultrasound images plays an important role in computer-aided breast cancer diagnosis. Many deep learning methods based on convolutional neural networks (CNNs) have been proposed for breast ultrasound image segmentation, but the task remains challenging due to ambiguous lesion boundaries. We propose a novel dual-stage framework based on the Transformer and the multi-layer perceptron (MLP) for the segmentation of breast lesions. We combine the Swin Transformer block with an efficient pyramid squeezed attention block in a parallel design and introduce bi-directional interactions across branches, which efficiently extracts multi-scale long-range dependencies and improves the segmentation performance and robustness of the model. Furthermore, we introduce a tokenized MLP block in the MLP stage to extract global contextual information while retaining the fine-grained detail needed to segment more complex breast lesions. We conducted extensive experiments against state-of-the-art methods on three breast ultrasound datasets: BUSI, BUL, and MT_BUS. On benign lesions, the Dice coefficient reached 0.8127 ± 0.2178 and the intersection over union reached 0.7269 ± 0.2370, with a Hausdorff distance of 3.75 ± 1.83. The Dice coefficient on malignant lesions improved by 3.09% on the BUSI dataset. Results on the BUL and MT_BUS datasets likewise show that the proposed model segments better than competing methods, and external experiments indicate better generalization for breast lesion segmentation. The dual-stage scheme and the proposed Transformer module capture fine-grained local information and long-range dependencies, relieving the burden on radiologists.
Vol. 43, Issue 4, Pages 656–671
Citations: 0
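The Dice coefficient and intersection over union quoted above are standard overlap metrics between a predicted mask and a ground-truth mask. A generic NumPy illustration (not the paper's code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice = 2|P ∩ T| / (|P| + |T|), computed on boolean masks
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    # IoU (Jaccard index) = |P ∩ T| / |P ∪ T|
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```

Note that Dice is always at least as large as IoU for the same mask pair, consistent with the 0.8127 vs. 0.7269 figures reported.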
Automated detection of multi-class urinary sediment particles: An accurate deep learning approach
IF 6.4 · CAS Zone 2 · Medicine
Biocybernetics and Biomedical Engineering Pub Date : 2023-10-01 DOI: 10.1016/j.bbe.2023.09.003
He Lyu , Fanxin Xu , Tao Jin , Siyi Zheng , Chenchen Zhou , Yang Cao , Bin Luo , Qinzhen Huang , Wei Xiang , Dong Li
Urine microscopy is an essential diagnostic tool for kidney and urinary tract diseases, and automated analysis of urinary sediment particles improves diagnostic efficiency. However, some urinary sediment particles remain challenging to identify due to individual variation, blurred boundaries, and unbalanced samples. This work aims to mitigate these adverse effects while improving multi-class detection performance. We propose YUS-Net, an innovative model for detecting urine sediment particles based on an improved YOLOX. The combination of urine-sediment data augmentation and fully pre-trained weights enhances the model's optimization potential. We further incorporate an attention module into the critical feature-transfer path and employ a novel loss function, Varifocal loss, to facilitate the extraction of discriminative features, which aids the identification of densely distributed small objects. On the USE dataset, YUS-Net achieves a mean average precision (mAP) of 96.07%, an average precision of 99.35%, and an average recall of 96.77%, with a latency of 26.13 ms per image. The per-category AP values are: cast 99.66%; cryst 100%; epith 92.31%; epithn 100%; eryth 92.31%; leuko 99.90%; mycete 99.96%. With a practical network structure, YUS-Net achieves efficient, accurate, end-to-end urinary sediment particle detection, taking native high-resolution images as input without additional steps. Finally, a data augmentation strategy appropriate for the urinary microscopic image domain is established, which provides a novel basis for applying other methods to urine microscopy images.
Vol. 43, Issue 4, Pages 672–683
Citations: 0
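Each per-category AP above is the area under a precision-recall curve for that class, and mAP is their mean. A compact sketch of all-point-interpolated AP in the style of PASCAL VOC 2010+ (the paper's exact protocol may differ):

```python
def average_precision(rec, prec):
    # rec/prec: recall and precision values at successive detection thresholds
    mrec = [0.0] + list(rec) + [1.0]
    mpre = [0.0] + list(prec) + [0.0]
    # precision envelope: make precision monotonically non-increasing in recall
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    # integrate the stepwise curve over recall
    ap = 0.0
    for i in range(1, len(mrec)):
        ap += (mrec[i] - mrec[i - 1]) * mpre[i]
    return ap

def mean_ap(per_class_aps):
    # mAP is simply the unweighted mean of per-class APs
    return sum(per_class_aps) / len(per_class_aps)
```

A detector with perfect precision at full recall yields AP = 1.0, matching the 100% entries for cryst and epithn.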
Multi-stage fully convolutional network for precise prostate segmentation in ultrasound images
IF 6.4 · CAS Zone 2 · Medicine
Biocybernetics and Biomedical Engineering Pub Date : 2023-07-01 DOI: 10.1016/j.bbe.2023.08.002
Yujie Feng , Chukwuemeka Clinton Atabansi , Jing Nie , Haijun Liu , Hang Zhou , Huai Zhao , Ruixia Hong , Fang Li , Xichuan Zhou
Prostate cancer is one of the most commonly diagnosed non-cutaneous malignant tumors and the sixth leading cause of cancer-related death in men globally. Automatic segmentation of prostate regions has a wide range of applications in prostate cancer diagnosis and treatment. Extracting powerful spatial features for precise prostate segmentation is challenging because prostate size, shape, and histopathologic heterogeneity vary widely among patients. Most existing CNN-based architectures produce unsatisfactory results and inaccurate boundaries in prostate segmentation, caused by inadequate discriminative feature maps and limited spatial information. To address these issues, we propose a novel deep learning technique called Multi-Stage FCN for 2D prostate segmentation that captures more precise spatial information and more accurate prostate boundaries. In addition, a new prostate ultrasound image dataset, CCH-TRUSPS, was collected from Chongqing University Cancer Hospital, including prostate ultrasound images of various prostate cancer presentations. We evaluate our method on the CCH-TRUSPS dataset and the publicly available multi-site T2-weighted MRI dataset using five metrics commonly used in medical image analysis. Compared with other CNN-based methods on the CCH-TRUSPS test set, our Multi-Stage FCN achieves the best binary accuracy of 99.15%, a DSC of 94.90%, an IoU of 89.80%, a precision of 94.67%, and a recall of 96.49%. The statistical and visual results demonstrate that our approach outperforms previous CNN-based techniques across the board and can be used for the clinical diagnosis of prostate cancer.
Vol. 43, Issue 3, Pages 586–602
Citations: 0
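The five metrics quoted (binary accuracy, DSC, IoU, precision, recall) all derive from the same pixel-wise confusion counts. A generic illustration of that bookkeeping (not the authors' implementation):

```python
def confusion_metrics(pred, target):
    # pred/target: flat binary masks (iterables of 0/1 pixel labels)
    tp = sum(1 for p, t in zip(pred, target) if p and t)
    fp = sum(1 for p, t in zip(pred, target) if p and not t)
    fn = sum(1 for p, t in zip(pred, target) if not p and t)
    tn = sum(1 for p, t in zip(pred, target) if not p and not t)
    n = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / n,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "dsc": 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0,
        "iou": tp / (tp + fp + fn) if tp + fp + fn else 0.0,
    }
```

Because background pixels dominate prostate ultrasound frames, accuracy (which credits true negatives) typically sits well above DSC and IoU, as in the 99.15% vs. 94.90% vs. 89.80% figures above.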
Attention-guided multiple instance learning for COPD identification: To combine the intensity and morphology
IF 6.4 · CAS Zone 2 · Medicine
Biocybernetics and Biomedical Engineering Pub Date : 2023-07-01 DOI: 10.1016/j.bbe.2023.06.004
Yanan Wu , Shouliang Qi , Jie Feng , Runsheng Chang , Haowen Pang , Jie Hou , Mengqi Li , Yingxi Wang , Shuyue Xia , Wei Qian
Chronic obstructive pulmonary disease (COPD) is a complex, multi-component respiratory disease. Computed tomography (CT) images can characterize lesions in COPD patients, but the image intensity and the morphology of lung components have not been fully exploited. Two datasets (Dataset 1 and Dataset 2) comprising 561 subjects in total were obtained from two centers. A multiple instance learning (MIL) method is proposed for COPD identification. First, randomly selected slices (instances) from the CT scans and multi-view 2D snapshots of the 3D airway tree and lung field extracted from the CT images are acquired. Three attention-guided MIL models (slice-CT, snapshot-airway, and snapshot-lung-field) are then trained, each using a deep convolutional neural network (CNN) for feature extraction. Finally, the outputs of the three MIL models are combined by logistic regression to produce the final prediction. On Dataset 1, the slice-CT MIL model with 20 instances reached an accuracy of 88.1%, with a VGG-16 backbone outperforming AlexNet, ResNet18, ResNet26, and MobileNetV2 for feature extraction. The snapshot-airway and snapshot-lung-field MIL models achieved accuracies of 89.4% and 90.0%, respectively; combining the three models raised accuracy to 95.8%. The proposed model outperformed several state-of-the-art methods and achieved 83.1% accuracy on the external dataset (Dataset 2). The proposed weakly supervised MIL method is thus feasible for COPD identification: the effective CNN module and the attention-guided MIL pooling module contribute to the performance gain, and the morphology of the airway and lung field is beneficial for identifying COPD.
Vol. 43, Issue 3, Pages 568–585
Citations: 0
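Attention-guided MIL pooling scores each instance and aggregates the bag as an attention-weighted average. A minimal NumPy sketch of the commonly used tanh-attention pooling (the paper's exact module may differ; `w` and `v` stand in for learned parameters and are hypothetical here):

```python
import numpy as np

def attention_mil_pool(instance_feats, w, v):
    # instance_feats: (n_instances, d); w: (d, h); v: (h,)
    # attention score a_i ∝ exp(v · tanh(wᵀ x_i)), softmax over instances
    scores = np.tanh(instance_feats @ w) @ v
    scores = scores - scores.max()                  # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    bag_feat = weights @ instance_feats             # (d,) weighted average
    return bag_feat, weights
```

The attention weights sum to one, so when all instances look alike the pooling degenerates to a plain mean, and informative instances dominate otherwise.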
Efficient simultaneous segmentation and classification of brain tumors from MRI scans using deep learning
IF 6.4 · CAS Zone 2 · Medicine
Biocybernetics and Biomedical Engineering Pub Date : 2023-07-01 DOI: 10.1016/j.bbe.2023.08.003
Akshya Kumar Sahoo , Priyadarsan Parida , K. Muralibabu , Sonali Dash
Brain tumors can be difficult to diagnose because they may share radiographic characteristics, and a thorough examination can take considerable time. To address these challenges, we propose an intelligent system for the automatic extraction and identification of brain tumors from 2D CE MRI images. Our approach comprises two stages. In the first stage, we use an encoder-decoder U-Net with a residual network backbone to detect different types of brain tumors, including glioma, meningioma, and pituitary tumors. For tumor extraction, our method achieved an accuracy of 99.60%, a sensitivity of 90.20%, a specificity of 99.80%, a Dice similarity coefficient of 90.11%, and a precision of 90.50%. In the second stage, we employ a YOLO2 (You Only Look Once) based transfer learning approach to classify the extracted tumors, achieving a classification accuracy of 97%. Our proposed approach outperforms state-of-the-art methods in the literature, and the results demonstrate its potential to aid in the diagnosis and treatment of brain tumors.
Vol. 43, Issue 3, Pages 616–633
Citations: 1
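The two-stage design (segment first, then classify the extracted region) is a simple function composition. The stage functions below are toy placeholders standing in for the paper's U-Net and YOLO2 networks:

```python
def two_stage(segment, classify):
    # stage 1 extracts the tumor region from the image;
    # stage 2 assigns a label to the extracted region
    def run(image):
        region = segment(image)
        return region, classify(region)
    return run
```

The benefit of the split is that the classifier only ever sees the extracted region, so segmentation errors and classification errors can be measured and improved independently.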
Non-invasive waveform analysis for emergency triage via simulated hemorrhage: An experimental study using a novel dynamic lower body negative pressure model
IF 6.4 · CAS Zone 2 · Medicine
Biocybernetics and Biomedical Engineering Pub Date : 2023-07-01 DOI: 10.1016/j.bbe.2023.06.002
Naimahmed Nesaragi , Lars Øivind Høiseth , Hemin Ali Qadir , Leiv Arne Rosseland , Per Steinar Halvorsen , Ilangko Balasingham
The extent to which advanced waveform analysis of non-invasive physiological signals can diagnose levels of hypovolemia remains insufficiently explored. The present study explores the ability of a deep learning (DL) framework to classify levels of ongoing hypovolemia, simulated via a novel dynamic lower body negative pressure (LBNP) model in healthy volunteers. We used a dynamic LBNP protocol rather than the traditional model, in which LBNP is applied in a predictable, step-wise, progressively descending manner. The dynamic version circumvents the time-dependency problem: in real-life pre-hospital settings, intravascular blood volume may fluctuate due to volume resuscitation. A supervised DL framework for ternary classification was realized by segmenting the underlying non-invasive signal and labeling the segments with the corresponding LBNP target levels. The proposed two-input DL model was trained on time-frequency representations of the waveform segments to classify each segment by blood volume loss: Class 1 (mild), Class 2 (moderate), or Class 3 (severe). The latent space derived at the end of the model via late fusion of the two inputs assists in enhanced classification performance. Evaluated in a 3-fold cross-validation setup with stratified subjects, the experiments showed PPG to be a potential surrogate for variations in blood volume, with average classification performance of AUROC 0.8861, AUPRC 0.8141, F1-score 72.16%, sensitivity 79.06%, and specificity 89.21%. The proposed DL algorithm on the PPG signal demonstrates the possibility of capturing the complex interplay of physiological responses related to both bleeding and fluid resuscitation in this challenging LBNP setup.
Vol. 43, Issue 3, Pages 551–567
Citations: 0
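The supervised setup rests on slicing the waveform into fixed-length windows and labeling each with the LBNP level active during that window. A toy sketch of that segmentation-and-labeling step, with a majority-vote labeling rule and illustrative window/hop sizes that are assumptions, not the paper's values:

```python
def segment_signal(signal, labels, win, hop):
    # slice a 1-D waveform into fixed-length windows and attach the
    # majority per-sample label of each window (hypothetical labeling rule)
    segments = []
    for start in range(0, len(signal) - win + 1, hop):
        chunk = signal[start:start + win]
        window_labels = labels[start:start + win]
        majority = max(set(window_labels), key=window_labels.count)
        segments.append((chunk, majority))
    return segments
```

In the study each such segment would then be converted to a time-frequency representation before being fed to the classifier.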
BA-Net: Brightness prior guided attention network for colonic polyp segmentation
IF 6.4 · CAS Zone 2 · Medicine
Biocybernetics and Biomedical Engineering Pub Date : 2023-07-01 DOI: 10.1016/j.bbe.2023.08.001
Haiying Xia , Yilin Qin , Yumei Tan , Shuxiang Song
Automatic polyp segmentation at colonoscopy plays an important role in the early diagnosis and surgery of colorectal cancer, but the diversity of polyps across images greatly increases the difficulty of accurate segmentation. Manual segmentation of polyps in colonoscopic images is time-consuming, and the polyp miss rate remains high. In this paper, we propose a brightness prior guided attention network (BA-Net) for automatic polyp segmentation. Specifically, we first aggregate the high-level features of the last three encoder layers with an enhanced receptive field (ERF) module, which are further fed to the decoder to obtain initial prediction maps. We then introduce a brightness prior fusion (BF) module that fuses brightness prior information into the multi-scale side-output high-level semantic features. The BF module induces the network to localize salient regions, which may be potential polyps, yielding better segmentation results. Finally, we propose a global reverse attention (GRA) module that combines the BF output with the initial prediction map to capture long-range dependencies and produce reverse-refined predictions. With iterative refinement from higher-level to lower-level semantics, BA-Net achieves progressively more accurate segmentation. Extensive experiments show that BA-Net outperforms state-of-the-art methods on six common polyp datasets.
Vol. 43, Issue 3, Pages 603–615
Citations: 0
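A brightness prior can be as simple as min-max-normalized image intensity, broadcast over feature channels to re-weight them toward bright (possibly polyp) regions. This is an illustrative guess at the idea, not BA-Net's actual BF module:

```python
import numpy as np

def brightness_prior(gray):
    # min-max normalize intensity into [0, 1]; brighter pixels get a larger prior
    g = gray.astype(np.float64)
    lo, hi = g.min(), g.max()
    return np.zeros_like(g) if hi == lo else (g - lo) / (hi - lo)

def fuse_prior(features, prior):
    # features: (C, H, W); re-weight every channel by (1 + prior)
    # so bright regions are amplified and dark regions pass through unchanged
    return features * (1.0 + prior[None, :, :])
```

The `1 +` term keeps the fusion residual: a zero prior leaves features untouched rather than zeroing them out.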
Transformer-based cross-modal multi-contrast network for ophthalmic diseases diagnosis
IF 6.4 · CAS Zone 2 · Medicine
Biocybernetics and Biomedical Engineering Pub Date : 2023-07-01 DOI: 10.1016/j.bbe.2023.06.001
Yang Yu, Hongqing Zhu
Automatic diagnosis of ophthalmic diseases from ocular medical images is vital to support clinical decisions. Most current methods employ a single imaging modality, especially 2D fundus images. Because diagnosis can benefit greatly from multiple imaging modalities, this paper improves diagnostic accuracy by effectively utilizing cross-modal data. We propose a Transformer-based cross-modal multi-contrast network that efficiently fuses the color fundus photograph (CFP) and optical coherence tomography (OCT) modalities to diagnose ophthalmic diseases. We design a multi-contrast learning strategy to extract discriminative features from the cross-modal data, and a channel fusion head captures the semantically shared information across modalities together with the similarity features between patients of the same category. Meanwhile, a class-balanced training strategy copes with the class imbalance typical of medical datasets. Our method is evaluated on public benchmark datasets for cross-modal ophthalmic disease diagnosis, and the experimental results demonstrate that it outperforms other approaches. The code and models are available at https://github.com/ecustyy/tcmn.
Vol. 43, Issue 3, Pages 507–527
Citations: 0
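Class-balanced training usually means re-weighting the loss by class frequency. One popular scheme is the "effective number of samples" weighting of Cui et al. (CVPR 2019); whether this paper uses that exact scheme is not stated here, so treat this as a representative sketch:

```python
def class_balanced_weights(counts, beta=0.999):
    # effective number of samples per class: (1 - beta^n) / (1 - beta);
    # rarer classes have a smaller effective number, hence a larger weight
    eff = [(1 - beta ** n) / (1 - beta) for n in counts]
    raw = [1.0 / e for e in eff]
    total = sum(raw)
    # normalize so the weights average to 1 across classes
    return [w * len(counts) / total for w in raw]
```

With `beta` close to 1 the scheme approaches plain inverse-frequency weighting; with `beta = 0` every class gets the same weight.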