{"title":"Decoding motor imagery based on dipole feature imaging and a hybrid CNN with embedded squeeze-and-excitation block","authors":"Linlin Wang , Mingai Li","doi":"10.1016/j.bbe.2023.10.004","DOIUrl":"https://doi.org/10.1016/j.bbe.2023.10.004","url":null,"abstract":"<div><p><span><span>Motor imagery (MI) decoding is the core of an intelligent rehabilitation system in brain computer interface<span>, and it has a potential advantage by using source signals, which have higher spatial resolution and the same time resolution compared to scalp electroencephalography (EEG). However, how to delve and utilize the personalized frequency characteristic of dipoles for improving decoding performance has not been paid sufficient attention. In this paper, a novel dipole feature imaging (DFI) and a hybrid </span></span>convolutional neural network (HCNN) with an embedded squeeze-and-excitation block (SEB), denoted as DFI-HCNN, are proposed for decoding MI tasks. EEG source </span>imaging technique<span><span><span> is used for brain source estimation, and each sub-band spectrum powers of all dipoles are calculated through frequency analysis and band division. Then, the 3D space information of dipoles is retrieved, and by using azimuthal equidistant projection algorithm it is transformed to a </span>2D plane, which is combined with </span>nearest neighbor interpolation to generate multi sub-band dipole feature images. Furthermore, a HCNN is designed and applied to the ensemble of sub-band dipole feature images, from which the importance of sub-bands is acquired to adjust the corresponding attentions adaptively by SEB. Ten-fold cross-validation experiments on two public datasets achieve the comparatively higher decoding accuracies of 84.23% and 92.62%, respectively. The experiment results show that DFI is an effective feature representation, and HCNN with an embedded SEB can enhance the useful frequency information of dipoles for improving MI decoding.</span></p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":"43 4","pages":"Pages 751-762"},"PeriodicalIF":6.4,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138423191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated detection of crystalline retinopathy via fundus photography using multistage generative adversarial networks","authors":"Eun Young Choi , Seung Hoon Han , Ik Hee Ryu , Jin Kuk Kim , In Sik Lee , Eoksoo Han , Hyungsu Kim , Joon Yul Choi , Tae Keun Yoo","doi":"10.1016/j.bbe.2023.10.005","DOIUrl":"https://doi.org/10.1016/j.bbe.2023.10.005","url":null,"abstract":"<div><h3>Purpose</h3><p>Crystalline retinopathy is characterized by reflective crystal deposits in the macula and is caused by various systemic conditions including hereditary, toxic, and embolic etiologies. Herein, we introduce a novel application of deep learning with a multistage generative adversarial network (GAN) to detect crystalline retinopathy using fundus photography.</p></div><div><h3>Methods</h3><p>The dataset comprised major classes (healthy retina, diabetic retinopathy, exudative age-related macular degeneration, and drusen) and a crystalline retinopathy class (minor set). To overcome the limited data on crystalline retinopathy, we proposed a novel multistage GAN framework. The GAN was retrained after CutMix combination by inputting the GAN-generated synthetic data as new inputs to the original training data. After the multistage CycleGAN augmented the data for crystalline retinopathy, we built a deep-learning classifier model for detection.</p></div><div><h3>Results</h3><p>Using the multistage CycleGAN facilitated realistic fundus photography synthesis with the characteristic features of retinal crystalline deposits. The proposed method outperformed typical transfer learning, prototypical networks, and knowledge distillation for both multiclass and binary classifications. The final model achieved an area under the curve of the receiver operating characteristics of 0.962 for internal validation and 0.987 for external validation for the detection of crystalline retinopathy.</p></div><div><h3>Conclusion</h3><p>We introduced a deep learning approach for detecting crystalline retinopathy, a potential biomarker of underlying systemic pathological conditions. Our approach enables realistic pathological image synthesis and more accurate prediction of crystalline retinopathy, an essential but minor retinal condition.</p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":"43 4","pages":"Pages 725-735"},"PeriodicalIF":6.4,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91993363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"End-to end decision support system for sleep apnea detection and Apnea-Hypopnea Index calculation using hybrid feature vector and Machine learning","authors":"Recep Sinan Arslan , Hasan Ulutas , Ahmet Sertol Köksal , Mehmet Bakir , Bülent Çiftçi","doi":"10.1016/j.bbe.2023.10.002","DOIUrl":"https://doi.org/10.1016/j.bbe.2023.10.002","url":null,"abstract":"<div><p>Sleep apnea is a disease that occurs due to the decrease in oxygen saturation in the blood and directly affects people's lives. Detection of sleep apnea is crucial for assessing sleep quality. It is also an important parameter in the diagnosis of various other diseases (diabetes, chronic kidney disease, depression, and cardiological diseases). Recent studies show that detection of sleep apnea can be done via signal processing, especially EEG and ECG signals. However, the detection accuracy needs to be improved. In this paper, a ML model is used for the detection of sleep apnea using 19 static sensor data and 2 dynamic data (Sleep score and Arousal). The sensor data is recorded as a discrete signal and the sleep process is divided into 4.8 M segments. In this work, 19 different sensor data sets were recorded with polysomnography (PSG). These data sets have been used to perform sleep scoring. Then, arousal status marking is done. Model training was carried out with the feature vector consisting of 21 data obtained. Tests were performed with eight different machine learning techniques on a unique dataset consisting of 113 patients. After all, it was automatically determined whether people were diseased (a kind of apnea) or healthy. The proposed model had an average accuracy of 97.27%, while the recall, precision, and f-score values were 99.18%, 95.32%, and 97.20%, respectively. After all, the model that less feature engineering, less complex classification model, higher dataset usage, and higher classification performance has been revealed.</p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":"43 4","pages":"Pages 684-699"},"PeriodicalIF":6.4,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49766939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulation on human respiratory motion dynamics and platform construction","authors":"Yudong Bao , Xu Li , Wen Wei , Shengquan Qu , Yang Zhan","doi":"10.1016/j.bbe.2023.09.002","DOIUrl":"https://doi.org/10.1016/j.bbe.2023.09.002","url":null,"abstract":"<div><p><span>Bronchoscopy has a crucial role in the current treatment of lung diseases, and it is typical of interventional medical instruments led by manual intervention. The scientific study of bronchoscopy is now of primary importance in eliminating problems associated with manual intervention by scientific means. However, for its intervention environment, the trachea is often treated statically, without considering the effect of tracheal deformation on bronchoscopic intervention during respiratory motion. Therefore its findings can deviate from practical application. Thus, studying kinetic problems in respiratory motion is of great importance. This paper developed a mathematical model of </span>mechanical properties<span> of respiratory motion to express respiratory force from the perspective of dynamics of respiratory motion. The dynamical model<span><span><span> was solved using MATLAB. Then, a </span>finite element model of respiratory motion was built using Mimics, and the results of respiratory force solution were used as the load of model for dynamics simulation in ABAQUS. Then, a human–computer interaction platform was designed in MATLAB APP Designer to realize </span>parametric<span> calculation and fitting of respiratory force, and a personalized human respiratory motion dynamics simulation was completed in conjunction with ABAQUS. Finally, experimental validation of the interactive platform was performed using pulmonary function test data from three patients. Validation analysis by respiration striving solution, kinetic simulation and experiment found that Dynamical model and simulation results can be better adapted to the individualized study of human respiratory motion dynamics.</span></span></span></p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":"43 4","pages":"Pages 736-750"},"PeriodicalIF":6.4,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134657766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Corrigendum to “Multi-stage fully convolutional network for precise prostate segmentation in ultrasound images” [Biocybern. Biomed. Eng. 43(3) (2023) 586–602]","authors":"Yujie Feng , Chukwuemeka Clinton Atabansi , Jing Nie , Haijun Liu , Hang Zhou , Huai Zhao , Ruixia Hong , Fang Li , Xichuan Zhou","doi":"10.1016/j.bbe.2023.10.003","DOIUrl":"10.1016/j.bbe.2023.10.003","url":null,"abstract":"","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":"43 4","pages":"Page 776"},"PeriodicalIF":6.4,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0208521623000578/pdfft?md5=6a5fed5d9ac5219134f858f11ea0539f&pid=1-s2.0-S0208521623000578-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136127396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated detection of abnormal respiratory sound from electronic stethoscope and mobile phone using MobileNetV2","authors":"Ximing Liao , Yin Wu , Nana Jiang , Jiaxing Sun , Wujian Xu , Shaoyong Gao , Jun Wang , Ting Li , Kun Wang , Qiang Li","doi":"10.1016/j.bbe.2023.11.001","DOIUrl":"https://doi.org/10.1016/j.bbe.2023.11.001","url":null,"abstract":"<div><p>Auscultation, a traditional clinical examination method using a stethoscope to quickly assess airway abnormalities, remains valuable due to its real-time, non-invasive, and easy-to-perform nature. Recent advancements in computerized respiratory sound analysis (CRSA) have provided a quantifiable approach for recording, editing, and comparing respiratory sounds, also enabling the training of artificial intelligence models to fully excavate the potential of auscultation. However, existing sound analysis models often require complex computations, leading to prolonged processing times and high calculation and memory requirements. Moreover, the limited diversity and scope of available databases limits reproducibility and robustness, mainly relying on small sample datasets primarily collected from Caucasians. In order to overcome these limitations, we developed a new Chinese adult respiratory sound database, LD-DF RSdb, using an electronic stethoscope and mobile phone. By enrolling 145 participants, 9,584 high quality recordings were collected, containing 6,435 normal sounds, 2,782 crackles, 208 wheezes, and 159 combined sounds. Subsequently, we utilized a lightweight neural network architecture, MobileNetV2, for automated categorization of the four types of respiratory sounds, achieving an appreciable overall performance with an AUC of 0.8923. This study demonstrates the feasibility and potential of using mobile phones, electronic stethoscopes, and MobileNetV2 in CRSA. The proposed method offers a convenient and promising approach to enhance overall respiratory disease management and may help address healthcare resource disparities.</p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":"43 4","pages":"Pages 763-775"},"PeriodicalIF":6.4,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0208521623000608/pdfft?md5=eb2d1ad12271a18266dc09d4d5b9b3c9&pid=1-s2.0-S0208521623000608-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138448149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A dual-stage transformer and MLP-based network for breast ultrasound image segmentation","authors":"Guidi Lin , Mingzhi Chen , Minsheng Tan , Lingna Chen , Junxi Chen","doi":"10.1016/j.bbe.2023.09.001","DOIUrl":"https://doi.org/10.1016/j.bbe.2023.09.001","url":null,"abstract":"<div><p><span>Automatic segmentation of breast lesions from ultrasound images plays an important role in computer-aided breast cancer diagnosis. Many deep learning<span> methods based on convolutional neural networks (CNNs) have been proposed for </span></span>breast ultrasound<span> image segmentation. However, breast ultrasound image segmentation is still challenging due to ambiguous lesion boundaries. We propose a novel dual-stage framework based on Transformer and Multi-layer perceptron<span><span><span> (MLP) for the segmentation of breast lesions. We combine the Swin Transformer block with an efficient pyramid squeezed attention block in a parallel design and introduce bi-directional interactions across branches, which can efficiently extract multi-scale long-range dependencies to improve the segmentation performance and robustness of the model. Furthermore, we introduce tokenized MLP block in the MLP stage to extract global contextual information while retaining fine-grained information to segment more complex breast lesions. We have conducted extensive experiments with state-of-the-art methods on three breast ultrasound datasets, including BUSI, BUL, and MT_BUS datasets. The dice coefficient reached 0.8127 ± 0.2178, and the intersection over union reached 0.7269 ± 0.2370 on </span>benign lesions<span> when the Hausdorff distance was maintained at 3.75 ± 1.83. The dice coefficient of malignant lesions is improved by 3.09% for BUSI dataset. The segmentation results on the BUL and MT_BUS datasets also show that our proposed model achieves better segmentation results than other methods. Moreover, the external experiments indicate that the proposed model provides better generalization capability for breast lesion segmentation. The dual-stage scheme and the proposed Transformer module achieve the fine-grained local information and long-range dependencies to relieve the burden of </span></span>radiologists.</span></span></p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":"43 4","pages":"Pages 656-671"},"PeriodicalIF":6.4,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49761141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated detection of multi-class urinary sediment particles: An accurate deep learning approach","authors":"He Lyu , Fanxin Xu , Tao Jin , Siyi Zheng , Chenchen Zhou , Yang Cao , Bin Luo , Qinzhen Huang , Wei Xiang , Dong Li","doi":"10.1016/j.bbe.2023.09.003","DOIUrl":"https://doi.org/10.1016/j.bbe.2023.09.003","url":null,"abstract":"<div><p>Urine microscopy is an essential diagnostic tool for kidney and urinary tract diseases, with automated analysis of urinary sediment particles improving diagnostic efficiency. However, some urinary sediment particles remain challenging to identify due to individual variations, blurred boundaries, and unbalanced samples. This research aims to mitigate the adverse effects of urine sediment particles while improving multi-class detection performance. We proposed an innovative model based on improved YOLOX for detecting urine sediment particles (YUS-Net). The combination of urine sediment data augmentation and overall pre-trained weights enhances model optimization potential. Furthermore, we incorporate the attention module into the critical feature transfer path and employ a novel loss function, Varifocal loss, to facilitate the extraction of discriminative features, which assists in the identification of densely distributed small objects. Based on the USE dataset, YUS-Net achieves the mean Average Precision (mAP) of 96.07%, 99.35% average precision, and 96.77% average recall, with a latency of 26.13 ms per image. The specific metrics for each category are as follows: cast: 99.66% AP; cryst: 100% AP; epith: 92.31% AP; epithn: 100% AP; eryth: 92.31% AP; leuko: 99.90% AP; mycete: 99.96% AP. With a practical network structure, YUS-Net achieved efficient, accurate, end-to-end urinary sediment particle detection. The model takes native high-resolution images as input without additional steps. Finally, a data augmentation strategy appropriate for the urinary microscopic image domain is established, which provides a novel approach for applying other methods in urine microscopic images.</p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":"43 4","pages":"Pages 672-683"},"PeriodicalIF":6.4,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49761145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-stage fully convolutional network for precise prostate segmentation in ultrasound images","authors":"Yujie Feng , Chukwuemeka Clinton Atabansi , Jing Nie , Haijun Liu , Hang Zhou , Huai Zhao , Ruixia Hong , Fang Li , Xichuan Zhou","doi":"10.1016/j.bbe.2023.08.002","DOIUrl":"10.1016/j.bbe.2023.08.002","url":null,"abstract":"<div><p><span><span>Prostate cancer is one of the most commonly diagnosed non-cutaneous malignant tumors and the sixth major cause of cancer-related death generally found in men globally. Automatic segmentation of prostate regions has a wide range of applications in prostate cancer diagnosis and treatment. It is challenging to extract powerful spatial features for precise prostate </span>segmentation methods due to the wide variation in prostate size, shape, and histopathologic heterogeneity among patients. Most of the existing CNN-based architectures often produce unsatisfactory results and inaccurate boundaries in prostate segmentation, which are caused by inadequate discriminative feature maps and the limited amount of spatial information. To address these issues, we propose a novel </span>deep learning<span> technique called Multi-Stage FCN architecture for 2D prostate segmentation that captures more precise spatial information and accurate prostate boundaries. In addition, a new prostate ultrasound image dataset known as CCH-TRUSPS was collected from Chongqing University Cancer Hospital, including prostate ultrasound images of various prostate cancer architectures. We evaluate our method on the CCH-TRUSPS dataset and the publicly available Multi-site T2-weighted MRI dataset using five commonly used metrics for medical image analysis. When compared to other CNN-based methods on the CCH-TRUSPS test set, our Multi-Stage FCN achieves the highest and best binary accuracy of 99.15%, the DSC score of 94.90%, the IoU score of 89.80%, the precision of 94.67%, and the recall of 96.49%. The statistical and visual results demonstrate that our approach outperforms previous CNN-based techniques in all ramifications and can be used for the clinical diagnosis of prostate cancer.</span></p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":"43 3","pages":"Pages 586-602"},"PeriodicalIF":6.4,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43556776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attention-guided multiple instance learning for COPD identification: To combine the intensity and morphology","authors":"Yanan Wu , Shouliang Qi , Jie Feng , Runsheng Chang , Haowen Pang , Jie Hou , Mengqi Li , Yingxi Wang , Shuyue Xia , Wei Qian","doi":"10.1016/j.bbe.2023.06.004","DOIUrl":"10.1016/j.bbe.2023.06.004","url":null,"abstract":"<div><p><span>Chronic obstructive pulmonary disease<span> (COPD) is a complex and multi-component respiratory disease. Computed tomography (CT) images can characterize lesions in COPD patients, but the image intensity and morphology of lung components have not been fully exploited. Two datasets (Dataset 1 and 2) comprising a total of 561 subjects were obtained from two centers. A multiple instance learning (MIL) method is proposed for COPD identification. First, randomly selected slices (instances) from CT scans and multi-view 2D snapshots of the 3D </span></span>airway tree<span><span> and lung field extracted from CT images are acquired. Then, three attention-guided MIL models (slice-CT, snapshot-airway, and snapshot-lung-field models) are trained. In these models, a deep convolution<span> neural network (CNN) is utilized for feature extraction. Finally, the outputs of the above three MIL models are combined using </span></span>logistic regression to produce the final prediction. For Dataset 1, the accuracy of the slice-CT MIL model with 20 instances was 88.1%. The backbone of VGG-16 outperformed Alexnet, Resnet18, Resnet26, and Mobilenet_v2 in feature extraction. The snapshot-airway and snapshot-lung-field MIL models achieved accuracies of 89.4% and 90.0%, respectively. After the three models were combined, the accuracy reached 95.8%. The proposed model outperformed several state-of-the-art methods and afforded an accuracy of 83.1% for the external dataset (Dataset 2). The proposed weakly supervised MIL method is feasible for COPD identification. The effective CNN module and attention-guided MIL pooling module contribute to performance enhancement. The morphology information of the airway and lung field is beneficial for identifying COPD.</span></p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":"43 3","pages":"Pages 568-585"},"PeriodicalIF":6.4,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42298880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}