{"title":"TLIR: Two-layer iterative refinement model for limited-angle CT reconstruction","authors":"Qing Li , Tao Wang , RunRui Li , Yan Qiang , Bin Zhang , Jijie Sun , JuanJuan Zhao , Wei Wu","doi":"10.1016/j.bspc.2024.107058","DOIUrl":"10.1016/j.bspc.2024.107058","url":null,"abstract":"<div><div>Limited-angle reconstruction is a typical ill-posed problem in computed tomography (CT). In practice, because the scanning angles available for fixed scan targets are limited and patients can tolerate only so much radiation, complete projection data are usually not available, and images reconstructed by conventional analytical and iterative methods can suffer from severe structural distortion and tilt artefacts. In this paper, we propose a deep iterative model called TLIR to recover the structural details of the missing parts of limited-angle CT images and reconstruct high-quality CT images from them. Specifically, we adapt the denoising diffusion probabilistic model to conditional image generation for the image-domain recovery problem: the model output starts from noise-blended limited-angle CT images and is iteratively refined by a residual U-Net trained on data at various noise levels. In addition, because the deep model corrupts the sampled part of the sinogram data during inference, we propose a learnable data fidelity module called DSEM to balance the data-domain exchange loss and the inference information loss. The two modules are executed alternately, forming our two-layer iterative refinement model. This two-layer iterative structure also makes the network more robust during training and inference. TLIR shows strong reconstruction performance at different limited angles and highly competitive results on all image evaluation metrics. 
The model proposed in this paper is open source at <span><span>https://github.com/JinxTao/TLIR/tree/master</span></span>.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142533301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
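The alternating loop the TLIR abstract describes — an image-domain refinement step interleaved with a data-domain fidelity step — can be sketched as follows. This is a toy numpy illustration, not the authors' code: `refine` is a crude stand-in for the residual U-Net, and `data_fidelity` is a hard re-imposition of measured data standing in for the learnable DSEM module.

```python
import numpy as np

def data_fidelity(x, measured, mask):
    """Stand-in for DSEM: re-impose measured entries wherever they exist."""
    return np.where(mask, measured, x)

def refine(x, step=0.5):
    """Toy stand-in for the residual U-Net denoiser: shrink toward zero."""
    return (1.0 - step) * x

def tlir_sketch(measured, mask, n_iters=10, seed=0):
    """Two-layer iteration: alternate refinement and data fidelity."""
    rng = np.random.default_rng(seed)
    # Noise-blended initialization: noise in the unmeasured region.
    x = np.where(mask, measured, rng.standard_normal(measured.shape))
    for _ in range(n_iters):
        x = refine(x)                         # image-domain refinement
        x = data_fidelity(x, measured, mask)  # data-domain consistency
    return x
```

The key property the sketch preserves is that measured data survive every iteration exactly, while the unmeasured region is progressively refined.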
{"title":"DEF-SwinE2NET: Dual enhanced features guided with multi-model fusion for brain tumor classification using preprocessing optimization","authors":"Muhammad Ghulam Abbas Malik , Adnan Saeed , Khurram Shehzad , Muddesar Iqbal","doi":"10.1016/j.bspc.2024.107079","DOIUrl":"10.1016/j.bspc.2024.107079","url":null,"abstract":"<div><div>Brain tumors exhibit significant variability in shape, size, and location, which makes consistent and accurate classification difficult and demands advanced algorithms capable of handling diverse tumor presentations. To address this, we propose a Dual-Enhanced Features Scheme (DEFS) with a Swin-Transformer model based on EfficientNetV2S to improve classification and reuse parameters. In the DEFS, a dense block with dilation uncovers hidden details and spatial relationships across varying scales that are typically obscured by traditional convolutional layers. This capability is particularly crucial in medical imaging, where tumors and anomalies present in various sizes and shapes. Further, the dual-attention mechanism in the enhanced features scheme improves the explainability and interpretability of the model by using spatial and channel-wise information. Additionally, the Swin-Transformer block improves the model’s ability to capture global patterns in brain-tumor images, which is highly advantageous in medical imaging, where the location and extent of abnormalities such as tumors vary significantly. To strengthen the proposed DEF-SwinE2NET, we used EfficientNetV2S as the baseline model because of its effectiveness and classification accuracy relative to its predecessors. We evaluated DEF-SwinE2NET using three benchmark datasets: two sourced from Kaggle and one from a Figshare repository. 
Several preprocessing steps were applied to enhance the MRI images before training, including image cropping, median-filter noise reduction, contrast-limited adaptive histogram equalization (CLAHE) for local-contrast enhancement, Laplacian edge enhancement to highlight critical features, and data augmentation to improve model robustness and generalization. The DEF-SwinE2NET model achieves remarkable results, with an accuracy of 99.43 %, a sensitivity of 99.39 %, and an F1-score of 99.41 %.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142533303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
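Two of the preprocessing steps listed above can be sketched in plain numpy. Note the hedges: the filter below is a simple 3×3 median filter, and the equalization shown is the *global* form, not the clip-limited local CLAHE variant the paper actually uses — this only illustrates the idea of redistributing intensities via the histogram CDF.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter with reflection padding -- a simplified stand-in
    for the paper's median-filter noise-reduction step."""
    h, w = img.shape
    p = np.pad(img, 1, mode="reflect")
    # Stack the nine shifted views of the image and take the pixelwise median.
    stack = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

def hist_equalize(img):
    """Global histogram equalization of an 8-bit image (CLAHE is the
    local, clip-limited refinement of this)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    return (cdf[img] * 255).astype(np.uint8)
```

A single salt-noise pixel disappears under the median filter, while equalization stretches the intensity range to the full 0–255 span.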
{"title":"Wireless capsule endoscopy anomaly classification via dynamic multi-task learning","authors":"Xingcun Li , Qinghua Wu , Kun Wu","doi":"10.1016/j.bspc.2024.107081","DOIUrl":"10.1016/j.bspc.2024.107081","url":null,"abstract":"<div><div>Wireless capsule endoscopy (WCE) provides a painless, non-invasive means for early gastrointestinal disease detection and cancer prevention. However, clinicians must pick out the roughly 5% of lesion images from tens of thousands of frames, which highlights the need for computer-assisted diagnostic methods to enhance efficiency and reduce the elevated misdiagnosis rates attributed to visual fatigue. Previous research relied heavily on module design, an approach that is effective but tightly coupled to the baseline and incurs additional computational cost. This paper proposes a dynamic multi-task learning method that combines triplet loss and weighted cross-entropy loss to guide the model in learning compact fine-grained representations and in establishing less biased decision boundaries, respectively, without incurring additional computational cost. Our method outperforms previous advanced methods on two publicly available datasets, achieving an F1 score of 96.47% on Kvasir-Capsule and an F1 score of 96.75% with an accuracy of 96.72% on CAD-CAP. Visualization of the representations and heatmaps confirms the model’s precision in focusing on the lesion area. 
The prediction model has been uploaded to <span><span>https://github.com/xli122/WCE_MTL</span></span>.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
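The combined objective described above — triplet loss for compact representations plus class-weighted cross-entropy for less biased boundaries — can be sketched as below. This is a generic numpy illustration under stated assumptions: the paper schedules the mixing weight dynamically during training, whereas `alpha` is fixed here, and the exact weighting scheme is not given in the abstract.

```python
import numpy as np

def weighted_ce(logits, labels, class_weights):
    """Class-weighted cross-entropy to counter class imbalance."""
    z = logits - logits.max(axis=1, keepdims=True)          # for stability
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    picked = logp[np.arange(len(labels)), labels]
    return float(-(class_weights[labels] * picked).mean())

def triplet(anchor, pos, neg, margin=0.2):
    """Standard triplet loss: pull same-class embeddings together,
    push different-class embeddings at least `margin` apart."""
    d_ap = np.linalg.norm(anchor - pos, axis=1)
    d_an = np.linalg.norm(anchor - neg, axis=1)
    return float(np.maximum(d_ap - d_an + margin, 0.0).mean())

def multi_task_loss(logits, labels, anchor, pos, neg, class_weights, alpha=0.5):
    """Fixed-alpha mix; the paper adjusts this balance dynamically."""
    return alpha * weighted_ce(logits, labels, class_weights) \
        + (1.0 - alpha) * triplet(anchor, pos, neg)
```

With well-separated embeddings and confident correct logits the combined loss is near zero; flipping the logits makes the cross-entropy term dominate.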
{"title":"CheXDouble: Dual-Supervised interpretable disease diagnosis model","authors":"Zhiwei Tang , You Yang","doi":"10.1016/j.bspc.2024.107026","DOIUrl":"10.1016/j.bspc.2024.107026","url":null,"abstract":"<div><div>Chest X-ray imaging, commonly used for diagnosing cardiopulmonary diseases, typically requires radiologists to devote considerable effort to reading and interpreting the images. Moreover, diagnostic outcomes can vary due to differences in radiologists’ experience. Deep learning for chest X-ray disease diagnosis holds great promise for enhancing diagnostic accuracy and reducing the workload of radiologists. However, traditional deep learning models for medical image classification are often difficult to interpret. To address this, we introduce the Global Attention Alignment Module, which utilizes cardiopulmonary masks for supervised training. This provides the model with spatial location priors during training, thereby enhancing the interpretability of the saliency maps and the disease classification performance. Additionally, most chest X-ray datasets suffer from severe imbalances between positive and negative samples for diseases, leading to classification imbalance issues when training models. Thus, we propose the Improved Focal Loss, which dynamically adjusts the weight of negative samples in the loss function based on sample statistics, effectively mitigating the imbalance issue in the dataset. Moreover, the training of deep learning models for medical image classification requires substantial data support. Therefore, we conducted a quantitative analysis to explore the impact of five different data augmentation methods on model classification performance across various input image sizes, identifying the most effective data augmentation strategy. 
Ultimately, through these proposed methods, we developed the dual-supervised medical imaging disease diagnosis model CheXDouble, which surpasses previous state-of-the-art models with its highly competitive disease classification performance.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142537842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
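The Improved Focal Loss idea — down-weighting the abundant negative class using batch statistics — can be sketched on top of the standard binary focal loss. Hedge: the abstract does not give the exact weighting formula, so the `w_neg` expression below (odds of the batch positive rate) is an illustrative assumption, not the authors' definition.

```python
import numpy as np

def improved_focal_loss_sketch(probs, labels, gamma=2.0, eps=1e-7):
    """Binary focal loss with a statistics-driven negative weight.
    `w_neg` shrinks as negatives dominate the batch -- an assumed stand-in
    for the paper's dynamic negative-sample weighting."""
    pos_rate = labels.mean()
    w_neg = pos_rate / max(1.0 - pos_rate, eps)   # down-weight abundant negatives
    p = np.clip(probs, eps, 1.0 - eps)
    loss_pos = -((1.0 - p) ** gamma) * np.log(p)          # hard positives weigh more
    loss_neg = -w_neg * (p ** gamma) * np.log(1.0 - p)    # scaled negative term
    return float(np.where(labels == 1, loss_pos, loss_neg).mean())
```

As with any focal loss, confident correct predictions contribute almost nothing, while confident mistakes dominate the average.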
Ahmet Sen , Miquel Aguirre , Peter H Charlton , Laurent Navarro , Stéphane Avril , Jordi Alastruey
{"title":"Machine learning-based pulse wave analysis for classification of circle of Willis topology: An in silico study with 30,618 virtual subjects","authors":"Ahmet Sen , Miquel Aguirre , Peter H Charlton , Laurent Navarro , Stéphane Avril , Jordi Alastruey","doi":"10.1016/j.bspc.2024.106999","DOIUrl":"10.1016/j.bspc.2024.106999","url":null,"abstract":"<div><h3>Background and Objective</h3><div>The topology of the circle of Willis (CoW) is crucial in cerebral circulation and significantly impacts patient management. Incomplete CoW structures increase stroke risk and post-stroke damage. Current detection methods using computed tomography and magnetic resonance scans are often invasive, time-consuming, and costly. This study investigated the use of machine learning (ML) to classify CoW topology through arterial blood flow velocity pulse waves (PWs), which can be noninvasively measured with Doppler ultrasound.</div></div><div><h3>Methods</h3><div>A database of <em>in silico</em> PWs from 30,618 virtual subjects, aged 25 to 75 years, with complete and incomplete CoW topologies was created and validated against <em>in vivo</em> data. Seven ML architectures were trained and tested using 45 combinations of carotid, vertebral and brachial artery PWs, with varying levels of artificial noise to mimic real-world measurement errors. SHapley Additive exPlanations (SHAP) were used to interpret the predictions made by the artificial neural network (ANN) models.</div></div><div><h3>Results</h3><div>A convolutional neural network achieved the highest accuracy (98%) for CoW topology classification using a combination of one vertebral and one common carotid velocity PW without noise. Under a 20% noise-to-signal ratio, a multi-layer perceptron model had the highest prediction rate (79%). All ML models performed best for topologies lacking posterior communication arteries. 
Mean and peak systolic velocities were identified as key features influencing ANN predictions.</div></div><div><h3>Conclusions</h3><div>ML-based PW analysis shows significant potential for efficient, noninvasive CoW topology detection via Doppler ultrasound. The dataset, post-processing tools, and ML code are freely available to support further research.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142537843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
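Two ingredients of the study's protocol — corrupting pulse waves at a target noise-to-signal ratio to mimic measurement error, and extracting the mean/peak velocity features SHAP flagged as influential — can be sketched as below. The function names and the std-based definition of the noise-to-signal ratio are assumptions for illustration; the paper's exact noise model is not given in the abstract.

```python
import numpy as np

def add_measurement_noise(pw, nsr, rng):
    """Corrupt a pulse wave with zero-mean Gaussian noise whose std is
    `nsr` times the signal std (e.g. nsr=0.20 for the 20% condition)."""
    return pw + rng.normal(0.0, nsr * pw.std(), pw.shape)

def pw_features(pw):
    """Mean and peak velocity -- the two features SHAP identified as
    most influential for the ANN predictions."""
    return float(pw.mean()), float(pw.max())
```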
{"title":"Neighborhood transformer for sparse-view X-ray 3D foot reconstruction","authors":"Wei Wang , Li An , Mingquan Zhou , Gengyin Han","doi":"10.1016/j.bspc.2024.107082","DOIUrl":"10.1016/j.bspc.2024.107082","url":null,"abstract":"<div><div>In medical imaging, Sparse-View X-ray 3D reconstruction is crucial for analyzing and diagnosing foot bone structures. However, existing methods face limitations when handling sparse view data and complex bone structures. To enhance reconstruction accuracy and detail preservation, this paper proposes an innovative Sparse-View X-ray 3D foot reconstruction technique based on Neighborhood Transformer. A new Neighborhood Position Encoding strategy is introduced, which divides X-ray images into local regions using a window mechanism and precisely selects these regions through nearest neighbor methods, thereby capturing detailed features in the images. Building upon existing NeRF (Neural Radiance Fields) technology, the paper introduces the Neighborhood Transformer module. This module significantly improves the expression capability for complex foot bone structures through depthwise separable convolutions and a dual-branch local–global Transformer network. 
Additionally, an adaptive weight learning strategy is applied within the Transformer module, enabling the model to better capture long-distance dependencies, thereby improving its ability to handle sparse view data.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
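The Neighborhood Position Encoding described above builds on NeRF's standard frequency encoding of coordinates; the windowed, nearest-neighbor region selection is the paper's contribution and is not shown here. A minimal sketch of the standard encoding it extends:

```python
import numpy as np

def positional_encoding(x, n_freqs=6):
    """Standard NeRF frequency encoding of coordinates in [-1, 1]:
    gamma(x) = (x, sin(2^k * pi * x), cos(2^k * pi * x)) for k < n_freqs."""
    feats = [x]
    for k in range(n_freqs):
        feats.append(np.sin((2.0 ** k) * np.pi * x))
        feats.append(np.cos((2.0 ** k) * np.pi * x))
    # Each of the 1 + 2*n_freqs blocks has the input's last-axis width.
    return np.concatenate(feats, axis=-1)
```

For 3-D points and 6 frequency bands this maps each point to 3 × (1 + 12) = 39 features, letting the downstream network represent high-frequency structure.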
{"title":"An edge association graph network conforming to embryonic morphology for automated grading of day 3 human embryos","authors":"Shuailin You , Chi Dong , Bo Huang , Langyuan Fu , Yaqiao Zhang , Lihong Han , Xinmeng Rong , Ying Jin , Dongxu Yi , Huazhe Yang , Zhiying Tian , Wenyan Jiang","doi":"10.1016/j.bspc.2024.107108","DOIUrl":"10.1016/j.bspc.2024.107108","url":null,"abstract":"<div><h3>Purpose</h3><div>Embryo grading is an essential component of assisted reproductive technologies and a crucial prerequisite for successful embryo transfer. An effective embryo grading method can help embryologists automatically evaluate embryo quality and select high-quality embryos.</div></div><div><h3>Methods</h3><div>This study enrolled 5836 embryonic images from 2880 couples who had undergone assisted reproductive therapy at our hospital between September 2016 and March 2023. We proposed an edge association graph (EAG) model comprising a two-stage network: (i) a first-stage edge segmentation network that quantifies the edges of embryo cells and fragments; and (ii) a second-stage network that uses the quantitative edge information to construct an edge relationship graph and extracts spatial topological information with a graph neural network (GNN) to accomplish embryo grading. Five embryologists with varying years of experience were invited to grade embryos against the EAG on an independent test set.</div></div><div><h3>Results and conclusions</h3><div>Our EAG achieved automatic four-category embryo grading and outperformed existing state-of-the-art methods on both microscopic (accuracy = 0.8696, recall = 0.8484, precision = 0.8883 and F1-score = 0.8658) and time-lapse (accuracy = 0.7671, recall = 0.6843, precision = 0.7663 and F1-score = 0.6918) embryo images. 
The EAG also outperformed the average of the five embryologists, indicating its superiority for embryo grading and its good potential for clinical assisted-reproduction applications.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
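The second stage described above extracts spatial topology from the edge relationship graph with a GNN. A generic mean-aggregation message-passing layer — the kind of building block such a network composes, not the authors' architecture — looks like this:

```python
import numpy as np

def gnn_layer(node_feats, adj, weight):
    """One message-passing layer with a self-loop:
    h' = ReLU(mean(neighbor feats + own feats) @ W)."""
    deg = adj.sum(axis=1, keepdims=True) + 1.0        # +1 counts the self-loop
    agg = (adj @ node_feats + node_feats) / deg       # average over neighborhood
    return np.maximum(agg @ weight, 0.0)              # ReLU nonlinearity
```

Stacking a few such layers lets each node's feature vector absorb the topology of its graph neighborhood, which is what the EAG relies on to encode spatial relationships between cell and fragment edges.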
{"title":"Image fusion by multiple features in the propagated filtering domain","authors":"Jiao Du , Weisheng Li , Yidong Peng , Qianjing Zong","doi":"10.1016/j.bspc.2024.106990","DOIUrl":"10.1016/j.bspc.2024.106990","url":null,"abstract":"<div><div>Visual high-contrast information, such as texture and color, contained in the input biomedical imaging data should be preserved as much as possible in the fused image. To preserve the high-intensity textural and color information of the input images, this paper proposes an image fusion method that applies propagated filtering and multiple features to the two input modalities. The method has three steps. First, propagated filtering with different window sizes decomposes the inputs into multiscale coarse images containing edge information and multiscale detail images containing textural information. Second, an entropy-based rule combines the coarse images so that the fused coarse image retains more edge information, while a multiple-features rule based on luminance, orientation and phase combines the detail images, preserving textural and color information with less distortion. Finally, the fused image is obtained by adding the fused coarse and fused detail images in the spatial domain. 
Experimental results on the fusion of co-registered biomedical images show that the proposed method preserves high-intensity textural information and true color information.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142537999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
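The two fusion rules in the second step can be sketched as follows. Hedges: the entropy weighting below is a plausible global form of an "entropy-based rule", and the per-pixel max-absolute rule is a simplified stand-in for the paper's luminance/orientation/phase rule — neither is the authors' exact formulation.

```python
import numpy as np

def entropy(img, bins=64):
    """Shannon entropy (bits) of an image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fuse_coarse(a, b):
    """Entropy-weighted blend of two coarse (edge) layers: the layer
    carrying more information contributes more."""
    ea, eb = entropy(a), entropy(b)
    return (ea * a + eb * b) / (ea + eb)

def fuse_detail(a, b):
    """Per-pixel max-absolute rule for the detail (texture) layers --
    a simplified stand-in for the multi-feature rule."""
    return np.where(np.abs(a) >= np.abs(b), a, b)
```

The fused image would then be the sum of the fused coarse and fused detail layers, as in the method's third step.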
{"title":"Identifying diagnostic biomarkers for Erythemato-Squamous diseases using explainable machine learning","authors":"Zheng Wang , Li Chang , Tong Shi , Hui Hu , Chong Wang , Kaibin Lin , Jianglin Zhang","doi":"10.1016/j.bspc.2024.107101","DOIUrl":"10.1016/j.bspc.2024.107101","url":null,"abstract":"<div><div>Erythemato-squamous diseases (ESD) are a heterogeneous group encompassing six clinically and histopathologically overlapping subtypes, representing a substantial diagnostic challenge within dermatology. The existing body of research reveals a notable void in detailed examinations that deconvolute the distinct features endemic to each ESD variant. To bridge this knowledge gap, our study applied Explainable Artificial Intelligence (XAI) techniques to systematically elucidate the intricate diagnostic biomarker profiles unique to each ESD category. Methodological rigor was fortified through the employment of stratified cross-validation, bolstering the robustness and generalizability of our diagnostic model. The CatBoost classifier emerged as a preeminent algorithm within our analytical framework, manifesting exemplary classification prowess with an accuracy of 99.07%, precision of 99.12%, recall of 98.89%, and an F1 score of 98.97%. Central to our inquiry was the deployment of Shapley Additive exPlanations (SHAP) values, which afforded granular insight into the contributory weight of individual diagnostic biomarkers for each ESD subtype. Our findings delineated pivotal diagnostic biomarkers including saw-tooth appearance of retes (STAR), melanin incontinence (MI), vacuolisation and damage of basal layer (VDBL), polygonal papules (PP), and band-like infiltrate (BLI) as instrumental in the identification of seborrheic dermatitis, while Psoriasis was characterized by fibrosis of the papillary dermis (FPD), thinning of the suprapapillary epidermis (TSE), elongation of the rete ridges (ERR), clubbing of the rete ridges (CRR), and notable psoriatic spongiosis. 
This integrative approach, leveraging the predictive power of CatBoost coupled with the interpretability afforded by SHAP, marks a significant advance in the nuanced diagnostic landscape of ESD.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
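The study uses SHAP values to weigh each biomarker's contribution per ESD subtype. A cheap, model-agnostic cousin of that idea — permutation importance, which measures the accuracy drop when one feature column is shuffled — can be sketched without any SHAP dependency. This illustrates per-feature attribution only; it is not the SHAP computation the paper performs.

```python
import numpy as np

def permutation_importance(predict, X, y, col, rng):
    """Importance of feature `col` as the accuracy lost when that
    column is randomly permuted (breaking its link to the labels)."""
    base = float((predict(X) == y).mean())
    Xp = X.copy()
    rng.shuffle(Xp[:, col])                # in-place shuffle of one column
    return base - float((predict(Xp) == y).mean())
```

A feature the model ignores scores exactly zero; a feature the model depends on scores roughly its contribution to accuracy.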
{"title":"Lung vessel segmentation and abnormality classification based on hybrid mobile-Lenet using CT image","authors":"Sadish Sendil Murugaraj , Kalpana Vadivelu , Prabhu Thirugnana Sambandam , B. Santhosh Kumar","doi":"10.1016/j.bspc.2024.107072","DOIUrl":"10.1016/j.bspc.2024.107072","url":null,"abstract":"<div><div>Studies have shown that viral pneumonia affects the lung vessels. Nevertheless, the diagnostic ability of chest Computed Tomography (CT) imaging parameters is rarely leveraged. This research introduces the Hybrid Mobile LeNet (HM-LeNet) for lung vessel segmentation and abnormality classification. First, the input CT image is obtained from the database. The image is then preprocessed with a Non-Local Means (NLM) filter, after which lung lobe segmentation is carried out using K-Net, followed by pulmonary vessel segmentation. Finally, features are extracted to classify lung abnormalities with the designed HM-LeNet, an integration of MobileNet and LeNet. Lung abnormalities are classified as emphysema, nodules, or pulmonary embolisms. HM-LeNet attained a maximum accuracy, True Positive Rate (TPR), and True Negative Rate (TNR) of 92.7%, 96.6%, and 94.7%, respectively.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
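The TPR and TNR figures reported for HM-LeNet are the standard sensitivity and specificity rates. For reference, their one-vs-rest (binary) computation — which is how they would be obtained per abnormality class before averaging — is:

```python
import numpy as np

def tpr_tnr(y_true, y_pred):
    """True Positive Rate (sensitivity) and True Negative Rate
    (specificity) for binary labels, computed from the confusion counts."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = int(((y_true == 1) & (y_pred == 1)).sum())
    fn = int(((y_true == 1) & (y_pred == 0)).sum())
    tn = int(((y_true == 0) & (y_pred == 0)).sum())
    fp = int(((y_true == 0) & (y_pred == 1)).sum())
    return tp / (tp + fn), tn / (tn + fp)
```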