{"title":"A3-DualUD: Source-free unsupervised domain adaptation via anatomical anchor alignment and dual-path uncertainty denoising for cross-modality medical image segmentation","authors":"Jie Zhou , Yulong Shi , Lin Qi , Xue Jiang , Shouliang Qi , Wei Qian","doi":"10.1016/j.cmpb.2025.109017","DOIUrl":"10.1016/j.cmpb.2025.109017","url":null,"abstract":"<div><h3>Background and Objective:</h3><div>Automated medical image segmentation across different imaging modalities, such as MRI and CT scans, plays a crucial role in improving diagnostic accuracy, treatment planning, and surgical navigation. However, significant domain discrepancies across different modalities, stemming from various acquisition protocols and physical principles, pose a major challenge for automated segmentation algorithms. Unsupervised domain adaptation (UDA) has gained considerable attention for addressing challenges in cross-modality medical image segmentation from different medical instruments, particularly in scenarios where target domain labels are unavailable. However, traditional UDA methods require simultaneous access to source domain data, which limits their practical applicability in medical scenarios where source data is often inaccessible due to privacy concerns and storage restrictions.</div></div><div><h3>Methods:</h3><div>In this work, we propose a novel source-free unsupervised domain adaptation (SFUDA) framework, which introduces anatomical anchor alignment (A<span><math><msup><mrow></mrow><mrow><mn>3</mn></mrow></msup></math></span>) and dual-path uncertainty denoising (DualUD) to facilitate knowledge transfer from a pre-trained source model to an unlabeled target domain, without requiring access to source data.
In the A<span><math><msup><mrow></mrow><mrow><mn>3</mn></mrow></msup></math></span> stage, we extract the source domain anatomical anchors that represent characteristics of each target class and align them with the target domain features through a bidirectional constraint approach, thereby reducing feature-level distribution discrepancies between the source and target domains. Additionally, we introduce the DualUD stage to provide sufficient and reliable supervision for domain adaptation.</div></div><div><h3>Results:</h3><div>Extensive experiments conducted on cross-modality multi-organ segmentation tasks, using the abdominal and cardiac datasets, demonstrate the state-of-the-art performance of our proposed approach.</div></div><div><h3>Conclusion:</h3><div>Anatomical anchor alignment and dual-path uncertainty denoising effectively address the critical challenges of domain discrepancies and privacy preservation, offering a robust and effective solution for achieving SFUDA. The source code is publicly available at: <span><span>https://github.com/derekshiii/A3-DualUD</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"271 ","pages":"Article 109017"},"PeriodicalIF":4.8,"publicationDate":"2025-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144879468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
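The anchor-alignment idea described in this abstract can be illustrated with a minimal, hypothetical sketch (not the authors' implementation): class-wise mean feature vectors play the role of anchors, and target features are scored by their squared distance to the anchor of their pseudo-label. All function names and the toy 2-D features below are invented for illustration.

```python
import numpy as np

def class_anchors(features, labels, n_classes):
    """Per-class mean feature vector, playing the role of an anatomical anchor."""
    return np.stack([features[labels == c].mean(axis=0) for c in range(n_classes)])

def alignment_loss(target_features, pseudo_labels, anchors):
    """Mean squared distance from each target feature to its class anchor."""
    diffs = target_features - anchors[pseudo_labels]
    return float((diffs ** 2).sum(axis=1).mean())

# toy 2-D features for two classes
feats = np.array([[0.0, 0.0], [0.2, 0.0], [1.0, 1.0], [1.2, 1.0]])
labs = np.array([0, 0, 1, 1])
anchors = class_anchors(feats, labs, n_classes=2)

# target features sitting exactly on the anchors incur zero alignment loss
loss = alignment_loss(anchors, np.array([0, 1]), anchors)
```

Minimizing such a loss pulls target features toward the source-derived class representatives; the paper's bidirectional constraint presumably also pulls in the reverse direction, which this sketch omits.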
{"title":"VariMix: A variety-guided data mixing framework for explainable medical image classifications","authors":"Xiangyu Xiong , Yue Sun , Xiaohong Liu , Wei Ke , Chan-Tong Lam , Qinquan Gao , Tong Tong , Shuo Li , Tao Tan","doi":"10.1016/j.cmpb.2025.109016","DOIUrl":"10.1016/j.cmpb.2025.109016","url":null,"abstract":"<div><h3>Background and objective:</h3><div>Modern deep neural networks are highly over-parameterized, necessitating the use of data augmentation techniques to prevent overfitting and enhance generalization. Generative adversarial networks (GANs) are popular for synthesizing visually realistic images. However, these synthetic images often lack diversity and may have ambiguous class labels. Recent data mixing strategies address some of these issues by mixing image labels based on salient regions. Since the main diagnostic information is not always contained within the salient regions, we aim to address the resulting label mismatches in medical image classifications.</div></div><div><h3>Methods:</h3><div>We propose a variety-guided data mixing framework (VariMix), which exploits an absolute difference map (ADM) to address the label mismatch problems of mixed medical images. VariMix generates ADM using the image-to-image (I2I) GAN across multiple classes and allows for bidirectional mixing operations between the training samples.</div></div><div><h3>Results:</h3><div>The proposed VariMix achieves the highest accuracy of 99.30% and 94.60% with a SwinT V2 classifier on a Chest X-ray (CXR) dataset and a Retinal dataset, respectively. It also achieves the highest accuracy of 87.73%, 99.28%, 95.13%, and 95.81% with a ConvNeXt classifier on a Breast Ultrasound (US) dataset, a CXR dataset, a Retinal dataset, and a Maternal-Fetal US dataset, respectively. 
Furthermore, the medical expert evaluation on generated images shows the great potential of our proposed I2I GAN in improving the accuracy of medical image classifications.</div></div><div><h3>Conclusions:</h3><div>Extensive experiments demonstrate the superiority of VariMix compared with the existing GAN- and Mixup-based methods on four public datasets using Swin Transformer V2 and ConvNeXt architectures. Furthermore, by projecting the source image to the hyperplanes of the classifiers, the proposed I2I GAN can generate hyperplane difference maps between the source image and the hyperplane image, demonstrating its ability to interpret medical image classifications. The source code is provided in <span><span>https://github.com/yXiangXiong/VariMix</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"271 ","pages":"Article 109016"},"PeriodicalIF":4.8,"publicationDate":"2025-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144865636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
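The label-mismatch fix that VariMix describes — tying the mixed label to a difference map rather than to salient regions — can be sketched in a simplified, hypothetical form: a normalized absolute difference map (ADM) weights a pixel-wise mix, and the soft label is the mean mixing weight. The paper's actual mixing rule may differ; all names here are illustrative.

```python
import numpy as np

def mix_with_map(img_a, img_b, adm):
    """Pixel-wise mix of two images guided by a normalized absolute-difference map."""
    w = (adm - adm.min()) / (adm.max() - adm.min() + 1e-8)  # weights in [0, 1]
    mixed = w * img_b + (1.0 - w) * img_a
    lam = float(w.mean())                # fraction of the mix contributed by img_b
    return mixed, (1.0 - lam, lam)       # soft label over (class_a, class_b)

# toy 4x4 "images": the difference map selects the right half of img_b
a = np.zeros((4, 4))
b = np.ones((4, 4))
adm = np.zeros((4, 4))
adm[:, 2:] = 1.0
mixed, soft_label = mix_with_map(a, b, adm)
```

Because the label weight is derived from the same map that drives the pixel mix, the label stays consistent with the actual image content, which is the mismatch the abstract targets.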
{"title":"Multi-electrode catheter PFA: Computational modeling and in vivo validation of lesion dynamics under energy-dose variants","authors":"Xingkai Ji , Kaihao Gu , Hairui Wang , Tuo Zhou , Xiaomei Wu , Shengjie Yan","doi":"10.1016/j.cmpb.2025.109025","DOIUrl":"10.1016/j.cmpb.2025.109025","url":null,"abstract":"<div><h3>Background and Objective</h3><div>As a non-thermal ablation modality, pulsed field ablation (PFA) has been widely applied in solid tumor treatment and cardiac ablation. Multiple factors are known to influence the effects of PFA, including pulse parameters, catheter configurations, and tissue properties. However, limited attention has been paid to the impact of discharge modes on PFA.</div></div><div><h3>Methods</h3><div>A sequential discharge mode (M2) was designed based on a multi-electrode ablation catheter and compared with the conventional simultaneous discharge mode (M1). To evaluate the ablation efficacy of different modes, three-dimensional ablation models incorporating two multi-electrode catheter configurations (circular and linear) were established. A comparative numerical simulation analysis was conducted to investigate the electric potential and field distribution characteristics of M1 and M2, followed by animal experiments to validate the simulation results. The comprehensive performance of the two discharge modes was assessed in terms of ablation efficacy and muscle contraction.</div></div><div><h3>Results</h3><div>M2 demonstrated superior ablation efficacy over M1 in both catheter configurations. For the circular catheter, the minimum lesion depth (M1: 2.08 ± 0.10 mm vs. M2: 2.64 ± 0.46 mm, <em>p</em> = 0.006) was significantly improved, and the uniformity of lesion depth (M1: 0.18 vs. M2: 0.16) was partially enhanced. Similarly, the linear catheter showed improvements in the minimum lesion depth (M1: 2.81 ± 0.21 mm vs. M2: 3.41 ± 0.44 mm, <em>p</em> = 0.001) and the uniformity of lesion depth (M1: 0.13 vs. M2: 0.11).
This advantage may be attributed to M2’s higher pulsed energy output and optimized ablation strategy. Notably, although M2 induced slightly higher muscle contraction amplitudes than M1, both modes maintained low contraction levels without significantly compromising ablation efficacy. Specifically, peak accelerations were recorded as M1: 0.030 ± 0.028 g vs. M2: 0.039 ± 0.016 g (<em>p</em> = 0.369) for the circular catheter and M1: 0.031 ± 0.017 g vs. M2: 0.040 ± 0.016 g (<em>p</em> = 0.242) for the linear catheter.</div></div><div><h3>Conclusion</h3><div>Compared to the conventional discharge mode, the sequential discharge mode is recognized as a superior ablation strategy, achieving enhancements in lesion dimensions and uniformity while maintaining a controlled safety profile.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"271 ","pages":"Article 109025"},"PeriodicalIF":4.8,"publicationDate":"2025-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144885441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward autonomous robotic gastroscopy with a novel interventional keypoint and polyp detection system","authors":"Yuying Liu , Guang Feng , Qi He , Xin Chen , Jinhua Li , Bangmao Wang , Siyang Zuo","doi":"10.1016/j.cmpb.2025.109013","DOIUrl":"10.1016/j.cmpb.2025.109013","url":null,"abstract":"<div><h3>Background and objective:</h3><div>Gastric cancer has high incidence and mortality rates globally. Gastroscopy is widely considered the gold standard for screening gastric diseases. However, accurate and stable identification of gastric diseases under endoscopy remains challenging due to its heavy reliance on the experience and operational skills of the endoscopist. The development of robotic gastroscopes offers an effective solution for screening upper gastrointestinal diseases.</div></div><div><h3>Methods:</h3><div>In this paper, we propose an innovative deep learning-based framework for autonomous robotic gastroscopy, integrating a combined anchor point (intervention keypoint) and polyp detection system (named KP-YOLO). This system enables the robotic gastroscope to operate autonomously and intelligently, facilitating both intervention navigation and diagnostic tasks. To support this framework, we have created and made publicly available two fully annotated datasets: a real gastroscopy dataset containing 3,454 images and a high-fidelity upper gastrointestinal (UGI) phantom dataset with 2,144 images.</div></div><div><h3>Results:</h3><div>The detection model trained on the real gastroscopy dataset achieved a precision (P) of 0.987 and a mean average precision (mAP) of 0.985 for intervention keypoints. For polyp detection, the model attained a P of 0.994 and an mAP of 0.964. 
Experimental validation was conducted utilizing a high-fidelity UGI phantom equipped with a robotic arm, motion control module, and Olympus endoscopic system.</div></div><div><h3>Conclusion:</h3><div>Guided by the proposed detection system, the robotic gastroscope successfully performs autonomous intervention navigation and intelligent polyp detection on the UGI phantom. This work demonstrates the potential of leveraging artificial intelligence to enhance the autonomy and intelligence of robotic gastroscopy, paving the way for its future clinical application.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"271 ","pages":"Article 109013"},"PeriodicalIF":4.8,"publicationDate":"2025-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144860892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MGDMCL: A multi-omics integration framework based on masked graph dynamic learning and multi-granularity feature contrastive learning for biomedical classification","authors":"Wengxiang Chen , Hang Qiu","doi":"10.1016/j.cmpb.2025.109024","DOIUrl":"10.1016/j.cmpb.2025.109024","url":null,"abstract":"<div><h3>Background and Objective</h3><div>Integrating multi-omics data facilitates a comprehensive understanding of the etiology of complex diseases, which is critical for achieving precision medicine. Recently, graph-based approaches have been increasingly leveraged in integrative multi-omics data analysis due to their robust expressive capability. However, these methods still face two limitations: 1) relying predominantly on a fixed sample similarity graph (SSG) to obtain omics-specific feature representation, and 2) insufficiently exploring the interrelations between different features from various omics. To this end, we propose MGDMCL, an innovative framework for integrating multiple omics data based on masked graph dynamic learning and multi-granularity feature contrastive learning.</div></div><div><h3>Methods</h3><div>For each type of omics data, a masked graph dynamic learning approach adaptively adjusts the SSG structure to learn a reliable SSG, obtaining multi-layer feature representations from various graph convolutional network (GCN) layers. Then, the multi-layer feature representations of different omics are concatenated at the layer level, and a multi-granularity feature contrastive learning is designed to learn consensus feature representations of specific layers.
Furthermore, to enhance classification robustness, the true class probability is introduced to evaluate the classification confidence of consensus feature representations from different layers.</div></div><div><h3>Results</h3><div>Extensive experiments on five public datasets, including LGG, ROSMAP, LUSC, BRCA, and KIPAN, show that MGDMCL significantly surpasses state-of-the-art baselines in various biomedical classification tasks.</div></div><div><h3>Conclusions</h3><div>The proposed MGDMCL provides a more effective approach for integrative multi-omics data analysis, exhibiting great potential in biomedical classification applications. The implementation code of MGDMCL has been released at <span><span>https://www.github.com/wxchen-uestc/MGDMCL</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"271 ","pages":"Article 109024"},"PeriodicalIF":4.8,"publicationDate":"2025-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144865634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
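A sample similarity graph (SSG) of the kind MGDMCL adapts can be built, in its simplest static form, from cosine similarities with top-k sparsification. This sketch is illustrative only — the paper learns and masks the graph dynamically, which is not shown here — and the function name and toy data are invented.

```python
import numpy as np

def sample_similarity_graph(x, k):
    """Adjacency of a cosine-similarity graph, keeping each sample's top-k neighbours."""
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)   # unit-normalize rows
    sim = xn @ xn.T                                     # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)                      # exclude self-loops
    adj = np.zeros(sim.shape)
    for i in range(len(x)):
        adj[i, np.argsort(sim[i])[-k:]] = 1.0           # connect top-k neighbours
    return adj

# two loose clusters of omics-like feature vectors
x = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
adj = sample_similarity_graph(x, k=1)
```

A GCN layer then propagates sample features along this adjacency; the fixed graph built above is exactly the limitation the abstract points to, which motivates adjusting the SSG during training.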
{"title":"Mixed reality in orthopaedics: A systematic review and meta-analysis on clinical and technological aspects","authors":"Diana Rito , Roberto M. Barbosa , Manuel Vieira da Silva , Carlos Sampaio Macedo , Victor Alves , Cristina P. Santos","doi":"10.1016/j.cmpb.2025.109011","DOIUrl":"10.1016/j.cmpb.2025.109011","url":null,"abstract":"<div><div>This article presents a systematic review of mixed reality (MR)-based tools in orthopaedics, motivated by their growing clinical relevance and the lack of existing reviews that comprehensively assess both clinical outcomes and technical implementations. A literature search was conducted in October 2024, covering Scopus, IEEE, PubMed, Web of Science, Medline in EBSCO, ClinicalTrials.gov, and WHO International Clinical Trials Registry Platform databases, yielding 48 eligible studies. Data on clinical and technological aspects were extracted, including clinical applications, clinical and procedural outcomes, hardware, registration, tracking, and evaluation metrics. MR-based tools led to better clinical and procedural outcomes compared to traditional procedures, including reduced intraoperative time and radiation exposure, safer and less invasive interventions, and fewer postoperative complications. Surgical navigation was the most employed application, especially in spinal and joint procedures, likely due to their complexity and demand for high precision. Most procedures did not require intraoperative imaging, reflecting a shift towards radiation-free workflows enabled by MR guidance. Visualisations typically involved holographic surgical plans rendered through head-mounted displays, which served as the primary devices for registration and tracking, often relying on manual techniques with limited accuracy. Despite their promise, several limitations hinder MR’s broader adoption.
Studies often lack high-level clinical evidence, standardised evaluation protocols, and robust registration and tracking methods, due to user dependency and system instability. Future studies should include control groups and report clinical and procedural outcomes to facilitate clinical translation. There is also space to explore new automated, non-invasive registration strategies and hybrid tracking solutions to mitigate occlusion, enhance alignment stability, and improve procedural reliability.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"271 ","pages":"Article 109011"},"PeriodicalIF":4.8,"publicationDate":"2025-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144864996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient segmentation of intraoperative anatomical landmarks in laparoscopic cholecystectomy based on deep learning","authors":"Zongjin Yang , Jun Wen , Deqing Huang , Aisen Yang , Rong Zhang , Bo Ren , Zhenhao Chen , Yirui Yin , Na Qin","doi":"10.1016/j.cmpb.2025.109012","DOIUrl":"10.1016/j.cmpb.2025.109012","url":null,"abstract":"<div><h3>Background:</h3><div>Laparoscopic cholecystectomy is the gold standard procedure for the treatment of benign gallbladder diseases, but carries a risk of intraoperative bile duct injury, which can lead to surgical failure and cause significant social and economic burden. When surgeons rely on visual inspection to identify tissue structures during laparoscopic cholecystectomy, subjective factors such as experience, psychological state, and fatigue can compromise the intraoperative recognition of anatomic landmarks. The positioning of anatomical landmarks by the surgeon in the pre-dissection phase of laparoscopic cholecystectomy is relatively vague and requires step-by-step exploration as the surgery progresses, becoming clearer in the post-dissection phase.</div></div><div><h3>Methods:</h3><div>To alleviate the pressure on surgeons during procedures, this study aimed to achieve real-time intraoperative navigation during laparoscopic cholecystectomy by dynamically identifying and annotating key anatomical landmarks, including the gallbladder, Calot’s triangle, and common bile duct. The study proposed a novel semantic segmentation neural network called the Channel Attention Pyramid Scene Parsing Plus Network. The network utilized pooling layers with different scales and assigned non-equal weights to extract feature information. Additionally, a spatial channel attention module was added to accurately capture important features or contextual information, improving the model’s performance and effectiveness.
Training was conducted using video frames from the pre-dissection phase, while testing used video frames from the post-dissection phase.</div></div><div><h3>Results:</h3><div>All models were subjected to a 10-fold cross-validation on 1425 selected frames from 132 laparoscopic cholecystectomy videos, with training and validation conducted in two separate laparoscopic cholecystectomy stages. The proposed model CPPN achieved a mean intersection over union of 0.855 (±0.03), outperforming other segmentation neural networks. The model demonstrated optimal performance across most metrics, with an intersection over union of 0.881 (±0.01) for the gallbladder, 0.769 (±0.03) for Calot’s triangle, and 0.813 (±0.02) for the common bile duct.</div></div><div><h3>Conclusion:</h3><div>The intelligent segmentation algorithm proposed in this study has achieved the highest mean intersection over union, surpassing other models. It shows promise in assisting surgeons with the real-time assessment of critical anatomical landmarks within Calot’s triangle. This advancement could potentially reduce the risk of common bile duct injury by facilitating a more intuitive dissection of Calot’s triangle. Furthermore, it aids in the visual inspection during laparoscopic cholecystectomy procedures.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"271 ","pages":"Article 109012"},"PeriodicalIF":4.8,"publicationDate":"2025-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144865587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
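The reported metric, intersection over union averaged over classes (mean IoU), has a standard definition that can be computed as follows; the toy masks are hypothetical.

```python
import numpy as np

def iou(pred, gt, cls):
    """Intersection over union of one class in integer label masks."""
    p, g = pred == cls, gt == cls
    union = np.logical_or(p, g).sum()
    return np.logical_and(p, g).sum() / union if union else float("nan")

def mean_iou(pred, gt, classes):
    """Average IoU over classes, ignoring classes absent from both masks."""
    return float(np.nanmean([iou(pred, gt, c) for c in classes]))

# toy 2x2 predicted and ground-truth label masks with three classes
pred = np.array([[0, 1],
                 [1, 2]])
gt   = np.array([[0, 1],
                 [2, 2]])
miou = mean_iou(pred, gt, classes=[0, 1, 2])   # (1.0 + 0.5 + 0.5) / 3
```

In the paper's setting the per-class IoUs (gallbladder, Calot's triangle, common bile duct) are averaged the same way to give the reported mean of 0.855.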
{"title":"Quality assessment of optical coherence tomography angiography images with Relative-distance-based Patch Distribution Modeling (R-PaDiM)","authors":"Meltem Esengönül, Teresa Finisterra Araújo, Natasa Jeremic, Azin Zarghami, Hrvoje Bogunović, Ursula Schmidt-Erfurth","doi":"10.1016/j.cmpb.2025.108955","DOIUrl":"10.1016/j.cmpb.2025.108955","url":null,"abstract":"<div><h3>Background and Objective</h3><div>Optical coherence tomography angiography (OCTA) is a non-invasive technique to visualize changes in retinal microvasculature. The relationship between the retinal blood flow and systemic diseases can be studied using biomarkers extracted from the OCTA en-face images. However, these images are prone to various artifacts that can hinder the image quality. Thus, image quality assessment is essential to improve the retinal biomarker analysis pipeline.</div></div><div><h3>Methods</h3><div>In this paper, we propose a novel method called Relative-distance-based Patch Distribution Modeling (R-PaDiM) that compares the probabilistic representation of good and bad quality images in a relative manner using patch features extracted from pre-trained encoders. With our method, it is possible to both classify OCTA en-face images into good and bad quality, and obtain patch-wise quality score maps to highlight the bad quality regions within the image for better interpretability. Five different backbones are thoroughly investigated for image quality assessment on two public datasets and one private dataset: DRAC Challenge, OCTA-25K-IQA-SEG, and MeyeHeart.</div></div><div><h3>Results</h3><div>We achieve state-of-the-art results on all backbones and datasets. Our best results are observed on the DRAC Challenge dataset with a WideResNet-50 backbone that has an accuracy of 98.0 ± 1.1, an AUC of 99.3 ± 0.5, and a Kappa score of 86.4 ± 7.6.
We also report a higher correlation between the patch-wise quality scores and the artifact affected regions compared to other methods.</div></div><div><h3>Conclusions</h3><div>The proposed method is highly robust and efficient in obtaining quality scores that correspond to the specific regions within the image related to the classification decision. Due to its versatile nature, it can be applied to many other tasks with various imaging modalities for better explainability of the deep learning models.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"271 ","pages":"Article 108955"},"PeriodicalIF":4.8,"publicationDate":"2025-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144865635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
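The PaDiM family of methods fits a Gaussian to patch features from reference images and scores new patches by Mahalanobis distance. The sketch below shows only that baseline idea with synthetic features — the paper's relative-distance variant, which compares against both good- and bad-quality distributions, is not reproduced, and all names are illustrative.

```python
import numpy as np

def fit_patch_gaussian(patch_features, eps=0.01):
    """Fit mean and regularized inverse covariance to per-patch feature vectors."""
    mu = patch_features.mean(axis=0)
    cov = np.cov(patch_features, rowvar=False) + eps * np.eye(patch_features.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, cov_inv):
    """Distance of one patch feature from the fitted Gaussian."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# synthetic 3-D features of good-quality patches
rng = np.random.default_rng(0)
good = rng.normal(size=(500, 3))
mu, cov_inv = fit_patch_gaussian(good)

score_typical = mahalanobis(np.zeros(3), mu, cov_inv)
score_artifact = mahalanobis(np.full(3, 6.0), mu, cov_inv)  # far-off patch scores higher
```

Computing such a score per patch and reshaping to the image grid yields exactly the kind of patch-wise quality map the abstract describes.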
{"title":"Self-attention-guided residual deep neural network with multi-scale dilated feature extraction for automated gallbladder disease diagnosis in ultrasound imaging","authors":"Suzain Rashid , Chandan J Das , Anshika Chauhan , Garima Aggarwal , Rakesh Chandra Joshi , Radim Burget , Malay Kishore Dutta","doi":"10.1016/j.cmpb.2025.109020","DOIUrl":"10.1016/j.cmpb.2025.109020","url":null,"abstract":"<div><h3>Background and Objective</h3><div>Gallbladder diseases present a critical challenge and can cause serious complications if not diagnosed and treated promptly. Diseases including gallstones, inflammation, and other abnormalities may lead to a number of significant consequences, such as bile duct obstructions, chronic pain, infections, and in severe cases, life-threatening sepsis or gallbladder cancer.</div></div><div><h3>Methods</h3><div>This study presents a novel deep learning-based diagnostic model using an attention-guided residual convolutional neural network to classify nine distinct gallbladder diseases, including gallstones, abdomen and retroperitoneal pathology, cholecystitis, membranous and gangrenous cholecystitis, perforation, polyps and cholesterol crystals, adenomyomatosis, carcinoma, and various causes of gallbladder wall thickening. It combines multi-scale feature extraction using dilated convolutions, attention mechanisms for refined feature selection, and residual connections to preserve spatial information and prevent vanishing gradient issues.</div></div><div><h3>Results</h3><div>Experimental findings show an accuracy of 99.17%, and a recall of 98.94%. These findings demonstrate the reliability of the model in distinguishing between different gallbladder pathologies.
The presented methodology offers a rapid, accurate, and scalable diagnostic tool to help clinicians identify gallbladder diseases from complex radiological medical images efficiently and with high accuracy.</div></div><div><h3>Conclusions</h3><div>The proposed work has the potential to advance patient care and provides a foundation for robust, efficient, and scalable AI-assisted gallbladder disease diagnosis in clinical practice. The source code is publicly available in the GitHub repository<span><span><sup>2</sup></span></span>.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"271 ","pages":"Article 109020"},"PeriodicalIF":4.8,"publicationDate":"2025-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144864997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
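The dilated convolutions this abstract relies on enlarge the receptive field without adding parameters: a kernel of size k with dilation d spans (k − 1)·d + 1 inputs. A minimal 1-D sketch of the mechanism (illustrative only, not the paper's network):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Valid' 1-D convolution with (dilation - 1) gaps between kernel taps."""
    span = (len(kernel) - 1) * dilation + 1    # receptive field of each output
    out = [float(np.dot(x[i:i + span:dilation], kernel))
           for i in range(len(x) - span + 1)]
    return np.array(out), span

x = np.arange(8, dtype=float)
k = np.ones(3)
y1, rf1 = dilated_conv1d(x, k, dilation=1)     # ordinary 3-tap sum, receptive field 3
y2, rf2 = dilated_conv1d(x, k, dilation=2)     # same 3 taps, receptive field 5
```

Stacking layers with increasing dilation lets a network aggregate context at multiple scales, which is what "multi-scale feature extraction using dilated convolutions" refers to.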
{"title":"BioFace3D: An end-to-end open-source software for automated extraction of potential 3D facial biomarkers from MRI scans","authors":"Álvaro Heredia-Lidón , Luis M. Echeverry-Quiceno , Alejandro González , Noemí Hostalet , Edith Pomarol-Clotet , Juan Fortea , Mar Fatjó-Vilas , Neus Martínez-Abadías , Xavier Sevillano , Alzheimer’s Disease Neuroimaging Initiative","doi":"10.1016/j.cmpb.2025.109010","DOIUrl":"10.1016/j.cmpb.2025.109010","url":null,"abstract":"<div><h3>Background and Objectives:</h3><div>Facial dysmorphologies have emerged as potential critical indicators in the diagnosis and prognosis of genetic, psychotic, and rare disorders. While some conditions present with severe dysmorphologies, others exhibit subtler traits that may not be perceivable to the human eye, requiring the use of precise quantitative tools for accurate identification. Manual annotation remains time-consuming and prone to inter- and intra-observer variability. Existing tools provide partial solutions, but no end-to-end automated pipeline integrates the full process of 3D facial biomarker extraction from magnetic resonance imaging.</div></div><div><h3>Methods and Results:</h3><div>We introduce BioFace3D, an open-source pipeline designed to automate the discovery of potential 3D facial biomarkers from magnetic resonance imaging. BioFace3D consists of three automated modules: (i) 3D facial model extraction from magnetic resonance images, (ii) deep learning-based registration of homologous anatomical landmarks, and (iii) computation of geometric morphometric biomarkers from landmark coordinates.</div></div><div><h3>Conclusions:</h3><div>The evaluation of BioFace3D is performed both at a global level and within each individual module, through a series of exhaustive experiments using proprietary and public datasets, demonstrating the robustness and reliability of the results obtained by the tool. 
Source code, along with trained models, can be found at <span><span>https://bitbucket.org/cv_her_lasalle/bioface3d</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"271 ","pages":"Article 109010"},"PeriodicalIF":4.8,"publicationDate":"2025-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144852704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
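Geometric morphometric biomarkers of the kind computed in module (iii) are conventionally derived after Procrustes alignment of landmark configurations. A minimal ordinary-Procrustes sketch (the standard method with hypothetical function names, not BioFace3D's exact code):

```python
import numpy as np

def procrustes_align(src, dst):
    """Ordinary Procrustes: optimally rotate, scale, and translate src onto dst."""
    s = src - src.mean(axis=0)                 # center both landmark sets
    d = dst - dst.mean(axis=0)
    u, sig, vt = np.linalg.svd(s.T @ d)
    r = u @ vt                                 # optimal rotation (orthogonal Procrustes)
    scale = sig.sum() / (s ** 2).sum()         # least-squares scale factor
    return scale * s @ r + dst.mean(axis=0)

# a unit-square landmark set, then rotated 45 degrees, scaled by 2, and shifted
sq = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
t = np.pi / 4
rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
moved = 2.0 * sq @ rot.T + np.array([3.0, -1.0])

aligned = procrustes_align(moved, sq)          # recovers the original square
```

After such alignment, the residual coordinates capture pure shape variation, from which morphometric descriptors can be computed independently of head pose and scanner coordinates.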