Medical Image Analysis: Latest Articles

DSFNet: Dual-source and spatiotemporal-feature fusion network for bedside diagnosis of lung injuries with electrical impedance tomography
IF 11.8 | Q1 (Medicine)
Medical Image Analysis Pub Date: 2026-05-01 Epub Date: 2026-02-17 DOI: 10.1016/j.media.2026.104003
Zhiwei Li, Yang Wu, Kai Liu, Yingqi Zhang, Bai Chen, Hao Wang, Jiafeng Yao
{"title":"DSFNet: Dual-source and spatiotemporal-feature fusion network for bedside diagnosis of lung injuries with electrical impedance tomography","authors":"Zhiwei Li ,&nbsp;Yang Wu ,&nbsp;Kai Liu ,&nbsp;Yingqi Zhang ,&nbsp;Bai Chen ,&nbsp;Hao Wang ,&nbsp;Jiafeng Yao","doi":"10.1016/j.media.2026.104003","DOIUrl":"10.1016/j.media.2026.104003","url":null,"abstract":"<div><div>Electrical Impedance Tomography (EIT) is a promising tool for non-invasive and real-time lung monitoring, but the data heterogeneity and low spatial resolution limit its ability to diagnose lung injuries. To address these challenges, we propose DSFNet, a dual-source and spatiotemporal-feature fusion network that integrates EIT spatiotemporal boundary voltages and ventilation images to classify four lung conditions, including healthy (HE), pneumothorax (PN), pleural effusion (PE), and pneumonia (PM). The temporal dynamics modeling (TDM) module and multi-head self-attention (MHSA) module are designed to improve the temporal feature extraction and representation of DSFNet. We construct a novel EIT simulation dataset describing pathological respiratory patterns and introduce a hybrid data learning strategy that combines simulation data (SD) and experimental data (ED) to address the small sample problem and improve the accuracy of model classification. The DSFNet trained with the SD + 25 % ED pattern achieved an accuracy of 97.78 % and 96.55 % on the dynamic phantom dataset and the clinical human dataset, respectively, demonstrating its excellent performance and robustness. The SHAP analysis further revealed the feature contributions of the input data. This study provides an effective approach for bedside lung injury diagnosis based on multi-source EIT data.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"110 ","pages":"Article 104003"},"PeriodicalIF":11.8,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146777317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
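A minimal PyTorch sketch of the temporal multi-head self-attention idea described in the DSFNet abstract above: a standard MHSA block applied to a sequence of per-frame EIT boundary-voltage features. The frame size of 208 measurements, the head count, and the residual/normalization layout are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TemporalMHSABlock(nn.Module):
    """Self-attention over the time axis of an EIT boundary-voltage sequence."""
    def __init__(self, feat_dim: int = 208, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feat_dim), one feature vector per EIT frame.
        attn_out, _ = self.attn(x, x, x)
        return self.norm(x + attn_out)  # residual connection, then layer norm

# Toy usage: 2 recordings, 50 frames, 208 boundary-voltage measurements per frame.
x = torch.randn(2, 50, 208)
print(TemporalMHSABlock()(x).shape)  # torch.Size([2, 50, 208])
```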
Multimodal sparse fusion transformer network with spatio-temporal decoupling for breast tumor classification
IF 11.8 | Q1 (Medicine)
Medical Image Analysis Pub Date: 2026-05-01 Epub Date: 2026-01-28 DOI: 10.1016/j.media.2026.103966
Jiahao Xu, Shuxin Zhuang, Yi He, Haolin Wang, Zhemin Zhuang, Huancheng Zeng
{"title":"Multimodal sparse fusion transformer network with spatio-temporal decoupling for breast tumor classification","authors":"Jiahao Xu ,&nbsp;Shuxin Zhuang ,&nbsp;Yi He ,&nbsp;Haolin Wang ,&nbsp;Zhemin Zhuang ,&nbsp;Huancheng Zeng","doi":"10.1016/j.media.2026.103966","DOIUrl":"10.1016/j.media.2026.103966","url":null,"abstract":"<div><div>Accurate analysis of tumor morphology, vascularity, and tissue stiffness under multimodal ultrasound imaging plays a critical role in the diagnosis of breast cancer. However, manual interpretation across multiple modalities is time-consuming and heavily dependent on the radiologist’s expertise. Computer-aided classification offers an efficient alternative, yet remains challenging due to significant modality heterogeneity, inconsistent image quality, and redundant information across modalities. To address these issues, we propose a novel Multimodal Sparse Fusion Transformer Network (MSFT-Net). First, a Spatio-Temporal Decoupling Attention architecture (STDA) is introduced to disentangle and extract dynamic and static features from different modalities along spatial and temporal dimensions, capturing modality-specific motion and morphological characteristics independently. Second, the Mixed-Scale Convolution Module (MSCM) obtains tumor features at multiple scales, enhancing geometric detail representation and improving receptive field coverage. Third, the Sparse Cross-Attention Module (SCAM) adaptively retains the most effective query-key interactions between modalities, thereby facilitating the aggregation of high-quality features for robust multimodal information fusion. MSFT-Net is trained and tested on a curated dataset comprising multimodal breast tumor videos collected from 458 patients, including ultrasound (US), superb microvascular imaging (SMI), and strain elastography (SE), and its generalizability is further validated on the public BraTS'21 MRI dataset. Extensive experiments demonstrate that MSFT-Net achieves superior performance in multimodal breast tumor classification compared to state-of-the-art methods, providing fast and reliable support for radiologists in diagnostic tasks.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"110 ","pages":"Article 103966"},"PeriodicalIF":11.8,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146072192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
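A rough illustration of the sparse cross-attention idea described in the MSFT-Net abstract above: for each query token from one modality, only the top-k query-key scores against another modality are kept before the softmax. The shapes, the value of k, and the US/SMI naming are illustrative assumptions; this is not the SCAM module itself.

```python
import torch
import torch.nn.functional as F

def sparse_cross_attention(q, k, v, top_k: int = 8):
    # q: (B, Nq, D) tokens from one modality; k, v: (B, Nk, D) from another modality.
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)        # (B, Nq, Nk)
    kth_best = scores.topk(top_k, dim=-1).values[..., -1:]         # k-th largest score per query
    scores = scores.masked_fill(scores < kth_best, float("-inf"))  # drop non-top-k interactions
    return F.softmax(scores, dim=-1) @ v                           # (B, Nq, D)

q = torch.randn(2, 16, 64)       # e.g., US tokens
k = v = torch.randn(2, 32, 64)   # e.g., SMI tokens
print(sparse_cross_attention(q, k, v).shape)  # torch.Size([2, 16, 64])
```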
Fréchet radiomic distance (FRD): A versatile metric for comparing medical imaging datasets
IF 11.8 | Q1 (Medicine)
Medical Image Analysis Pub Date: 2026-05-01 Epub Date: 2026-01-24 DOI: 10.1016/j.media.2026.103943
Nicholas Konz, Richard Osuala, Preeti Verma, Yuwen Chen, Hanxue Gu, Haoyu Dong, Yaqian Chen, Andrew Marshall, Lidia Garrucho, Kaisar Kushibar, Daniel M. Lang, Gene S. Kim, Lars J. Grimm, John M. Lewin, James S. Duncan, Julia A. Schnabel, Oliver Diaz, Karim Lekadir, Maciej A. Mazurowski
{"title":"Fréchet radiomic distance (FRD): A versatile metric for comparing medical imaging datasets","authors":"Nicholas Konz ,&nbsp;Richard Osuala ,&nbsp;Preeti Verma ,&nbsp;Yuwen Chen ,&nbsp;Hanxue Gu ,&nbsp;Haoyu Dong ,&nbsp;Yaqian Chen ,&nbsp;Andrew Marshall ,&nbsp;Lidia Garrucho ,&nbsp;Kaisar Kushibar ,&nbsp;Daniel M. Lang ,&nbsp;Gene S. Kim ,&nbsp;Lars J. Grimm ,&nbsp;John M. Lewin ,&nbsp;James S. Duncan ,&nbsp;Julia A. Schnabel ,&nbsp;Oliver Diaz ,&nbsp;Karim Lekadir ,&nbsp;Maciej A. Mazurowski","doi":"10.1016/j.media.2026.103943","DOIUrl":"10.1016/j.media.2026.103943","url":null,"abstract":"<div><div>Determining whether two sets of images belong to the same or different distributions or domains is a crucial task in modern medical image analysis and deep learning; for example, to evaluate the output quality of image generative models. Currently, metrics used for this task either rely on the (potentially biased) choice of some downstream task, such as segmentation, or adopt task-independent perceptual metrics (<em>e.g.</em>, Fréchet Inception Distance/FID) from natural imaging, which we show insufficiently capture anatomical features. To this end, we introduce a new perceptual metric tailored for medical images, FRD (Fréchet Radiomic Distance), which utilizes standardized, clinically meaningful, and interpretable image features. We show that FRD is superior to other image distribution metrics for a range of medical imaging applications, including out-of-domain (OOD) detection, the evaluation of image-to-image translation (by correlating more with downstream task performance as well as anatomical consistency and realism), and the evaluation of unconditional image generation. Moreover, FRD offers additional benefits such as stability and computational efficiency at low sample sizes, sensitivity to image corruptions and adversarial attacks, feature interpretability, and correlation with radiologist-perceived image quality. Additionally, we address key gaps in the literature by presenting an extensive framework for the multifaceted evaluation of image similarity metrics in medical imaging—including the first large-scale comparative study of generative models for medical image translation—and release an accessible codebase to facilitate future research. Our results are supported by thorough experiments spanning a variety of datasets, modalities, and downstream tasks, highlighting the broad potential of FRD for medical image analysis.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"110 ","pages":"Article 103943"},"PeriodicalIF":11.8,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146048368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
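The Fréchet part of FRD can be made concrete with a short sketch: fit a Gaussian to each dataset's per-image feature vectors and compute the closed-form Fréchet (2-Wasserstein) distance between the two Gaussians, the same formula FID uses but over radiomic rather than Inception features. The random arrays and the 93-dimensional feature size below are placeholders; the paper's radiomic feature extraction is not reproduced.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Fréchet distance between Gaussians fit to two (n_samples, n_features) arrays."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)  # matrix square root of the product
    if np.iscomplexobj(covmean):
        covmean = covmean.real                            # discard tiny imaginary parts
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

# Toy usage with random stand-ins for radiomic features (100 images x 93 features each).
rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(100, 93)), rng.normal(loc=0.3, size=(100, 93))))
```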
PH2ST: Prompt-guided hypergraph learning for spatial transcriptomics prediction in whole slide images
IF 11.8 | Q1 (Medicine)
Medical Image Analysis Pub Date: 2026-05-01 Epub Date: 2026-02-26 DOI: 10.1016/j.media.2026.104008
Yi Niu, Jiashuai Liu, Yingkang Zhan, Jiangbo Shi, Di Zhang, Marika Reinius, Ines Machado, Mireia Crispin-Ortuzar, Jialun Wu, Chen Li, Zeyu Gao
{"title":"PH2ST: Prompt-guided hypergraph learning for spatial transcriptomics prediction in whole slide images","authors":"Yi Niu ,&nbsp;Jiashuai Liu ,&nbsp;Yingkang Zhan ,&nbsp;Jiangbo Shi ,&nbsp;Di Zhang ,&nbsp;Marika Reinius ,&nbsp;Ines Machado ,&nbsp;Mireia Crispin-Ortuzar ,&nbsp;Jialun Wu ,&nbsp;Chen Li ,&nbsp;Zeyu Gao","doi":"10.1016/j.media.2026.104008","DOIUrl":"10.1016/j.media.2026.104008","url":null,"abstract":"<div><div>Spatial Transcriptomics (ST) reveals the spatial distribution of gene expression in tissues, offering critical insights into biological processes and disease mechanisms. However, the high cost, limited coverage, and technical complexity of current ST technologies restrict their widespread use in clinical and research settings, making obtaining high-resolution transcriptomic profiles across large tissue areas challenging. Predicting ST from H&amp;E-stained histology images has emerged as a promising alternative to address these limitations but remains challenging due to the heterogeneous relationship between histomorphology and gene expression, which is affected by substantial variability across patients and tissue sections. In response, we propose PH2ST, a prompt-guided hypergraph learning framework, which leverages limited ST signals to guide multi-scale histological representation learning for accurate and robust spatial gene expression prediction. Extensive evaluations on two public ST datasets and multiple prompt sampling strategies simulating real-world scenarios demonstrate that PH2ST not only outperforms existing state-of-the-art methods, but also shows strong potential for practical applications such as imputing missing spots, ST super-resolution, and local-to-global prediction, highlighting its value for scalable and cost-effective spatial gene expression mapping in biomedical contexts.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"110 ","pages":"Article 104008"},"PeriodicalIF":11.8,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147334544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
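For background on the hypergraph learning component, the sketch below implements a generic normalized hypergraph convolution (HGNN-style incidence propagation), not PH2ST's prompt-guided construction; the incidence matrix, feature sizes, and edge weights are illustrative.

```python
import numpy as np

def hypergraph_conv(X, H, Theta, edge_w=None):
    """One hypergraph convolution. X: (n_nodes, d_in), H: (n_nodes, n_edges) incidence, Theta: (d_in, d_out)."""
    n_nodes, n_edges = H.shape
    w = np.ones(n_edges) if edge_w is None else edge_w
    Dv = (H * w).sum(axis=1)                                   # weighted node degrees
    De = H.sum(axis=0)                                         # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(Dv, 1e-8)))
    De_inv = np.diag(1.0 / np.maximum(De, 1e-8))
    A = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ Theta, 0.0)                      # propagate, project, ReLU

# Toy usage: 6 spots/patches, 3 hyperedges, 4-dim features projected to 2 dims.
rng = np.random.default_rng(0)
H = (rng.random((6, 3)) > 0.5).astype(float)
H[0, 0] = 1.0                                                  # make sure node 0 joins at least one hyperedge
print(hypergraph_conv(rng.normal(size=(6, 4)), H, rng.normal(size=(4, 2))).shape)  # (6, 2)
```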
Reliable uncertainty quantification for 2D/3D anatomical landmark localization using multi-output conformal prediction
IF 11.8 | Q1 (Medicine)
Medical Image Analysis Pub Date: 2026-05-01 Epub Date: 2026-01-27 DOI: 10.1016/j.media.2026.103953
Jef Jonkers, Frank Coopman, Luc Duchateau, Glenn Van Wallendael, Sofie Van Hoecke
{"title":"Reliable uncertainty quantification for 2D/3D anatomical landmark localization using multi-output conformal prediction","authors":"Jef Jonkers ,&nbsp;Frank Coopman ,&nbsp;Luc Duchateau ,&nbsp;Glenn Van Wallendael ,&nbsp;Sofie Van Hoecke","doi":"10.1016/j.media.2026.103953","DOIUrl":"10.1016/j.media.2026.103953","url":null,"abstract":"<div><div>Automatic anatomical landmark localization in medical imaging requires not just accurate predictions but reliable uncertainty quantification for effective clinical decision support. Current uncertainty quantification approaches often fall short, particularly when combined with normality assumptions, systematically underestimating total predictive uncertainty. This paper introduces conformal prediction as a framework for reliable uncertainty quantification in anatomical landmark localization, addressing a critical gap in automatic landmark localization. We present two novel approaches guaranteeing finite-sample validity for multi-output prediction: multi-output regression-as-classification conformal prediction (M-R2CCP) and its variant multi-output regression to classification conformal prediction set to region (M-R2C2R). Unlike conventional methods that produce axis-aligned hyperrectangular or ellipsoidal regions, our approaches generate flexible, non-convex prediction regions that better capture the underlying uncertainty structure of landmark predictions. Through extensive empirical evaluation across multiple 2D and 3D datasets, we demonstrate that our methods consistently outperform existing multi-output conformal prediction approaches in both validity and efficiency. This work represents a significant advancement in reliable uncertainty estimation for anatomical landmark localization, providing clinicians with trustworthy confidence measures for their diagnoses. While developed for medical imaging, these methods show promise for broader applications in multi-output regression problems.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"110 ","pages":"Article 103953"},"PeriodicalIF":11.8,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146071492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
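As a point of reference for the conformal prediction framing, here is a minimal split-conformal baseline for a single 2D landmark: the nonconformity score is the Euclidean error on a calibration set, and its finite-sample-corrected quantile gives a circular prediction region with guaranteed marginal coverage. This is deliberately the simple isotropic construction, not the paper's M-R2CCP/M-R2C2R regions; the localizer outputs, data, and alpha below are placeholders.

```python
import numpy as np

def calibrate_radius(pred_cal: np.ndarray, true_cal: np.ndarray, alpha: float = 0.1) -> float:
    """pred_cal, true_cal: (n, 2) predicted and ground-truth landmark coordinates."""
    scores = np.linalg.norm(pred_cal - true_cal, axis=1)    # nonconformity = Euclidean error
    n = len(scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)  # finite-sample corrected level
    return float(np.quantile(scores, q_level, method="higher"))

def covers(pred: np.ndarray, truth: np.ndarray, radius: float) -> bool:
    return float(np.linalg.norm(pred - truth)) <= radius

# Toy usage: calibrate on 200 cases, then test whether one prediction region covers the truth.
rng = np.random.default_rng(1)
true_cal = rng.uniform(0, 256, size=(200, 2))
pred_cal = true_cal + rng.normal(scale=3.0, size=(200, 2))
r = calibrate_radius(pred_cal, true_cal, alpha=0.1)
print(r, covers(np.array([100.0, 120.0]), np.array([101.0, 122.0]), r))
```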
AtlasMorph: Learning conditional deformable templates for brain MRI
IF 11.8 | Q1 (Medicine)
Medical Image Analysis Pub Date: 2026-05-01 Epub Date: 2025-12-11 DOI: 10.1016/j.media.2025.103893
Marianne Rakic, Andrew Hoopes, Mazdak S. Abulnaga, Mert R. Sabuncu, John V. Guttag, Adrian V. Dalca, for the Alzheimer’s Disease Neuroimaging Initiative
{"title":"AtlasMorph: Learning conditional deformable templates for brain MRI","authors":"Marianne Rakic ,&nbsp;Andrew Hoopes ,&nbsp;Mazdak S. Abulnaga ,&nbsp;Mert R. Sabuncu ,&nbsp;John V. Guttag ,&nbsp;Adrian V. Dalca ,&nbsp;for the Alzheimer’s Disease Neuroimaging Initiative","doi":"10.1016/j.media.2025.103893","DOIUrl":"10.1016/j.media.2025.103893","url":null,"abstract":"<div><div>Deformable templates, or atlases, are images that represent a prototypical anatomy for a population, and are often enhanced with probabilistic anatomical label maps. They are commonly used in medical image analysis for population studies and computational anatomy tasks such as registration and segmentation. Because developing a template is a computationally expensive process, relatively few templates are available. As a result, analysis is often conducted with sub-optimal templates that are not truly representative of the study population, especially when there are large variations within this population.</div><div>We propose a machine learning framework that uses convolutional registration neural networks to efficiently learn a function that outputs templates conditioned on subject-specific attributes, such as age and sex. We also leverage segmentations, when available, to produce anatomical segmentation maps for the resulting templates. The learned network can also be used to register subject images to the templates. We demonstrate our method on a compilation of 3D brain MRI datasets, and show that it can learn high-quality templates that are representative of populations. We find that annotated conditional templates enable better registration than their unlabeled unconditional counterparts, and outperform other templates construction methods.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"110 ","pages":"Article 103893"},"PeriodicalIF":11.8,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145731917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
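To make the conditioning idea tangible, the sketch below maps subject attributes (age and sex, as in the abstract) to a small 2D template image with a toy decoder. The real AtlasMorph couples such a conditional template generator with a convolutional registration network and deformation losses, none of which are shown; all sizes and names here are hypothetical.

```python
import torch
import torch.nn as nn

class ConditionalTemplateDecoder(nn.Module):
    """Toy decoder: subject attributes -> low-resolution template image."""
    def __init__(self, n_attrs: int = 2, base: int = 32):
        super().__init__()
        self.base = base
        self.fc = nn.Linear(n_attrs, base * 8 * 8)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(base, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(base, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(base, 1, 3, padding=1),
        )

    def forward(self, attrs: torch.Tensor) -> torch.Tensor:
        # attrs: (batch, n_attrs), e.g. normalized age and binary sex.
        x = self.fc(attrs).view(-1, self.base, 8, 8)
        return self.up(x)  # (batch, 1, 32, 32) conditional template

templates = ConditionalTemplateDecoder()(torch.tensor([[0.6, 1.0], [0.2, 0.0]]))
print(templates.shape)  # torch.Size([2, 1, 32, 32])
```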
Cellflow: Advancing pathological image augmentation from spatial views to temporal trajectories
IF 11.8 | Q1 (Medicine)
Medical Image Analysis Pub Date: 2026-05-01 Epub Date: 2026-02-11 DOI: 10.1016/j.media.2026.103995
Zeyu Liu, Tianyi Zhang, Yufang He, Bo Wen, Haoran Guo, Peng Zhang, Chenbin Ma, Shangqing Lyu, Yunlu Feng, Yu Zhao, Yueming Jin, Dachun Zhao, Guanglei Zhang
{"title":"Cellflow: Advancing pathological image augmentation from spatial views to temporal trajectories","authors":"Zeyu Liu ,&nbsp;Tianyi Zhang ,&nbsp;Yufang He ,&nbsp;Bo Wen ,&nbsp;Haoran Guo ,&nbsp;Peng Zhang ,&nbsp;Chenbin Ma ,&nbsp;Shangqing Lyu ,&nbsp;Yunlu Feng ,&nbsp;Yu Zhao ,&nbsp;Yueming Jin ,&nbsp;Dachun Zhao ,&nbsp;Guanglei Zhang","doi":"10.1016/j.media.2026.103995","DOIUrl":"10.1016/j.media.2026.103995","url":null,"abstract":"<div><div>Deep learning has advanced pathological image analysis but remains constrained by limited annotated data, especially for fine-grained diagnostic tasks such as tumor subtyping, grading, and cellularity assessment. While data augmentation alleviates this issue, existing methods are restricted to spatial manipulations that lack morphological plausibility and overlook the temporal attributes of pathological state transition. To address this gap, we propose <strong>Cellflow</strong>, the first temporal-aware generative framework for pathological image augmentation. Cellflow models pathological transition as smooth trajectories on a biological image manifold, generating intermediate states via a stair-based diffusion bridge with classifier-guided probability-flow ordinary differential equations. This design produces morphologically plausible sequences that capture both cellular details and tissue-level architecture. Evaluated on 7 diverse datasets across organs, staining modalities, and diagnostic tasks, Cellflow consistently outperforms 6 spatial augmentation methods and 4 state-of-the-art generative models, yielding improved classification performance, higher image fidelity, and preservation of temporal coherence. Quantitative cellularity analysis provides additional validation of the biological authenticity of transition sequences. By introducing temporal modeling into pathological data augmentation, Cellflow establishes a paradigm shift from spatial manipulations to biologically grounded temporal trajectories that advances robust model training, rare disease exploration, and educational simulation in computational pathology. The Code is available at <span><span>https://github.com/Rowerliu/Cellflow</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"110 ","pages":"Article 103995"},"PeriodicalIF":11.8,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146160398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ESM-AnatTractNet: Advanced deep learning model of true positive eloquent white matter tractography to improve preoperative evaluation of pediatric epilepsy surgery
IF 11.8 | Q1 (Medicine)
Medical Image Analysis Pub Date: 2026-05-01 Epub Date: 2026-01-29 DOI: 10.1016/j.media.2026.103969
Min-Hee Lee, Bohan Xiao, Soumyanil Banerjee, Hiroshi Uda, Yoon Ho Hwang, Csaba Juhász, Eishi Asano, Ming Dong, Jeong-Won Jeong
{"title":"ESM-AnatTractNet: Advanced deep learning model of true positive eloquent white matter tractography to improve preoperative evaluation of pediatric epilepsy surgery","authors":"Min-Hee Lee ,&nbsp;Bohan Xiao ,&nbsp;Soumyanil Banerjee ,&nbsp;Hiroshi Uda ,&nbsp;Yoon Ho Hwang ,&nbsp;Csaba Juhász ,&nbsp;Eishi Asano ,&nbsp;Ming Dong ,&nbsp;Jeong-Won Jeong","doi":"10.1016/j.media.2026.103969","DOIUrl":"10.1016/j.media.2026.103969","url":null,"abstract":"<div><div>Accurate preoperative identification of true positive white matter pathways involved in critical eloquent functions such as motor, language, and vision plays a vital role in minimizing the risk of postoperative functional deficits and improving postoperative functional outcomes in pediatric epilepsy surgery. This study proposes a novel deep learning model: “ESM-AnatTractNet” that can accurately classify true positive eloquent white matter pathways across preoperative diffusion weighted imaging tractography data of 85 drug-resistant epilepsy patients (age: 10.70 ± 4.41 years). To enhance geometric and anatomical consistency of true positive tract classification, the ESM-AnatTractNet integrated two features in a point-cloud-based framework, 1) electro-physiologically confirmed spatial coordinates using electrical stimulation mapping (ESM) and 2) anatomically-contexted labels of the end-to-end neural connection using a standard brain atlas. Its overall performance was validated by accurately classifying 14 eloquent functional areas in whole brain, objectively optimizing resection margins to preserve eloquent functions using Kalman filter, and precisely predicting postoperative language outcomes using canonical correlation. Our ESM-AnatTractNet outperformed other baseline models, achieving an accuracy of 97% in correctly classifying eloquent areas within 10mm spatial resolution of clinical subdural grid electroencephalography. The Kalman filter analysis achieved 94% accuracy in predicting no deficits when the ESM-AnatTractNet-defined preservation zones were not resected. Postoperative decrease in language-related white matter connection efficacy defined by the ESM-AnatTractNet analysis was significantly associated with worse postoperative language outcome (R=0.73, p &lt; 0.001). Our findings demonstrate that the ESM-AnatTractNet improves non-invasive localization of true positive eloquent white matter pathways, supporting its potential to enhance current preoperative evaluation of pediatric epilepsy surgery.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"110 ","pages":"Article 103969"},"PeriodicalIF":11.8,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146072191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Revisiting lesion tracking in 3D total body photography
IF 11.8 | Q1 (Medicine)
Medical Image Analysis Pub Date: 2026-05-01 Epub Date: 2026-01-28 DOI: 10.1016/j.media.2026.103963
Wei-Lun Huang, Minghao Xue, Zhiyou Liu, Davood Tashayyod, Jun Kang, Amir Gandjbakhche, Misha Kazhdan, Mehran Armand
{"title":"Revisiting lesion tracking in 3D total body photography","authors":"Wei-Lun Huang ,&nbsp;Minghao Xue ,&nbsp;Zhiyou Liu ,&nbsp;Davood Tashayyod ,&nbsp;Jun Kang ,&nbsp;Amir Gandjbakhche ,&nbsp;Misha Kazhdan ,&nbsp;Mehran Armand","doi":"10.1016/j.media.2026.103963","DOIUrl":"10.1016/j.media.2026.103963","url":null,"abstract":"<div><div>Melanoma is the most deadly form of skin cancer. Tracking the evolution of nevi and detecting new lesions across the body is essential for the early detection of melanoma. Despite prior work on longitudinal tracking of skin lesions in 3D total body photography, there are still several challenges, including 1) low accuracy for finding correct lesion pairs across scans, 2) sensitivity to noisy lesion detection, and 3) lack of large-scale datasets with numerous annotated lesion pairs. We propose a framework that takes in a pair of 3D textured meshes, matches lesions in the context of total body photography, and identifies unmatchable lesions. We start by computing correspondence maps bringing the source and target meshes to a template mesh. Using these maps to define source/target signals over the template domain, we construct a flow field aligning the mapped signals. The initial correspondence maps are then refined by advecting forward/backward along the flow field. Finally, lesion assignment is performed using the refined correspondence maps. We propose the first large-scale dataset for skin lesion tracking with 25K lesion pairs across 198 subjects. The proposed method achieves a success rate of 90.1% (at 10 mm criterion) for all pairs of annotated lesions and a matching accuracy of 98.1% for subjects with more than 200 lesions.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"110 ","pages":"Article 103963"},"PeriodicalIF":11.8,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146072193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
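The final lesion-assignment step can be illustrated under a simplifying assumption: once lesions from the two scans are mapped into a common space, they are matched by minimum-cost assignment on Euclidean distance, with pairs beyond a threshold (10 mm, matching the paper's success criterion) declared unmatchable. The correspondence-map and flow-field refinement are not reproduced; the coordinates below are made up.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_lesions(src_xyz: np.ndarray, tgt_xyz: np.ndarray, max_dist_mm: float = 10.0):
    """Match lesions across scans; unmatched target lesions are candidate new lesions."""
    cost = np.linalg.norm(src_xyz[:, None, :] - tgt_xyz[None, :, :], axis=-1)  # pairwise distances
    rows, cols = linear_sum_assignment(cost)                                   # minimum-cost assignment
    matches = [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist_mm]
    unmatched_src = sorted(set(range(len(src_xyz))) - {i for i, _ in matches})
    unmatched_tgt = sorted(set(range(len(tgt_xyz))) - {j for _, j in matches})
    return matches, unmatched_src, unmatched_tgt

src = np.array([[10.0, 0.0, 0.0], [50.0, 20.0, 5.0], [200.0, 90.0, 30.0]])
tgt = np.array([[11.0, 1.0, 0.0], [52.0, 19.0, 6.0]])
print(assign_lesions(src, tgt))  # ([(0, 0), (1, 1)], [2], [])
```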
VHU-Net: Variational hadamard U-Net for body MRI bias field correction
IF 11.8 | Q1 (Medicine)
Medical Image Analysis Pub Date: 2026-05-01 Epub Date: 2026-01-20 DOI: 10.1016/j.media.2026.103955
Xin Zhu, Ahmet Enis Cetin, Gorkem Durak, Batuhan Gundogdu, Ziliang Hong, Hongyi Pan, Ertugrul Aktas, Elif Keles, Hatice Savas, Aytekin Oto, Hiten Patel, Adam B. Murphy, Ashley Ross, Frank Miller, Baris Turkbey, Ulas Bagci
{"title":"VHU-Net: Variational hadamard U-Net for body MRI bias field correction","authors":"Xin Zhu ,&nbsp;Ahmet Enis Cetin ,&nbsp;Gorkem Durak ,&nbsp;Batuhan Gundogdu ,&nbsp;Ziliang Hong ,&nbsp;Hongyi Pan ,&nbsp;Ertugrul Aktas ,&nbsp;Elif Keles ,&nbsp;Hatice Savas ,&nbsp;Aytekin Oto ,&nbsp;Hiten Patel ,&nbsp;Adam B. Murphy ,&nbsp;Ashley Ross ,&nbsp;Frank Miller ,&nbsp;Baris Turkbey ,&nbsp;Ulas Bagci","doi":"10.1016/j.media.2026.103955","DOIUrl":"10.1016/j.media.2026.103955","url":null,"abstract":"<div><div>Bias field artifacts in magnetic resonance imaging (MRI) scans introduce spatially smooth intensity inhomogeneities that degrade image quality and hinder downstream analysis. To address this challenge, we propose a novel variational Hadamard U-Net (VHU-Net) for effective body MRI bias field correction. The encoder comprises multiple convolutional Hadamard transform blocks (ConvHTBlocks), each integrating convolutional layers with a Hadamard transform (HT) layer. Specifically, the HT layer performs channel-wise frequency decomposition to isolate low-frequency components, while a subsequent scaling layer and semi-soft thresholding mechanism suppress redundant high-frequency noise. To compensate for the HT layer’s inability to model inter-channel dependencies, the decoder incorporates an inverse HT-reconstructed transformer block, enabling global, frequency-aware attention for the recovery of spatially consistent bias fields. The stacked decoder ConvHTBlocks further enhance the capacity to reconstruct the underlying ground-truth bias field. Building on the principles of variational inference, we formulate a new evidence lower bound (ELBO) as the training objective, promoting sparsity in the latent space while ensuring accurate bias field estimation. Comprehensive experiments on body MRI datasets demonstrate the superiority of VHU-Net over existing state-of-the-art methods in terms of intensity uniformity. Moreover, the corrected images yield substantial downstream improvements in segmentation accuracy. Our framework offers computational efficiency, interpretability, and robust performance across multi-center datasets, making it suitable for clinical deployment. The codes are available at <span><span>https://github.com/Holmes696/Probabilistic-Hadamard-U-Net</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"110 ","pages":"Article 103955"},"PeriodicalIF":11.8,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146014237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
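A minimal numpy sketch of the frequency-style decomposition the abstract describes: a channel-wise Hadamard transform, plain soft thresholding of the non-DC coefficients (the paper uses a learned scaling layer and semi-soft thresholding instead), and the inverse transform. Signal length, threshold, and channel count are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_soft_threshold(x: np.ndarray, thresh: float = 0.1) -> np.ndarray:
    """x: (n_channels, length) with length a power of two."""
    n = x.shape[-1]
    H = hadamard(n) / np.sqrt(n)        # orthonormal, symmetric Hadamard matrix
    coeffs = x @ H                      # channel-wise forward transform
    dc, rest = coeffs[:, :1], coeffs[:, 1:]
    rest = np.sign(rest) * np.maximum(np.abs(rest) - thresh, 0.0)  # soft-threshold non-DC terms
    return np.concatenate([dc, rest], axis=1) @ H                  # inverse transform (H @ H = I)

x = np.random.default_rng(0).normal(size=(3, 64))
print(hadamard_soft_threshold(x).shape)  # (3, 64)
```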