Medical Image Analysis: latest articles

Identifying multilayer network hub by graph representation learning
IF 10.7, Q1 (Medicine)
Medical Image Analysis, Pub Date: 2025-01-16, DOI: 10.1016/j.media.2025.103463
Defu Yang, Minjeong Kim, Yu Zhang, Guorong Wu
Abstract: Recent advances in neuroimaging technology allow us to understand how the human brain is wired in vivo and how functional activity is synchronized across multiple regions. Growing evidence shows that the complexity of functional connectivity goes far beyond the widely used mono-layer network. Indeed, the hierarchical processing of information among distinct brain regions and across multiple channels calls for a more advanced multilayer model of the synchronization that underlies functional brain networks. However, principled approaches for characterizing network organization in the context of multilayer topologies remain largely unexplored. In this work, we present a novel multi-variate hub identification method that takes both intra- and inter-layer network topologies into account. Specifically, we put the spotlight on multilayer graph embeddings that allow us to separate connector hubs (which connect across network modules) from their peripheral nodes. The removal of these hub nodes breaks the entire multilayer brain network into a set of disconnected communities. We evaluated our multilayer hub identification method on task-based and resting-state functional images. Complementing ongoing findings based on mono-layer brain networks, our multilayer network analysis provides a new understanding of brain network topology that links functional connectivity with brain states and disease progression.
Citations: 0
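The abstract's central claim is that removing connector hubs fragments the network into disconnected communities. A minimal mono-layer analogue (the paper's method is multilayer and embedding-based; the toy graph and node names below are invented for illustration) shows this fragmentation effect:

```python
from collections import defaultdict

def connected_components(nodes, edges):
    """Count connected components via depth-first search."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, count = set(), 0
    for n in nodes:
        if n in seen:
            continue
        count += 1
        stack = [n]
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            stack.extend(adj[cur] - seen)
    return count

# Two triangle "modules" joined only through connector hub H (hypothetical).
nodes = ["a", "b", "c", "d", "e", "f", "H"]
edges = [("a", "b"), ("b", "c"), ("a", "c"),   # module 1
         ("d", "e"), ("e", "f"), ("d", "f"),   # module 2
         ("c", "H"), ("d", "H")]               # hub links the modules

print(connected_components(nodes, edges))  # 1: network is whole
# Removing the hub breaks the network into disconnected communities.
no_hub_nodes = [n for n in nodes if n != "H"]
no_hub_edges = [(u, v) for u, v in edges if "H" not in (u, v)]
print(connected_components(no_hub_nodes, no_hub_edges))  # 2
```

The paper's contribution is identifying such hubs jointly across layers via graph embeddings rather than on a single graph as here.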
Illuminating the unseen: Advancing MRI domain generalization through causality
IF 10.7, Q1 (Medicine)
Medical Image Analysis, Pub Date: 2025-01-16, DOI: 10.1016/j.media.2025.103459
Yunqi Wang, Tianjiao Zeng, Furui Liu, Qi Dou, Peng Cao, Hing-Chiu Chang, Qiao Deng, Edward S. Hui
Abstract: Deep learning methods have shown promise in accelerated MRI reconstruction but face significant challenges under domain shifts between training and testing datasets, such as changes in image contrast, anatomical region, and acquisition strategy. To address these challenges, we present the first domain generalization framework specifically designed for accelerated MRI reconstruction, aimed at improving robustness across unseen domains. The framework employs progressive strategies to enforce domain invariance, starting with image-level fidelity consistency to ensure robust reconstruction quality across domains, followed by feature alignment to capture domain-invariant representations. Advancing beyond these foundations, we propose a novel approach that enforces mechanism-level invariance, termed GenCA-MRI, which aligns intrinsic causal relationships within MRI data. We further develop a computational strategy that significantly reduces the complexity of causal alignment, ensuring its feasibility for real-world applications. Extensive experiments validate the framework's effectiveness, demonstrating both numerical and visual improvements over the baseline algorithm. GenCA-MRI delivers the best overall performance, achieving PSNR improvements of up to 2.15 dB on fastMRI and 1.24 dB on the IXI dataset at 8× acceleration, with superior performance in preserving anatomical details and mitigating the domain-shift problem.
Citations: 0
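The feature-alignment step the abstract mentions is typically realized with a distribution-distance penalty between domains. A generic sketch using the squared maximum mean discrepancy with an RBF kernel (the paper's actual loss, kernel, and bandwidth are not specified in the abstract) illustrates the idea:

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased squared maximum mean discrepancy with an RBF kernel.

    Small when X and Y come from the same distribution, large under
    a domain shift; minimizing it pulls feature distributions together.
    """
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
# Same distribution vs. a mean-shifted "domain" (synthetic features).
same = rbf_mmd2(rng.normal(0, 1, (200, 8)), rng.normal(0, 1, (200, 8)))
shift = rbf_mmd2(rng.normal(0, 1, (200, 8)), rng.normal(2, 1, (200, 8)))
print(same < shift)  # the shifted domain pair scores a larger MMD
```

In training, such a term would be added to the reconstruction loss so that encoder features from different scanners or contrasts become indistinguishable.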
VSNet: Vessel Structure-aware Network for hepatic and portal vein segmentation
IF 10.7, Q1 (Medicine)
Medical Image Analysis, Pub Date: 2025-01-16, DOI: 10.1016/j.media.2025.103458
Jichen Xu, Anqi Dong, Yang Yang, Shuo Jin, Jianping Zeng, Zhengqing Xu, Wei Jiang, Liang Zhang, Jiahong Dong, Bo Wang
Abstract: Identifying and segmenting the hepatic and portal veins (the two predominant vascular systems in the liver) from CT scans plays a crucial role in clinicians' preoperative planning of treatment strategies. However, existing segmentation models often struggle to capture fine details of minor veins. In this article, we introduce the Vessel Structure-aware Network (VSNet), a multi-task learning model with a vessel-growing decoder, to address this challenge. VSNet excels at accurate segmentation by capturing the topological features of minor veins while preserving correct connectivity from minor vessels to trunks. We also build and publish the largest dataset (303 cases) for hepatic and portal vessel segmentation. Through comprehensive experiments, we demonstrate that VSNet achieves the best Dice scores on our proposed dataset (0.824 for the hepatic vein and 0.807 for the portal vein), significantly outperforming other popular segmentation models. The source code and dataset are publicly available at https://github.com/XXYZB/VSNet.
Citations: 0
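The reported numbers are Dice coefficients, the standard overlap metric for segmentation. For reference, a minimal implementation on binary masks (toy masks below are invented; this is the metric, not the paper's model):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), dtype=int); a[:2, :] = 1   # 8 predicted voxels
b = np.zeros((4, 4), dtype=int); b[:1, :] = 1   # 4 true voxels, all inside a
print(round(float(dice(a, b)), 3))  # 2*4 / (8+4) = 0.667
```

A Dice of 0.824 thus means the predicted hepatic-vein mask and the ground truth overlap substantially relative to their combined volume; thin minor branches, which contribute few voxels, are exactly where such scores usually suffer.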
UnICLAM: Contrastive representation learning with adversarial masking for unified and interpretable Medical Vision Question Answering
IF 10.7, Q1 (Medicine)
Medical Image Analysis, Pub Date: 2025-01-15, DOI: 10.1016/j.media.2025.103464
Chenlu Zhan, Peng Peng, Hongwei Wang, Gaoang Wang, Yu Lin, Tao Chen, Hongsen Wang
Abstract: Medical Visual Question Answering (Medical-VQA) aims to assist doctors in decision-making when answering clinical questions about radiology images. Nevertheless, current models learn cross-modal representations with vision and text encoders residing in two separate spaces, which inevitably leads to indirect semantic alignment. In this paper, we propose UnICLAM, a Unified and Interpretable Medical-VQA model based on Contrastive Representation Learning with Adversarial Masking. To learn an aligned image-text representation, we first establish a unified dual-stream pre-training structure with a gradually soft parameter-sharing strategy. Specifically, the strategy constrains the vision and text encoders to be close in the same space, with the constraint gradually loosened as the number of layers increases, narrowing the distance between the two modalities. To grasp a unified semantic cross-modal representation, we extend adversarial masking data augmentation to the contrastive representation learning of vision and text in a unified manner. While encoder training minimizes the distance between the original and masked samples, the adversarial masking module is trained adversarially to maximize that distance. We further explore the unified adversarial masking augmentation, showing that it improves potential ante-hoc interpretability with remarkable performance and efficiency. Experimental results on the VQA-RAD and SLAKE benchmarks demonstrate that UnICLAM outperforms 11 existing state-of-the-art Medical-VQA methods. More importantly, we additionally discuss the performance of UnICLAM in diagnosing heart failure, verifying that it exhibits superior few-shot adaptation performance in practical disease diagnosis. The code and models will be released upon acceptance of the paper.
Citations: 0
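The "gradually soft parameter-sharing" idea can be read as a penalty that ties corresponding vision and text layers together, with a weight that decays with depth. The sketch below is one possible interpretation (the decay schedule, `base`, and `decay` are assumptions, not the paper's values):

```python
import numpy as np

def soft_sharing_penalty(vision_ws, text_ws, base=1.0, decay=0.5):
    """Penalty pulling paired encoder layers together, loosened with depth.

    Layer l is weighted base * decay**l, so early layers are constrained
    tightly while deep layers are nearly free, gradually relaxing the
    cross-modal constraint as the abstract describes (schedule assumed).
    """
    total = 0.0
    for l, (wv, wt) in enumerate(zip(vision_ws, text_ws)):
        total += base * decay ** l * float(((wv - wt) ** 2).sum())
    return total

rng = np.random.default_rng(1)
vs = [rng.normal(size=(4, 4)) for _ in range(3)]  # toy vision layer weights
ts = [rng.normal(size=(4, 4)) for _ in range(3)]  # toy text layer weights
p = soft_sharing_penalty(vs, ts)
print(p > 0)  # identical stacks would give a penalty of exactly 0
```

Adding such a term to the pre-training loss keeps the two encoders in one shared space at shallow depths while letting them specialize higher up.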
SIRE: Scale-invariant, rotation-equivariant estimation of artery orientations using graph neural networks
IF 10.7, Q1 (Medicine)
Medical Image Analysis, Pub Date: 2025-01-15, DOI: 10.1016/j.media.2025.103467
Dieuwertje Alblas, Julian Suk, Christoph Brune, Kak Khee Yeung, Jelmer M. Wolterink
Abstract: The orientation of a blood vessel as visualized in 3D medical images is an important descriptor of its geometry that can be used for centerline extraction and subsequent segmentation, labeling, and visualization. Blood vessels appear at multiple scales and levels of tortuosity, and determining the exact orientation of a vessel is a challenging problem. Recent works have used 3D convolutional neural networks (CNNs) for this purpose, but CNNs are sensitive to variations in vessel size and orientation. We present SIRE: a scale-invariant, rotation-equivariant estimator for local vessel orientation. SIRE is modular and generalizes strongly because it preserves these symmetries.

SIRE consists of a gauge-equivariant mesh CNN (GEM-CNN) that operates in parallel on multiple nested spherical meshes of different sizes. The features on each mesh are a projection of the image intensities within the corresponding sphere. These features are intrinsic to the sphere and, in combination with the gauge-equivariant properties of the GEM-CNN, lead to SO(3) rotation equivariance. Approximate scale invariance is achieved by weight sharing and a symmetric maximum aggregation function that combines predictions across scales. Hence, SIRE can be trained with arbitrarily oriented vessels of varying radii and generalize to vessels with a wide range of calibres and tortuosity.

We demonstrate the efficacy of SIRE on three datasets containing vessels of varying scales: the Vascular Model Repository (VMR), the ASOCA coronary artery set, and an in-house set of abdominal aortic aneurysms (AAAs). We embed SIRE in a centerline tracker that accurately tracks large-calibre AAAs, regardless of the data SIRE was trained with. Moreover, a tracker can use SIRE to track small-calibre, tortuous coronary arteries even when trained only on large-calibre, non-tortuous AAAs. Additional experiments verify the rotation-equivariant and scale-invariant properties of SIRE.

In conclusion, by incorporating SO(3) and scale symmetries, SIRE can determine the orientations of vessels outside the training domain, offering a robust and data-efficient solution for the geometric analysis of blood vessels in 3D medical images.
Citations: 0
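The mechanism behind SIRE's approximate scale invariance, shared weights applied at several nested scales followed by a symmetric maximum aggregation, can be caricatured in 1D (everything below is a deliberately crude stand-in: subsampling replaces nested spherical meshes, and the "shared filter" is a fixed averaging window):

```python
import numpy as np

def scale_invariant_response(signal, scales):
    """Apply one shared filter at several scales and max-aggregate.

    Because max() is symmetric in its inputs, the combined response does
    not depend on which scale the pattern happened to match best.
    """
    responses = []
    for s in scales:
        view = signal[::s]                      # crude "view at scale s"
        n = min(4, view.size)
        w = np.ones(n) / n                      # shared averaging weights
        responses.append(float(np.convolve(view, w, mode="valid").max()))
    return max(responses)

x = np.concatenate([np.zeros(8), np.ones(8), np.zeros(8)])        # a "vessel"
x_wide = np.concatenate([np.zeros(16), np.ones(16), np.zeros(16)])  # 2x wider
r1 = scale_invariant_response(x, scales=[1, 2, 4])
r2 = scale_invariant_response(x_wide, scales=[1, 2, 4])
print(abs(r1 - r2) < 1e-9)  # same aggregated response despite rescaling
```

SIRE achieves the same end with gauge-equivariant convolutions on spheres, which additionally buys exact SO(3) rotation equivariance that this 1D toy cannot show.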
When multiple instance learning meets foundation models: Advancing histological whole slide image analysis
IF 10.7, Q1 (Medicine)
Medical Image Analysis, Pub Date: 2025-01-14, DOI: 10.1016/j.media.2025.103456
Hongming Xu, Mingkang Wang, Duanbo Shi, Huamin Qin, Yunpeng Zhang, Zaiyi Liu, Anant Madabhushi, Peng Gao, Fengyu Cong, Cheng Lu
Abstract: Deep multiple instance learning (MIL) pipelines are the mainstream weakly supervised methodology for whole slide image (WSI) classification. However, it remains unclear how these widely used approaches compare to each other, given the recent proliferation of foundation models (FMs) for patch-level embedding and the diversity of slide-level aggregation schemes. This paper implements and systematically compares six FMs and six recent MIL methods, combining different feature extractors and aggregators across seven clinically relevant end-to-end prediction tasks, using WSIs from 4044 patients with four different cancer types. We tested state-of-the-art (SOTA) FMs in computational pathology, including CTransPath, PathoDuet, PLIP, CONCH, and UNI, as patch-level feature extractors. Feature aggregators such as attention-based pooling, transformers, and dynamic graphs were thoroughly tested. Our experiments on cancer grading, biomarker status prediction, and microsatellite instability (MSI) prediction suggest that (1) FMs like UNI, trained on more diverse histological images, outperform generic models trained on smaller datasets for patch embedding, significantly enhancing downstream MIL classification accuracy and training convergence speed; (2) instance feature fine-tuning, known as online feature re-embedding, which captures both fine-grained details and spatial interactions, can often further improve WSI classification performance; (3) FMs advance MIL models by enabling promising grading, biomarker status, and MSI predictions without requiring pixel- or patch-level annotations. These findings encourage the development of advanced, domain-specific FMs aimed at more universally applicable diagnostic tasks, aligning with the evolving needs of clinical AI in pathology.
Citations: 0
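One of the aggregators compared here, attention-based pooling, turns a bag of patch embeddings into a single slide-level vector via learned attention weights. A numpy sketch of the standard formulation (the FM feature extractor is not shown; `V` and `w` would be learned, and the random features below are synthetic):

```python
import numpy as np

def attention_mil_pool(instances, V, w):
    """Attention-based MIL pooling over patch embeddings.

    instances: (n_patches, d) features, e.g. from a frozen FM encoder.
    Returns the attention-weighted slide embedding and the weights.
    """
    scores = np.tanh(instances @ V) @ w          # (n_patches,) raw scores
    a = np.exp(scores - scores.max())
    a = a / a.sum()                               # softmax attention weights
    return a @ instances, a

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 16))                  # 6 patches, 16-d features
V = rng.normal(size=(16, 8))
w = rng.normal(size=8)
slide_vec, attn = attention_mil_pool(feats, V, w)
print(slide_vec.shape, round(float(attn.sum()), 6))  # (16,) 1.0
```

The attention weights double as a weak localization signal: highly weighted patches indicate which tissue regions drove the slide-level prediction, which is part of why this aggregator remains a strong MIL baseline.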
Dynamic spectrum-driven hierarchical learning network for polyp segmentation
IF 10.7, Q1 (Medicine)
Medical Image Analysis, Pub Date: 2025-01-14, DOI: 10.1016/j.media.2024.103449
Haolin Wang, Kai-Ni Wang, Jie Hua, Yi Tang, Yang Chen, Guang-Quan Zhou, Shuo Li
Abstract: Accurate automatic polyp segmentation in colonoscopy is crucial for the prompt prevention of colorectal cancer. However, the heterogeneous nature of polyps and differences in lighting and visibility conditions make reliable and consistent segmentation across cases a significant challenge. This study therefore proposes a novel dynamic spectrum-driven hierarchical learning model (DSHNet), the first to leverage image frequency-domain information specifically to explore region-level salience differences among and within polyps for precise segmentation. A novel spectral decoupler separates low-frequency and high-frequency components, leveraging their distinct characteristics to guide the model to learn valuable frequency features without bias through automatic masking. Low-frequency-driven region-level saliency modeling then generates dynamic convolution kernels with individual frequency-aware features, which regulate region-level saliency modeling together with supervision from the label hierarchy, enabling simultaneous adaptation to polyp heterogeneity and illumination variation. Meanwhile, a high-frequency attention module preserves detailed information at the skip connections, complementing the focus on spatial features at various stages. Experimental results demonstrate that the proposed method outperforms other state-of-the-art polyp segmentation techniques, achieving robust and superior results on five diverse datasets. Code is available at https://github.com/gardnerzhou/DSHNet.
Citations: 0
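The spectral decoupler's basic operation, splitting an image into low- and high-frequency bands, can be sketched with a fixed FFT mask (DSHNet learns its masking automatically; the radial cutoff below is a simplification):

```python
import numpy as np

def spectral_split(img, radius=4):
    """Split an image into low- and high-frequency parts via an FFT mask.

    Low frequencies carry region-level structure and illumination; high
    frequencies carry edges and texture detail. The fixed radial cutoff
    stands in for the paper's learned automatic masking.
    """
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(F * ~mask)).real
    return low, high

img = np.add.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))  # smooth ramp
low, high = spectral_split(img)
print(np.allclose(low + high, img))  # the two bands reconstruct the image
```

Because the two bands sum back to the original, the network can route each through a specialized branch (region-level saliency vs. detail-preserving attention) without losing information.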
Multi-center brain age prediction via dual-modality fusion convolutional network
IF 10.7, Q1 (Medicine)
Medical Image Analysis, Pub Date: 2025-01-10, DOI: 10.1016/j.media.2025.103455
Xuebin Chang, Xiaoyan Jia, Simon B. Eickhoff, Debo Dong, Wei Zeng
Abstract: Accurate prediction of brain age is crucial for identifying deviations between typical individual brain development trajectories and neuropsychiatric disease progression. Although current research has made progress, effectively applying brain age prediction models to multi-center datasets, particularly those with small sample sizes, remains a significant unaddressed challenge. To this end, we propose a multi-center data correction method, which employs a domain adaptation strategy based on the Wasserstein distance of optimal transport, along with maximum mean discrepancy, to improve the generalizability of brain-age prediction models on small-sample datasets. Additionally, most existing neuroimaging-based brain age models treat prediction as either a regression or a classification problem, which may limit accuracy. We therefore propose a brain dual-modality fusion convolutional neural network (BrainDCN) for brain age prediction and optimize it with a joint loss combining mean absolute error and cross-entropy, treating brain age prediction as both a regression and a classification task. Furthermore, to highlight age-related features, we construct weighting matrices and vectors from a single-center training set and apply them to multi-center datasets to weight important features. We validate BrainDCN on the CamCAN dataset and achieve the lowest mean absolute error compared to state-of-the-art models, demonstrating its superiority. Notably, the joint loss function and weighted features further improve prediction accuracy. More importantly, our multi-center correction method, tested on four neuroimaging datasets, achieves the lowest mean absolute error compared to widely used correction methods, highlighting its superior performance in cross-center data integration and analysis. Application to multi-center schizophrenia data shows accelerated mean aging compared to normal controls. This research thus establishes a methodological foundation for multi-center brain age prediction studies, with considerable applicability in clinical contexts, which are predominantly characterized by small-sample datasets.
Citations: 0
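The joint loss described here combines a regression term (MAE on the continuous age) with a classification term (cross-entropy on discretized age bins). A sketch of one way to combine them (the mixing weight `alpha` and the binning are assumptions; the abstract only states that the two terms are joined):

```python
import numpy as np

def joint_age_loss(age_pred, age_true, logits, bin_true, alpha=0.5):
    """Joint regression + classification loss for brain age (illustrative).

    age_pred/age_true: continuous age estimates and targets.
    logits/bin_true:   per-subject scores over age bins and true bin index.
    alpha:             assumed mixing weight, not from the paper.
    """
    mae = float(np.abs(age_pred - age_true).mean())
    z = logits - logits.max(axis=1, keepdims=True)       # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = float(-log_probs[np.arange(len(bin_true)), bin_true].mean())
    return mae + alpha * ce

age_pred = np.array([34.0, 61.5])
age_true = np.array([30.0, 60.0])
logits = np.array([[2.0, 0.1, -1.0],    # subject 1, 3 hypothetical age bins
                   [-0.5, 0.2, 1.8]])   # subject 2
loss = joint_age_loss(age_pred, age_true, logits, np.array([0, 2]))
print(loss > 0)
```

The classification head gives the network a coarse ordinal signal that regularizes the regression head, which is the stated motivation for treating brain age as both tasks at once.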
Measurement of biomechanical properties of transversely isotropic biological tissue using traveling wave expansion
IF 10.7, Q1 (Medicine)
Medical Image Analysis, Pub Date: 2025-01-09, DOI: 10.1016/j.media.2025.103457
Shengyuan Ma, Zhao He, Runke Wang, Aili Zhang, Qingfang Sun, Jun Liu, Fuhua Yan, Michael S. Sacks, Xi-Qiao Feng, Guang-Zhong Yang, Yuan Feng
Abstract: The anisotropic mechanical properties of fiber-embedded biological tissues are essential for understanding their development, aging, disease progression, and response to therapy. However, accurate and fast in vivo assessment of mechanical anisotropy using elastography remains challenging. To address the dilemma of achieving both accuracy and efficiency in this inverse problem involving complex wave equations, we propose a computational framework based on a traveling wave expansion model. The framework leverages the unique wave characteristics of transversely isotropic materials and physically meaningful operator combinations. We derive analytical solutions for the inversion and make engineering optimizations to adapt to practical scenarios. Measurement results on simulations, ex vivo muscle tissue, and in vivo human white matter validate the framework for determining in vivo anisotropic biomechanical properties, highlighting its potential for measuring a variety of fiber-embedded biological tissues.
Citations: 0
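For orientation, the simplest elastography inversion (isotropic, 1D, and far simpler than the paper's transversely isotropic framework) recovers a shear modulus from a traveling wave snapshot via its dominant wavelength, using the standard relation μ = ρ(fλ)². All parameter values below are invented for the demonstration:

```python
import numpy as np

# Illustrative 1D elastography step: estimate the wavelength of a shear
# wave from its spatial spectrum, then use mu = rho * (f * lambda)^2.
rho = 1000.0             # tissue density, kg/m^3 (assumed)
f = 60.0                 # vibration frequency, Hz (assumed)
dx = 0.001               # spatial sampling, m
x = np.arange(512) * dx
wavelength_true = 0.032  # 32 mm, chosen to fit the window exactly
u = np.sin(2 * np.pi * x / wavelength_true)  # snapshot of the wave field

spec = np.abs(np.fft.rfft(u))
freqs = np.fft.rfftfreq(u.size, d=dx)        # spatial frequencies, 1/m
k_peak = freqs[spec.argmax()]                # dominant spatial frequency
wavelength_est = 1.0 / k_peak
mu = rho * (f * wavelength_est) ** 2         # shear modulus, Pa
print(round(wavelength_est, 4), round(mu))   # 0.032 3686
```

In a transversely isotropic tissue the wave speed depends on the propagation direction relative to the fiber axis, which is exactly the extra structure the paper's traveling wave expansion is built to invert.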
Graph neural networks in histopathology: Emerging trends and future directions
IF 10.7, Q1 (Medicine)
Medical Image Analysis, Pub Date: 2025-01-07, DOI: 10.1016/j.media.2024.103444
Siemen Brussee, Giorgio Buzzanca, Anne M.R. Schrader, Jesper Kers
Abstract: Histopathological analysis of whole slide images (WSIs) has seen a surge in the use of deep learning methods, particularly convolutional neural networks (CNNs). However, CNNs often fail to capture the intricate spatial dependencies inherent in WSIs. Graph neural networks (GNNs) present a promising alternative, adept at directly modeling pairwise interactions and effectively discerning the topological tissue and cellular structures within WSIs. Recognizing the pressing need for deep learning techniques that harness the topological structure of WSIs, the application of GNNs in histopathology has grown rapidly. In this comprehensive review, we survey GNNs in histopathology, discuss their applications, and explore emerging trends that pave the way for future advancements in the field. We begin by elucidating the fundamentals of GNNs and their potential applications in histopathology. Leveraging quantitative literature analysis, we explore four emerging trends: hierarchical GNNs, adaptive graph structure learning, multimodal GNNs, and higher-order GNNs. Through an in-depth exploration of these trends, we offer insights into the evolving landscape of GNNs in histopathological analysis. Based on our findings, we propose future directions to propel the field forward. Our analysis serves to guide researchers and practitioners towards innovative approaches and methodologies, fostering advancements in histopathological analysis through the lens of graph neural networks.
Citations: 0
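The message-passing core shared by the GNN families this review surveys can be shown in a few lines: each node (a cell or tissue patch) updates its features by aggregating its neighbors'. A minimal graph convolution with symmetric normalization, in the style popularized by Kipf and Welling (the 3-node graph and weights are toy values):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph convolution with symmetric normalization and ReLU.

    A: (n, n) adjacency matrix; X: (n, d_in) node features;
    W: (d_in, d_out) weights. Each node averages its (self-looped)
    neighborhood's features before the linear map.
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

A = np.array([[0, 1, 0],                        # 3-node path graph, e.g.
              [1, 0, 1],                        # three adjacent tissue patches
              [0, 1, 0]], dtype=float)
X = np.eye(3)                                   # one-hot node features
W = np.ones((3, 2))                             # toy weight matrix
H = gcn_layer(A, X, W)
print(H.shape)  # (3, 2)
```

Hierarchical, adaptive-structure, multimodal, and higher-order GNNs all modify pieces of this template: what the nodes are, how A is built or learned, and over which structures the aggregation runs.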