arXiv - EE - Image and Video Processing: Latest Publications

The JPEG Pleno Learning-based Point Cloud Coding Standard: Serving Man and Machine
arXiv - EE - Image and Video Processing Pub Date : 2024-09-12 DOI: arxiv-2409.08130
André F. R. Guarda (Instituto de Telecomunicações, Lisbon, Portugal), Nuno M. M. Rodrigues (Instituto de Telecomunicações, Lisbon, Portugal; ESTG, Politécnico de Leiria, Leiria, Portugal), Fernando Pereira (Instituto de Telecomunicações, Lisbon, Portugal; Instituto Superior Técnico - Universidade de Lisboa, Lisbon, Portugal)
{"title":"The JPEG Pleno Learning-based Point Cloud Coding Standard: Serving Man and Machine","authors":"André F. R. GuardaInstituto de Telecomunicações, Lisbon, Portugal, Nuno M. M. RodriguesInstituto de Telecomunicações, Lisbon, PortugalESTG, Politécnico de Leiria, Leiria, Portugal, Fernando PereiraInstituto de Telecomunicações, Lisbon, PortugalInstituto Superior Técnico - Universidade de Lisboa, Lisbon, Portugal","doi":"arxiv-2409.08130","DOIUrl":"https://doi.org/arxiv-2409.08130","url":null,"abstract":"Efficient point cloud coding has become increasingly critical for multiple\u0000applications such as virtual reality, autonomous driving, and digital twin\u0000systems, where rich and interactive 3D data representations may functionally\u0000make the difference. Deep learning has emerged as a powerful tool in this\u0000domain, offering advanced techniques for compressing point clouds more\u0000efficiently than conventional coding methods while also allowing effective\u0000computer vision tasks performed in the compressed domain thus, for the first\u0000time, making available a common compressed visual representation effective for\u0000both man and machine. Taking advantage of this potential, JPEG has recently\u0000finalized the JPEG Pleno Learning-based Point Cloud Coding (PCC) standard\u0000offering efficient lossy coding of static point clouds, targeting both human\u0000visualization and machine processing by leveraging deep learning models for\u0000geometry and color coding. The geometry is processed directly in its original\u00003D form using sparse convolutional neural networks, while the color data is\u0000projected onto 2D images and encoded using the also learning-based JPEG AI\u0000standard. The goal of this paper is to provide a complete technical description\u0000of the JPEG PCC standard, along with a thorough benchmarking of its performance\u0000against the state-of-the-art, while highlighting its main strengths and\u0000weaknesses. In terms of compression performance, JPEG PCC outperforms the\u0000conventional MPEG PCC standards, especially in geometry coding, achieving\u0000significant rate reductions. Color compression performance is less competitive\u0000but this is overcome by the power of a full learning-based coding framework for\u0000both geometry and color and the associated effective compressed domain\u0000processing.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":"31 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142200236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
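
The standard's division of labor is concrete enough to mock up: geometry stays in 3D (occupied voxels, the kind of sparse input a sparse CNN consumes), while color is projected onto 2D images for a learned image codec such as JPEG AI. The numpy sketch below illustrates only that data-layout split; the grid size, the orthographic projection, and both function names are illustrative assumptions, not part of the standard.

```python
# Minimal sketch (not the actual JPEG PCC codec): illustrates splitting a
# point cloud into a 3D geometry representation and a 2D color image.
import numpy as np

def voxelize_geometry(points: np.ndarray, grid_size: int = 64) -> np.ndarray:
    """Quantize XYZ coordinates into a binary occupancy grid, the kind of
    sparse 3D input a sparse-convolutional geometry codec would consume."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    idx = ((points - mins) / (maxs - mins + 1e-9) * (grid_size - 1)).astype(int)
    grid = np.zeros((grid_size,) * 3, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

def project_color_to_2d(points: np.ndarray, colors: np.ndarray,
                        res: int = 128) -> np.ndarray:
    """Orthographically project per-point RGB onto an XY image plane, a toy
    stand-in for the projection step feeding a 2D image codec."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    uv = ((points[:, :2] - mins[:2]) / (maxs[:2] - mins[:2] + 1e-9)
          * (res - 1)).astype(int)
    image = np.zeros((res, res, 3), dtype=np.float32)
    image[uv[:, 1], uv[:, 0]] = colors  # later points overwrite earlier ones
    return image

rng = np.random.default_rng(0)
pts, rgb = rng.random((1000, 3)), rng.random((1000, 3))
occupancy = voxelize_geometry(pts)
color_map = project_color_to_2d(pts, rgb)
print(occupancy.sum(), color_map.shape)
```
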
AD-Lite Net: A Lightweight and Concatenated CNN Model for Alzheimer's Detection from MRI Images
arXiv - EE - Image and Video Processing Pub Date : 2024-09-12 DOI: arxiv-2409.08170
Santanu Roy, Archit Gupta, Shubhi Tiwari, Palak Sahu
{"title":"AD-Lite Net: A Lightweight and Concatenated CNN Model for Alzheimer's Detection from MRI Images","authors":"Santanu Roy, Archit Gupta, Shubhi Tiwari, Palak Sahu","doi":"arxiv-2409.08170","DOIUrl":"https://doi.org/arxiv-2409.08170","url":null,"abstract":"Alzheimer's Disease (AD) is a non-curable progressive neurodegenerative\u0000disorder that affects the human brain, leading to a decline in memory,\u0000cognitive abilities, and eventually, the ability to carry out daily tasks.\u0000Manual diagnosis of Alzheimer's disease from MRI images is fraught with less\u0000sensitivity and it is a very tedious process for neurologists. Therefore, there\u0000is a need for an automatic Computer Assisted Diagnosis (CAD) system, which can\u0000detect AD at early stages with higher accuracy. In this research, we have\u0000proposed a novel AD-Lite Net model (trained from scratch), that could alleviate\u0000the aforementioned problem. The novelties we bring here in this research are,\u0000(I) We have proposed a very lightweight CNN model by incorporating Depth Wise\u0000Separable Convolutional (DWSC) layers and Global Average Pooling (GAP) layers.\u0000(II) We have leveraged a ``parallel concatenation block'' (pcb), in the\u0000proposed AD-Lite Net model. This pcb consists of a Transformation layer\u0000(Tx-layer), followed by two convolutional layers, which are thereby\u0000concatenated with the original base model. This Tx-layer converts the features\u0000into very distinct kind of features, which are imperative for the Alzheimer's\u0000disease. As a consequence, the proposed AD-Lite Net model with ``parallel\u0000concatenation'' converges faster and automatically mitigates the class\u0000imbalance problem from the MRI datasets in a very generalized way. For the\u0000validity of our proposed model, we have implemented it on three different MRI\u0000datasets. Furthermore, we have combined the ADNI and AD datasets and\u0000subsequently performed a 10-fold cross-validation experiment to verify the\u0000model's generalization ability. Extensive experimental results showed that our\u0000proposed model has outperformed all the existing CNN models, and one recent\u0000trend Vision Transformer (ViT) model by a significant margin.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":"157 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142200233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
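
The two building blocks the abstract names, depthwise separable convolutions and global average pooling, are standard PyTorch fare. The sketch below shows how they make a CNN lightweight; the layer widths and the LiteClassifier wrapper are invented for illustration and do not reproduce the published AD-Lite Net.

```python
# Hedged sketch of DWSC + GAP, the lightweight ingredients named in the
# abstract. Architecture details here are illustrative guesses.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise conv followed by a 1x1 pointwise conv (DWSC):
    far fewer parameters than a standard 3x3 convolution."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return torch.relu(self.pointwise(self.depthwise(x)))

class LiteClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            DepthwiseSeparableConv(1, 16),
            nn.MaxPool2d(2),
            DepthwiseSeparableConv(16, 32),
            nn.AdaptiveAvgPool2d(1),  # GAP replaces heavy dense layers
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = LiteClassifier()
logits = model(torch.randn(2, 1, 128, 128))  # batch of grayscale MRI slices
print(logits.shape)  # torch.Size([2, 4])
```
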
DreamBeast: Distilling 3D Fantastical Animals with Part-Aware Knowledge Transfer
arXiv - EE - Image and Video Processing Pub Date : 2024-09-12 DOI: arxiv-2409.08271
Runjia Li, Junlin Han, Luke Melas-Kyriazi, Chunyi Sun, Zhaochong An, Zhongrui Gui, Shuyang Sun, Philip Torr, Tomas Jakab
{"title":"DreamBeast: Distilling 3D Fantastical Animals with Part-Aware Knowledge Transfer","authors":"Runjia Li, Junlin Han, Luke Melas-Kyriazi, Chunyi Sun, Zhaochong An, Zhongrui Gui, Shuyang Sun, Philip Torr, Tomas Jakab","doi":"arxiv-2409.08271","DOIUrl":"https://doi.org/arxiv-2409.08271","url":null,"abstract":"We present DreamBeast, a novel method based on score distillation sampling\u0000(SDS) for generating fantastical 3D animal assets composed of distinct parts.\u0000Existing SDS methods often struggle with this generation task due to a limited\u0000understanding of part-level semantics in text-to-image diffusion models. While\u0000recent diffusion models, such as Stable Diffusion 3, demonstrate a better\u0000part-level understanding, they are prohibitively slow and exhibit other common\u0000problems associated with single-view diffusion models. DreamBeast overcomes\u0000this limitation through a novel part-aware knowledge transfer mechanism. For\u0000each generated asset, we efficiently extract part-level knowledge from the\u0000Stable Diffusion 3 model into a 3D Part-Affinity implicit representation. This\u0000enables us to instantly generate Part-Affinity maps from arbitrary camera\u0000views, which we then use to modulate the guidance of a multi-view diffusion\u0000model during SDS to create 3D assets of fantastical animals. DreamBeast\u0000significantly enhances the quality of generated 3D creatures with\u0000user-specified part compositions while reducing computational overhead, as\u0000demonstrated by extensive quantitative and qualitative evaluations.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142200241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
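
The core mechanic is SDS guidance reweighted by a spatial Part-Affinity map. Below is a hedged toy sketch of that idea in PyTorch: the noise schedule, the weighting w(t), and the dummy denoiser are illustrative stand-ins, not DreamBeast's actual diffusion models.

```python
# Toy sketch of score distillation sampling (SDS) with a per-pixel weight map,
# the mechanism by which Part-Affinity maps could modulate diffusion guidance.
import torch

def sds_grad(rendered: torch.Tensor, affinity: torch.Tensor,
             denoiser, t: int = 500, num_steps: int = 1000) -> torch.Tensor:
    """Return the SDS gradient w(t) * (eps_hat - eps), scaled per pixel by the
    affinity map so part-relevant regions receive stronger guidance."""
    noise = torch.randn_like(rendered)
    alpha_bar = 1.0 - t / num_steps          # toy noise schedule
    noisy = alpha_bar**0.5 * rendered + (1 - alpha_bar)**0.5 * noise
    eps_hat = denoiser(noisy, t)             # predicted noise
    w = 1.0 - alpha_bar                      # a common SDS weighting choice
    return (w * affinity * (eps_hat - noise)).detach()

dummy_denoiser = lambda x, t: x * 0.1        # stand-in for a diffusion model
img = torch.rand(1, 3, 64, 64, requires_grad=True)   # rendered view
aff = torch.rand(1, 1, 64, 64)                        # part-affinity weights

g = sds_grad(img, aff, dummy_denoiser)
loss = (g * img).sum()   # standard SDS trick: d(loss)/d(img) equals g
loss.backward()
print(img.grad.abs().mean().item())
```
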
OCTAMamba: A State-Space Model Approach for Precision OCTA Vasculature Segmentation
arXiv - EE - Image and Video Processing Pub Date : 2024-09-12 DOI: arxiv-2409.08000
Shun Zou, Zhuo Zhang, Guangwei Gao
{"title":"OCTAMamba: A State-Space Model Approach for Precision OCTA Vasculature Segmentation","authors":"Shun Zou, Zhuo Zhang, Guangwei Gao","doi":"arxiv-2409.08000","DOIUrl":"https://doi.org/arxiv-2409.08000","url":null,"abstract":"Optical Coherence Tomography Angiography (OCTA) is a crucial imaging\u0000technique for visualizing retinal vasculature and diagnosing eye diseases such\u0000as diabetic retinopathy and glaucoma. However, precise segmentation of OCTA\u0000vasculature remains challenging due to the multi-scale vessel structures and\u0000noise from poor image quality and eye lesions. In this study, we proposed\u0000OCTAMamba, a novel U-shaped network based on the Mamba architecture, designed\u0000to segment vasculature in OCTA accurately. OCTAMamba integrates a Quad Stream\u0000Efficient Mining Embedding Module for local feature extraction, a Multi-Scale\u0000Dilated Asymmetric Convolution Module to capture multi-scale vasculature, and a\u0000Focused Feature Recalibration Module to filter noise and highlight target\u0000areas. Our method achieves efficient global modeling and local feature\u0000extraction while maintaining linear complexity, making it suitable for\u0000low-computation medical applications. Extensive experiments on the OCTA 3M,\u0000OCTA 6M, and ROSSA datasets demonstrated that OCTAMamba outperforms\u0000state-of-the-art methods, providing a new reference for efficient OCTA\u0000segmentation. Code is available at https://github.com/zs1314/OCTAMamba","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142200237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
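
The abstract names its modules without detailing them, so the following is a speculative PyTorch sketch of what a "Multi-Scale Dilated Asymmetric Convolution Module" could look like: parallel 1x3/3x1 convolution pairs at several dilation rates, fused by a 1x1 convolution. Every structural choice here is a guess, not the paper's design; see the released code for the real module.

```python
# Speculative sketch of a multi-scale dilated asymmetric convolution block.
import torch
import torch.nn as nn

class DilatedAsymmetricBranch(nn.Module):
    def __init__(self, ch: int, dilation: int):
        super().__init__()
        # Asymmetric pair: 1x3 then 3x1 approximates a 3x3 receptive field
        # with fewer parameters; dilation widens the captured vessel scale.
        self.conv = nn.Sequential(
            nn.Conv2d(ch, ch, (1, 3), padding=(0, dilation),
                      dilation=(1, dilation)),
            nn.Conv2d(ch, ch, (3, 1), padding=(dilation, 0),
                      dilation=(dilation, 1)),
        )

    def forward(self, x):
        return self.conv(x)

class MultiScaleDilatedAsymmetricConv(nn.Module):
    def __init__(self, ch: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            DilatedAsymmetricBranch(ch, d) for d in dilations)
        self.fuse = nn.Conv2d(ch * len(dilations), ch, 1)  # merge scales

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

block = MultiScaleDilatedAsymmetricConv(16)
print(block(torch.randn(1, 16, 64, 64)).shape)  # (1, 16, 64, 64)
```
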
Effective Segmentation of Post-Treatment Gliomas Using Simple Approaches: Artificial Sequence Generation and Ensemble Models
arXiv - EE - Image and Video Processing Pub Date : 2024-09-12 DOI: arxiv-2409.08143
Heejong Kim, Leo Milecki, Mina C Moghadam, Fengbei Liu, Minh Nguyen, Eric Qiu, Abhishek Thanki, Mert R Sabuncu
{"title":"Effective Segmentation of Post-Treatment Gliomas Using Simple Approaches: Artificial Sequence Generation and Ensemble Models","authors":"Heejong Kim, Leo Milecki, Mina C Moghadam, Fengbei Liu, Minh Nguyen, Eric Qiu, Abhishek Thanki, Mert R Sabuncu","doi":"arxiv-2409.08143","DOIUrl":"https://doi.org/arxiv-2409.08143","url":null,"abstract":"Segmentation is a crucial task in the medical imaging field and is often an\u0000important primary step or even a prerequisite to the analysis of medical\u0000volumes. Yet treatments such as surgery complicate the accurate delineation of\u0000regions of interest. The BraTS Post-Treatment 2024 Challenge published the\u0000first public dataset for post-surgery glioma segmentation and addresses the\u0000aforementioned issue by fostering the development of automated segmentation\u0000tools for glioma in MRI data. In this effort, we propose two straightforward\u0000approaches to enhance the segmentation performances of deep learning-based\u0000methodologies. First, we incorporate an additional input based on a simple\u0000linear combination of the available MRI sequences input, which highlights\u0000enhancing tumors. Second, we employ various ensembling methods to weigh the\u0000contribution of a battery of models. Our results demonstrate that these\u0000approaches significantly improve segmentation performance compared to baseline\u0000models, underscoring the effectiveness of these simple approaches in improving\u0000medical image segmentation tasks.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142200234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
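
Both proposed tricks are easy to sketch. In the numpy sketch below, the synthetic input channel is assumed to be a post- minus pre-contrast T1 difference (the abstract only says "a simple linear combination of the available MRI sequences" that highlights enhancing tumors, so the exact recipe is a guess), and the ensemble is a weighted average of per-model probability maps.

```python
# Hedged sketch: (1) a synthetic MRI channel from a linear combination of
# sequences, (2) a weighted-average ensemble of probability maps.
import numpy as np

def synthetic_sequence(t1: np.ndarray, t1ce: np.ndarray) -> np.ndarray:
    """Contrast-enhancement proxy (assumed): post- minus pre-contrast T1,
    rescaled to [0, 1]."""
    diff = t1ce.astype(np.float32) - t1.astype(np.float32)
    return (diff - diff.min()) / (np.ptp(diff) + 1e-9)

def ensemble(prob_maps: list, weights=None) -> np.ndarray:
    """Weighted average of per-model foreground probability maps."""
    weights = np.ones(len(prob_maps)) if weights is None else np.asarray(weights)
    stacked = np.stack(prob_maps)
    return np.tensordot(weights / weights.sum(), stacked, axes=1)

rng = np.random.default_rng(0)
t1, t1ce = rng.random((2, 96, 96, 96))          # toy volumes
extra_channel = synthetic_sequence(t1, t1ce)    # appended as model input
fused = ensemble([rng.random((96, 96, 96)) for _ in range(3)])
mask = fused > 0.5
print(extra_channel.shape, mask.mean())
```
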
Context-Aware Optimal Transport Learning for Retinal Fundus Image Enhancement
arXiv - EE - Image and Video Processing Pub Date : 2024-09-12 DOI: arxiv-2409.07862
Vamsi Krishna Vasa, Peijie Qiu, Wenhui Zhu, Yujian Xiong, Oana Dumitrascu, Yalin Wang
{"title":"Context-Aware Optimal Transport Learning for Retinal Fundus Image Enhancement","authors":"Vamsi Krishna Vasa, Peijie Qiu, Wenhui Zhu, Yujian Xiong, Oana Dumitrascu, Yalin Wang","doi":"arxiv-2409.07862","DOIUrl":"https://doi.org/arxiv-2409.07862","url":null,"abstract":"Retinal fundus photography offers a non-invasive way to diagnose and monitor\u0000a variety of retinal diseases, but is prone to inherent quality glitches\u0000arising from systemic imperfections or operator/patient-related factors.\u0000However, high-quality retinal images are crucial for carrying out accurate\u0000diagnoses and automated analyses. The fundus image enhancement is typically\u0000formulated as a distribution alignment problem, by finding a one-to-one mapping\u0000between a low-quality image and its high-quality counterpart. This paper\u0000proposes a context-informed optimal transport (OT) learning framework for\u0000tackling unpaired fundus image enhancement. In contrast to standard generative\u0000image enhancement methods, which struggle with handling contextual information\u0000(e.g., over-tampered local structures and unwanted artifacts), the proposed\u0000context-aware OT learning paradigm better preserves local structures and\u0000minimizes unwanted artifacts. Leveraging deep contextual features, we derive\u0000the proposed context-aware OT using the earth mover's distance and show that\u0000the proposed context-OT has a solid theoretical guarantee. Experimental results\u0000on a large-scale dataset demonstrate the superiority of the proposed method\u0000over several state-of-the-art supervised and unsupervised methods in terms of\u0000signal-to-noise ratio, structural similarity index, as well as two downstream\u0000tasks. The code is available at\u0000url{https://github.com/Retinal-Research/Contextual-OT}.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142200238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
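
The loss is built on an earth mover's distance between deep contextual features. A generic entropy-regularized (Sinkhorn) version of that distance is sketched below in numpy; it is a textbook routine, not the authors' context-aware formulation, and the feature tensors are random stand-ins for real network activations.

```python
# Generic Sinkhorn (entropy-regularized optimal transport) distance between
# two sets of feature vectors, the kind of EMD coupling an OT loss builds on.
import numpy as np

def sinkhorn_distance(x: np.ndarray, y: np.ndarray,
                      eps: float = 0.1, n_iter: int = 200) -> float:
    """Entropic OT cost between point sets x (n, d) and y (m, d)
    with uniform marginals."""
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    K = np.exp(-cost / eps)            # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):            # alternating Sinkhorn scalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]  # transport plan
    return float((plan * cost).sum())

rng = np.random.default_rng(0)
feats_low = rng.normal(size=(32, 64))    # features of a low-quality image
feats_high = rng.normal(size=(32, 64))   # features of a high-quality image
print(sinkhorn_distance(feats_low, feats_high))
```
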
AutoPET Challenge: Tumour Synthesis for Data Augmentation
arXiv - EE - Image and Video Processing Pub Date : 2024-09-12 DOI: arxiv-2409.08068
Lap Yan Lennon Chan, Chenxin Li, Yixuan Yuan
{"title":"AutoPET Challenge: Tumour Synthesis for Data Augmentation","authors":"Lap Yan Lennon Chan, Chenxin Li, Yixuan Yuan","doi":"arxiv-2409.08068","DOIUrl":"https://doi.org/arxiv-2409.08068","url":null,"abstract":"Accurate lesion segmentation in whole-body PET/CT scans is crucial for cancer\u0000diagnosis and treatment planning, but limited datasets often hinder the\u0000performance of automated segmentation models. In this paper, we explore the\u0000potential of leveraging the deep prior from a generative model to serve as a\u0000data augmenter for automated lesion segmentation in PET/CT scans. We adapt the\u0000DiffTumor method, originally designed for CT images, to generate synthetic\u0000PET-CT images with lesions. Our approach trains the generative model on the\u0000AutoPET dataset and uses it to expand the training data. We then compare the\u0000performance of segmentation models trained on the original and augmented\u0000datasets. Our findings show that the model trained on the augmented dataset\u0000achieves a higher Dice score, demonstrating the potential of our data\u0000augmentation approach. In a nutshell, this work presents a promising direction\u0000for improving lesion segmentation in whole-body PET/CT scans with limited\u0000datasets, potentially enhancing the accuracy and reliability of cancer\u0000diagnostics.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":"95 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142200235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
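
The experimental logic, train on real data versus real plus synthetic data and compare Dice, can be stubbed out as follows. Only the Dice metric is concrete; the mixing ratio and the case lists are placeholders, and the DiffTumor generation step is not reproduced here.

```python
# Hedged sketch of the evaluation bookkeeping: a Dice metric plus a helper
# that mixes synthetic cases into the real training set.
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray,
               eps: float = 1e-7) -> float:
    """Dice = 2 * intersection / (|pred| + |target|) on binary masks."""
    inter = np.logical_and(pred, target).sum()
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))

def augment_with_synthetic(real_cases: list, synth_cases: list,
                           ratio: float = 0.5) -> list:
    """Add synthetic lesion volumes up to `ratio` of the real-case count
    (the ratio is an illustrative assumption)."""
    k = int(len(real_cases) * ratio)
    return real_cases + synth_cases[:k]

rng = np.random.default_rng(0)
pred = rng.random((64, 64, 64)) > 0.5     # toy prediction mask
target = rng.random((64, 64, 64)) > 0.5   # toy ground-truth mask
print(f"Dice: {dice_score(pred, target):.3f}")
print(len(augment_with_synthetic(list(range(100)), list(range(60)))))  # 150
```
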
Model Ensemble for Brain Tumor Segmentation in Magnetic Resonance Imaging
arXiv - EE - Image and Video Processing Pub Date : 2024-09-12 DOI: arxiv-2409.08232
Daniel Capellán-Martín, Zhifan Jiang, Abhijeet Parida, Xinyang Liu, Van Lam, Hareem Nisar, Austin Tapp, Sarah Elsharkawi, Maria J. Ledesma-Carbayo, Syed Muhammad Anwar, Marius George Linguraru
{"title":"Model Ensemble for Brain Tumor Segmentation in Magnetic Resonance Imaging","authors":"Daniel Capellán-Martín, Zhifan Jiang, Abhijeet Parida, Xinyang Liu, Van Lam, Hareem Nisar, Austin Tapp, Sarah Elsharkawi, Maria J. Ledesma-Carbayo, Syed Muhammad Anwar, Marius George Linguraru","doi":"arxiv-2409.08232","DOIUrl":"https://doi.org/arxiv-2409.08232","url":null,"abstract":"Segmenting brain tumors in multi-parametric magnetic resonance imaging\u0000enables performing quantitative analysis in support of clinical trials and\u0000personalized patient care. This analysis provides the potential to impact\u0000clinical decision-making processes, including diagnosis and prognosis. In 2023,\u0000the well-established Brain Tumor Segmentation (BraTS) challenge presented a\u0000substantial expansion with eight tasks and 4,500 brain tumor cases. In this\u0000paper, we present a deep learning-based ensemble strategy that is evaluated for\u0000newly included tumor cases in three tasks: pediatric brain tumors (PED),\u0000intracranial meningioma (MEN), and brain metastases (MET). In particular, we\u0000ensemble outputs from state-of-the-art nnU-Net and Swin UNETR models on a\u0000region-wise basis. Furthermore, we implemented a targeted post-processing\u0000strategy based on a cross-validated threshold search to improve the\u0000segmentation results for tumor sub-regions. The evaluation of our proposed\u0000method on unseen test cases for the three tasks resulted in lesion-wise Dice\u0000scores for PED: 0.653, 0.809, 0.826; MEN: 0.876, 0.867, 0.849; and MET: 0.555,\u00000.6, 0.58; for the enhancing tumor, tumor core, and whole tumor, respectively.\u0000Our method was ranked first for PED, third for MEN, and fourth for MET,\u0000respectively.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142200229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
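
The cross-validated threshold search mentioned for post-processing can be illustrated directly: sweep binarization thresholds on held-out probability maps and keep the value that maximizes mean Dice. The sweep range and the random validation data below are illustrative; the paper applies this per tumor sub-region.

```python
# Hedged sketch of a cross-validated threshold search for post-processing.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    inter = np.logical_and(pred, target).sum()
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))

def search_threshold(prob_maps: list, masks: list,
                     candidates=np.linspace(0.1, 0.9, 17)) -> float:
    """Pick the binarization threshold with the best mean validation Dice."""
    scores = [np.mean([dice(p > t, m) for p, m in zip(prob_maps, masks)])
              for t in candidates]
    return float(candidates[int(np.argmax(scores))])

rng = np.random.default_rng(0)
val_probs = [rng.random((64, 64, 64)) for _ in range(5)]   # ensemble outputs
val_masks = [rng.random((64, 64, 64)) > 0.7 for _ in range(5)]  # references
best_t = search_threshold(val_probs, val_masks)
print(f"selected threshold: {best_t:.2f}")  # applied to unseen test cases
```
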
Deep intra-operative illumination calibration of hyperspectral cameras
arXiv - EE - Image and Video Processing Pub Date : 2024-09-11 DOI: arxiv-2409.07094
Alexander Baumann, Leonardo Ayala, Alexander Studier-Fischer, Jan Sellner, Berkin Özdemir, Karl-Friedrich Kowalewski, Slobodan Ilic, Silvia Seidlitz, Lena Maier-Hein
{"title":"Deep intra-operative illumination calibration of hyperspectral cameras","authors":"Alexander Baumann, Leonardo Ayala, Alexander Studier-Fischer, Jan Sellner, Berkin Özdemir, Karl-Friedrich Kowalewski, Slobodan Ilic, Silvia Seidlitz, Lena Maier-Hein","doi":"arxiv-2409.07094","DOIUrl":"https://doi.org/arxiv-2409.07094","url":null,"abstract":"Hyperspectral imaging (HSI) is emerging as a promising novel imaging modality\u0000with various potential surgical applications. Currently available cameras,\u0000however, suffer from poor integration into the clinical workflow because they\u0000require the lights to be switched off, or the camera to be manually\u0000recalibrated as soon as lighting conditions change. Given this critical\u0000bottleneck, the contribution of this paper is threefold: (1) We demonstrate\u0000that dynamically changing lighting conditions in the operating room\u0000dramatically affect the performance of HSI applications, namely physiological\u0000parameter estimation, and surgical scene segmentation. (2) We propose a novel\u0000learning-based approach to automatically recalibrating hyperspectral images\u0000during surgery and show that it is sufficiently accurate to replace the tedious\u0000process of white reference-based recalibration. (3) Based on a total of 742 HSI\u0000cubes from a phantom, porcine models, and rats we show that our recalibration\u0000method not only outperforms previously proposed methods, but also generalizes\u0000across species, lighting conditions, and image processing tasks. Due to its\u0000simple workflow integration as well as high accuracy, speed, and generalization\u0000capabilities, our method could evolve as a central component in clinical\u0000surgical HSI.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142200257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
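
For context, the white reference-based recalibration that the learned method replaces follows a standard formula: reflectance is the dark-corrected raw signal divided by the dark-corrected white-tile signal. The sketch below implements that classical baseline, not the paper's neural recalibration network; the cube dimensions are illustrative.

```python
# Classical white-reference calibration for hyperspectral cubes:
# reflectance = (raw - dark) / (white - dark), per pixel and per band.
import numpy as np

def white_reference_calibration(raw: np.ndarray, white: np.ndarray,
                                dark: np.ndarray) -> np.ndarray:
    """Convert raw HSI counts to reflectance using dark and white references."""
    return (raw - dark) / np.clip(white - dark, 1e-6, None)

rng = np.random.default_rng(0)
H, W, bands = 32, 32, 100                        # toy cube dimensions
dark = rng.random((H, W, bands)) * 0.05          # sensor dark current
white = 0.8 + rng.random((H, W, bands)) * 0.2    # white-tile reference
raw = dark + rng.random((H, W, bands)) * (white - dark)
reflectance = white_reference_calibration(raw, white, dark)
print(reflectance.min() >= 0, reflectance.max() <= 1.0)
```

The clinical pain point the paper targets is visible in the inputs: `white` must be re-measured whenever the operating-room lighting changes, which is exactly the step the learned recalibration removes.
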
EVENet: Evidence-based Ensemble Learning for Uncertainty-aware Brain Parcellation Using Diffusion MRI
arXiv - EE - Image and Video Processing Pub Date : 2024-09-11 DOI: arxiv-2409.07020
Chenjun Li, Dian Yang, Shun Yao, Shuyue Wang, Ye Wu, Le Zhang, Qiannuo Li, Kang Ik Kevin Cho, Johanna Seitz-Holland, Lipeng Ning, Jon Haitz Legarreta, Yogesh Rathi, Carl-Fredrik Westin, Lauren J. O'Donnell, Nir A. Sochen, Ofer Pasternak, Fan Zhang
{"title":"EVENet: Evidence-based Ensemble Learning for Uncertainty-aware Brain Parcellation Using Diffusion MRI","authors":"Chenjun Li, Dian Yang, Shun Yao, Shuyue Wang, Ye Wu, Le Zhang, Qiannuo Li, Kang Ik Kevin Cho, Johanna Seitz-Holland, Lipeng Ning, Jon Haitz Legarreta, Yogesh Rathi, Carl-Fredrik Westin, Lauren J. O'Donnell, Nir A. Sochen, Ofer Pasternak, Fan Zhang","doi":"arxiv-2409.07020","DOIUrl":"https://doi.org/arxiv-2409.07020","url":null,"abstract":"In this study, we developed an Evidence-based Ensemble Neural Network, namely\u0000EVENet, for anatomical brain parcellation using diffusion MRI. The key\u0000innovation of EVENet is the design of an evidential deep learning framework to\u0000quantify predictive uncertainty at each voxel during a single inference. Using\u0000EVENet, we obtained accurate parcellation and uncertainty estimates across\u0000different datasets from healthy and clinical populations and with different\u0000imaging acquisitions. The overall network includes five parallel subnetworks,\u0000where each is dedicated to learning the FreeSurfer parcellation for a certain\u0000diffusion MRI parameter. An evidence-based ensemble methodology is then\u0000proposed to fuse the individual outputs. We perform experimental evaluations on\u0000large-scale datasets from multiple imaging sources, including high-quality\u0000diffusion MRI data from healthy adults and clinically diffusion MRI data from\u0000participants with various brain diseases (schizophrenia, bipolar disorder,\u0000attention-deficit/hyperactivity disorder, Parkinson's disease, cerebral small\u0000vessel disease, and neurosurgical patients with brain tumors). Compared to\u0000several state-of-the-art methods, our experimental results demonstrate highly\u0000improved parcellation accuracy across the multiple testing datasets despite the\u0000differences in dMRI acquisition protocols and health conditions. Furthermore,\u0000thanks to the uncertainty estimation, our EVENet approach demonstrates a good\u0000ability to detect abnormal brain regions in patients with lesions, enhancing\u0000the interpretability and reliability of the segmentation results.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":"15 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142200259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
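
Evidential deep learning heads of the kind described have a standard form: non-negative evidence parameterizes a Dirichlet distribution whose mean gives class probabilities and whose total strength gives a per-voxel uncertainty, all in a single forward pass. The sketch below uses those textbook formulas; the label count and patch size are illustrative, and this is not the EVENet code.

```python
# Textbook evidential deep learning head: evidence e >= 0, Dirichlet
# alpha = e + 1, strength S = sum(alpha), expected probability = alpha / S,
# uncertainty = K / S (K = number of classes).
import torch
import torch.nn.functional as F

def evidential_head(logits: torch.Tensor):
    """logits: (batch, K, ...) -> (probs, uncertainty) per voxel."""
    evidence = F.softplus(logits)                    # non-negative evidence
    alpha = evidence + 1.0                           # Dirichlet parameters
    strength = alpha.sum(dim=1, keepdim=True)        # total evidence S
    probs = alpha / strength                         # expected class probs
    uncertainty = logits.shape[1] / strength         # K / S, in (0, 1]
    return probs, uncertainty

logits = torch.randn(2, 5, 16, 16, 16)   # 5 parcellation labels, 3D patch
probs, unc = evidential_head(logits)
print(probs.sum(dim=1).mean().item())    # ~1.0: probabilities sum to one
print(unc.mean().item())                 # high where evidence is scarce
```

This single-pass uncertainty is what lets the ensemble flag abnormal regions: voxels where every subnetwork accumulates little evidence come out with uncertainty near 1.
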