arXiv - EE - Image and Video Processing: Latest Publications

From FDG to PSMA: A Hitchhiker's Guide to Multitracer, Multicenter Lesion Segmentation in PET/CT Imaging
arXiv - EE - Image and Video Processing Pub Date: 2024-09-14 DOI: arxiv-2409.09478
Maximilian Rokuss, Balint Kovacs, Yannick Kirchhoff, Shuhan Xiao, Constantin Ulrich, Klaus H. Maier-Hein, Fabian Isensee
Abstract: Automated lesion segmentation in PET/CT scans is crucial for improving clinical workflows and advancing cancer diagnostics. However, the task is challenging due to physiological variability, different tracers used in PET imaging, and diverse imaging protocols across medical centers. To address this, the autoPET series was created to challenge researchers to develop algorithms that generalize across diverse PET/CT environments. This paper presents our solution for the autoPET III challenge, targeting multitracer, multicenter generalization using the nnU-Net framework with the ResEncL architecture. Key techniques include misalignment data augmentation and multi-modal pretraining across CT, MR, and PET datasets to provide an initial anatomical understanding. We incorporate organ supervision as a multitask approach, enabling the model to distinguish between physiological uptake and tracer-specific patterns, which is particularly beneficial in cases where no lesions are present. Compared to the default nnU-Net, which achieved a Dice score of 57.61, or the larger ResEncL (65.31), our model significantly improved performance with a Dice score of 68.40, alongside a reduction in false positive (FPvol: 7.82) and false negative (FNvol: 10.35) volumes. These results underscore the effectiveness of combining advanced network design, augmentation, pretraining, and multitask learning for PET/CT lesion segmentation. Code is publicly available at https://github.com/MIC-DKFZ/autopet-3-submission.
Citations: 0
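The organ-supervision idea lends itself to a compact illustration. Below is a minimal PyTorch sketch of a multitask segmentation head and loss of the kind the abstract describes; the channel counts, loss weighting, and head design are illustrative assumptions, not details taken from the paper or its repository.

```python
# Hedged sketch: one shared feature map feeds a lesion head and an auxiliary
# organ head, so the network can learn to separate physiological uptake from
# tracer-specific lesion patterns. All sizes and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultitaskSegHead(nn.Module):
    def __init__(self, in_ch=32, n_lesion=2, n_organs=10):
        super().__init__()
        self.lesion_head = nn.Conv3d(in_ch, n_lesion, kernel_size=1)
        self.organ_head = nn.Conv3d(in_ch, n_organs, kernel_size=1)

    def forward(self, feats):
        return self.lesion_head(feats), self.organ_head(feats)

def multitask_loss(lesion_logits, organ_logits, lesion_gt, organ_gt, w_organ=0.5):
    # Lesion segmentation is the primary task; organ supervision is auxiliary.
    loss_lesion = F.cross_entropy(lesion_logits, lesion_gt)
    loss_organ = F.cross_entropy(organ_logits, organ_gt)
    return loss_lesion + w_organ * loss_organ

head = MultitaskSegHead()
feats = torch.randn(1, 32, 8, 8, 8)            # features from a shared encoder-decoder
lesion_gt = torch.randint(0, 2, (1, 8, 8, 8))
organ_gt = torch.randint(0, 10, (1, 8, 8, 8))
loss = multitask_loss(*head(feats), lesion_gt, organ_gt)
```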
Estimating Neural Orientation Distribution Fields on High Resolution Diffusion MRI Scans
arXiv - EE - Image and Video Processing Pub Date: 2024-09-14 DOI: arxiv-2409.09387
Mohammed Munzer Dwedari, William Consagra, Philip Müller, Özgün Turgut, Daniel Rueckert, Yogesh Rathi
Abstract: The Orientation Distribution Function (ODF) characterizes key brain microstructural properties and plays an important role in understanding brain structural connectivity. Recent works introduced Implicit Neural Representation (INR) based approaches to form a spatially aware continuous estimate of the ODF field and demonstrated promising results in key tasks of interest when compared to conventional discrete approaches. However, traditional INR methods face difficulties when scaling to large-scale images, such as modern ultra-high-resolution MRI scans, posing challenges in learning fine structures as well as inefficiencies in training and inference speed. In this work, we propose HashEnc, a grid-hash-encoding-based estimation of the ODF field, and demonstrate its effectiveness in retaining structural and textural features. We show that HashEnc achieves a 10% enhancement in image quality while requiring 3x fewer computational resources than current methods. Our code can be found at https://github.com/MunzerDw/NODF-HashEnc.
Citations: 0
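For readers unfamiliar with grid-hash encodings, the following is a toy PyTorch sketch of the general idea: an Instant-NGP-style hash grid maps coordinates to learned features, which a small MLP decodes into ODF coefficients. The table size, resolution, feature width, single level, and nearest-vertex lookup are all simplifying assumptions; HashEnc's actual encoder will differ.

```python
# Hedged sketch of a grid-hash encoding: 3D coordinates hash to entries of a
# learned feature table, and an MLP maps features to ODF coefficients (45
# even-order spherical-harmonic coefficients is a common choice, assumed here).
import torch
import torch.nn as nn

class HashGridEncoding(nn.Module):
    def __init__(self, table_size=2**14, feat_dim=4, resolution=64):
        super().__init__()
        self.table = nn.Parameter(torch.randn(table_size, feat_dim) * 1e-2)
        self.resolution = resolution
        # Large primes for the spatial hash, as in Instant-NGP.
        self.register_buffer("primes", torch.tensor([1, 2654435761, 805459861]))

    def forward(self, xyz):                      # xyz in [0, 1]^3, shape (N, 3)
        idx = (xyz * self.resolution).long()     # nearest grid vertex (no interpolation, for brevity)
        h = (idx * self.primes).sum(-1) % self.table.shape[0]
        return self.table[h]                     # (N, feat_dim)

encoder = HashGridEncoding()
mlp = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 45))
coords = torch.rand(1024, 3)
odf_coeffs = mlp(encoder(coords))               # continuous ODF field evaluated at coords
```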
MotionTTT: 2D Test-Time-Training Motion Estimation for 3D Motion Corrected MRI
arXiv - EE - Image and Video Processing Pub Date: 2024-09-14 DOI: arxiv-2409.09370
Tobit Klug, Kun Wang, Stefan Ruschke, Reinhard Heckel
Abstract: A major challenge of the long measurement times in magnetic resonance imaging (MRI), an important medical imaging technology, is that patients may move during data acquisition. This leads to severe motion artifacts in the reconstructed images and volumes. In this paper, we propose a deep learning-based test-time-training method for accurate motion estimation. The key idea is that a neural network trained for motion-free reconstruction has a small loss if there is no motion; thus, optimizing over the motion parameters passed through the reconstruction network enables accurate estimation of motion. The estimated motion parameters make it possible to correct for the motion and to reconstruct accurate motion-corrected images. Our method uses 2D reconstruction networks to estimate rigid motion in 3D, and constitutes the first deep-learning-based method for 3D rigid motion estimation towards 3D-motion-corrected MRI. We show that our method can provably reconstruct motion parameters for a simple signal and neural network model. We demonstrate the effectiveness of our method for both retrospectively simulated motion and prospectively collected real motion-corrupted data.
Citations: 0
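The key idea translates into a short test-time optimization loop. The sketch below is a conceptual rendering, not the authors' code: the reconstruction network, forward operator, and motion-correction operator are hypothetical stand-ins passed in as callables.

```python
# Hedged sketch of test-time motion estimation: freeze a reconstruction network
# trained on motion-free data, parameterize rigid motion, and minimize a
# data-consistency loss with respect to the motion parameters only.
import torch
import torch.nn.functional as F

def estimate_motion(recon_net, forward_op, correct_motion, kspace,
                    n_params=6, steps=200, lr=1e-2):
    """Gradient descent over rigid-motion parameters (3 rotations + 3 translations, assumed)."""
    theta = torch.zeros(n_params, requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        corrected = correct_motion(kspace, theta)   # undo the candidate motion
        image = recon_net(corrected)                # frozen motion-free reconstructor
        # If theta matches the true motion, the re-simulated measurements agree.
        loss = F.mse_loss(forward_op(image), corrected)
        loss.backward()
        opt.step()
    return theta.detach()
```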
Integrating Deep Unfolding with Direct Diffusion Bridges for Computed Tomography Reconstruction
arXiv - EE - Image and Video Processing Pub Date: 2024-09-14 DOI: arxiv-2409.09477
Herman Verinaz-Jadan, Su Yan
Abstract: Computed Tomography (CT) is widely used in healthcare for detailed imaging. However, low-dose CT, despite reducing radiation exposure, often results in images with compromised quality due to increased noise. Traditional methods, including preprocessing, post-processing, and model-based approaches that leverage physical principles, are employed to improve the quality of image reconstructions from noisy projections or sinograms. Recently, deep learning has significantly advanced the field, with diffusion models outperforming both traditional methods and other deep learning approaches. These models effectively merge deep learning with physics, serving as robust priors for the inverse problem in CT. However, they typically require prolonged computation times during sampling. This paper introduces the first approach to merge deep unfolding with Direct Diffusion Bridges (DDBs) for CT, integrating the physics into the network architecture and facilitating the transition from degraded to clean images by bypassing the excessively noisy intermediate stages commonly encountered in diffusion models. Moreover, this approach includes a tailored training procedure that eliminates errors typically accumulated during sampling. The proposed approach requires fewer sampling steps and demonstrates improved fidelity metrics, outperforming many existing state-of-the-art techniques.
Citations: 0
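One plausible reading of such an architecture is an unrolled network whose stages alternate a physics-based data-fidelity step with a learned refinement conditioned on the bridge timestep. The PyTorch sketch below illustrates that pattern only; the stage count, conditioning scheme, and refiner design are invented for illustration and are not the paper's architecture.

```python
# Hedged sketch of deep unfolding with a diffusion-bridge-style timestep:
# each stage takes a gradient step on 0.5 * ||A x - y||^2, then applies a
# small learned refiner conditioned on the normalized stage index.
import torch
import torch.nn as nn

class UnfoldedBridge(nn.Module):
    def __init__(self, n_stages=5, channels=1):
        super().__init__()
        self.step_sizes = nn.Parameter(torch.full((n_stages,), 0.1))
        self.refiners = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels + 1, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, channels, 3, padding=1))
            for _ in range(n_stages))

    def forward(self, x_init, sinogram, A, At):
        """A / At: differentiable projection and backprojection operators (stand-ins)."""
        x = x_init
        for k, refine in enumerate(self.refiners):
            # Physics step: gradient of the CT data-fidelity term.
            x = x - self.step_sizes[k] * At(A(x) - sinogram)
            # Learned step, conditioned on the (normalized) bridge time k/K.
            t = torch.full_like(x[:, :1], k / len(self.refiners))
            x = refine(torch.cat([x, t], dim=1))
        return x
```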
Adversarial Deep-Unfolding Network for MA-XRF Super-Resolution on Old Master Paintings Using Minimal Training Data
arXiv - EE - Image and Video Processing Pub Date: 2024-09-14 DOI: arxiv-2409.09483
Herman Verinaz-Jadan, Su Yan, Catherine Higgitt, Pier Luigi Dragotti
Abstract: High-quality element distribution maps enable precise analysis of the material composition and condition of Old Master paintings. These maps are typically produced from data acquired through macro X-ray fluorescence (MA-XRF) scanning, a non-invasive technique that collects spectral information. However, MA-XRF is often limited by a trade-off between acquisition time and resolution: achieving higher resolution requires longer scanning times, which can be impractical for detailed analysis of large artworks. Super-resolution MA-XRF provides an alternative solution by enhancing the quality of MA-XRF scans while reducing the need for extended scanning sessions. This paper introduces a tailored super-resolution approach to improve MA-XRF analysis of Old Master paintings: a novel adversarial neural network architecture for MA-XRF, inspired by the Learned Iterative Shrinkage-Thresholding Algorithm (LISTA). It is specifically designed to work in an unsupervised manner, making efficient use of the limited available data. This design avoids the need for extensive datasets or pre-trained networks, allowing it to be trained using just a single high-resolution RGB image alongside low-resolution MA-XRF data. Numerical results demonstrate that our method outperforms existing state-of-the-art super-resolution techniques for MA-XRF scans of Old Master paintings.
Citations: 0
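The Learned Iterative Shrinkage-Thresholding Algorithm that inspired the architecture unrolls ISTA with learned operators and per-iteration thresholds. A textbook PyTorch version is sketched below; the paper's adversarial, unsupervised variant builds on this core but is considerably more elaborate. Dimensions and iteration count are illustrative.

```python
# Minimal LISTA: z_{k+1} = soft_threshold(We @ y + S @ z_k, theta_k),
# with We, S, and the thresholds all learned from data.
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    return torch.sign(x) * torch.clamp(torch.abs(x) - theta, min=0.0)

class LISTA(nn.Module):
    def __init__(self, m, n, n_iters=5):
        super().__init__()
        self.We = nn.Linear(m, n, bias=False)   # maps measurements into code space
        self.S = nn.Linear(n, n, bias=False)    # learned recurrent operator
        self.theta = nn.Parameter(torch.full((n_iters,), 0.1))
        self.n_iters = n_iters

    def forward(self, y):
        z = soft_threshold(self.We(y), self.theta[0])
        for k in range(1, self.n_iters):
            z = soft_threshold(self.We(y) + self.S(z), self.theta[k])
        return z

lista = LISTA(m=64, n=256)
codes = lista(torch.randn(8, 64))   # sparse codes for 8 measurement vectors
```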
Real-Time Stochastic Terrain Mapping and Processing for Autonomous Safe Landing
arXiv - EE - Image and Video Processing Pub Date: 2024-09-14 DOI: arxiv-2409.09309
Kento Tomita, Koki Ho
Abstract: Onboard terrain sensing and mapping for safe planetary landings often suffer from missed hazardous features, e.g., small rocks, due to the large observational range and the limited resolution of the obtained terrain data. To this end, this paper develops a novel real-time stochastic terrain mapping algorithm that accounts for topographic uncertainty between the sampled points, i.e., the uncertainty due to sparse 3D terrain measurements. We introduce a Gaussian digital elevation map that is efficiently constructed using a combination of Delaunay triangulation and local Gaussian process regression. The geometric investigation of the lander-terrain interaction is exploited to efficiently evaluate the marginally conservative local slope and roughness while avoiding the costly computation of the local plane; the conservativeness is proved in the paper. The developed real-time uncertainty quantification pipeline enables stochastic landing safety evaluation under challenging operational conditions, such as a large observational range or limited sensor capability, which is a critical stepping stone for the development of predictive guidance algorithms for safe autonomous planetary landing. Detailed reviews of the background and related work are also presented.
Citations: 0
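A simplified sketch of the two ingredients named in the abstract, Delaunay triangulation plus local Gaussian process regression, is given below using SciPy and scikit-learn. The kernel, neighborhood size, and synthetic terrain are illustrative assumptions; the paper's pipeline is engineered for real-time onboard use and differs in the details.

```python
# Hedged sketch: triangulate sparse terrain samples, then fit a local GP to
# obtain an elevation mean and uncertainty (a Gaussian DEM) at a query point.
import numpy as np
from scipy.spatial import Delaunay
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(200, 2))                        # sparse sample locations
z = np.sin(xy[:, 0] / 15) + 0.05 * rng.standard_normal(200)    # synthetic elevations

tri = Delaunay(xy)   # triangulated terrain mesh between the sampled points

def local_gp_elevation(query, k=30):
    """Gaussian elevation estimate from the k nearest samples around a query point."""
    nearest = np.argsort(np.linalg.norm(xy - query, axis=1))[:k]
    gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1e-3))
    gp.fit(xy[nearest], z[nearest])
    mean, std = gp.predict(query[None, :], return_std=True)
    return mean[0], std[0]   # per-point mean and standard deviation

mu, sigma = local_gp_elevation(np.array([50.0, 50.0]))
```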
MANGO: Disentangled Image Transformation Manifolds with Grouped Operators
arXiv - EE - Image and Video Processing Pub Date: 2024-09-14 DOI: arxiv-2409.09542
Brighton Ancelin, Yenho Chen, Peimeng Guan, Chiraag Kaushik, Belen Martin-Urcelay, Alex Saad-Falcon, Nakul Singh
Abstract: Learning semantically meaningful image transformations (i.e., rotation, thickness, blur) directly from examples can be a challenging task. Recently, the Manifold Autoencoder (MAE) was proposed, which uses a set of Lie group operators to learn image transformations directly from examples. However, this approach has limitations: the learned operators are not guaranteed to be disentangled, and the training routine is prohibitively expensive when scaling up the model. To address these limitations, we propose MANGO (transformation Manifolds with Grouped Operators) for learning disentangled operators that describe image transformations in distinct latent subspaces. Moreover, our approach allows practitioners to define which transformations they aim to model, thus improving the semantic meaning of the learned operators. Through our experiments, we demonstrate that MANGO enables composition of image transformations and introduces a one-phase training routine that leads to a 100x speedup over prior works.
Citations: 0
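The grouped-operator construction can be illustrated in a few lines: each transformation gets its own latent subspace and Lie generator, and applying a transformation multiplies that block by a matrix exponential. The subspace sizes and initialization below are toy assumptions, not the paper's configuration.

```python
# Hedged sketch of grouped Lie operators: transformation i acts only on latent
# block i via exp(c_i * A_i), so transformations are disentangled by construction.
import torch
import torch.nn as nn

class GroupedLieOperators(nn.Module):
    def __init__(self, subspace_dims=(4, 4, 4)):   # e.g. rotation / thickness / blur
        super().__init__()
        self.generators = nn.ParameterList(
            nn.Parameter(0.01 * torch.randn(d, d)) for d in subspace_dims)
        self.dims = subspace_dims

    def forward(self, z, coeffs):
        """Apply exp(c_i * A_i) to the i-th block of the latent vector z."""
        blocks, start = [], 0
        for A, d, c in zip(self.generators, self.dims, coeffs):
            blocks.append(z[..., start:start + d] @ torch.matrix_exp(c * A).T)
            start += d
        return torch.cat(blocks, dim=-1)

ops = GroupedLieOperators()
z = torch.randn(2, 12)
z_rotated = ops(z, coeffs=[1.0, 0.0, 0.0])   # move along the first manifold only
```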
Self-Prompting Polyp Segmentation in Colonoscopy using Hybrid Yolo-SAM 2 Model
arXiv - EE - Image and Video Processing Pub Date: 2024-09-14 DOI: arxiv-2409.09484
Mobina Mansoori, Sajjad Shahabodini, Jamshid Abouei, Konstantinos N. Plataniotis, Arash Mohammadi
Abstract: Early diagnosis and treatment of polyps during colonoscopy are essential for reducing the incidence and mortality of colorectal cancer (CRC). However, the variability in polyp characteristics and the presence of artifacts in colonoscopy images and videos pose significant challenges for accurate and efficient polyp detection and segmentation. This paper presents a novel approach to polyp segmentation by integrating the Segment Anything Model 2 (SAM 2) with the YOLOv8 model. Our method leverages YOLOv8's bounding-box predictions to autonomously generate input prompts for SAM 2, thereby reducing the need for manual annotations. We conducted exhaustive tests on five benchmark colonoscopy image datasets and two colonoscopy video datasets, demonstrating that our method exceeds state-of-the-art models in both image and video segmentation tasks. Notably, our approach achieves high segmentation accuracy using only bounding-box annotations, significantly reducing annotation time and effort. This advancement holds promise for enhancing the efficiency and scalability of polyp detection in clinical settings. Code is available at https://github.com/sajjad-sh33/YOLO_SAM2.
Citations: 0
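Using the public ultralytics and sam2 packages, the self-prompting pipeline can be sketched as follows. The checkpoint names are placeholders (polyp-tuned YOLO weights are assumed), and the authors' released code should be treated as the reference implementation.

```python
# Hedged sketch of the self-prompting pipeline: YOLO predicts polyp bounding
# boxes, which become automatic box prompts for SAM 2, replacing manual clicks.
import numpy as np
from PIL import Image
from ultralytics import YOLO
from sam2.sam2_image_predictor import SAM2ImagePredictor

detector = YOLO("yolov8_polyp.pt")   # assumed fine-tuned detection weights
predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")

image = np.array(Image.open("frame.png").convert("RGB"))
boxes = detector(image)[0].boxes.xyxy.cpu().numpy()   # (n, 4) polyp detections

predictor.set_image(image)
masks = []
for box in boxes:
    # Each YOLO box is passed to SAM 2 as a box prompt.
    m, scores, _ = predictor.predict(box=box, multimask_output=False)
    masks.append(m[0])
```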
MAISI: Medical AI for Synthetic Imaging
arXiv - EE - Image and Video Processing Pub Date: 2024-09-13 DOI: arxiv-2409.11169
Pengfei Guo, Can Zhao, Dong Yang, Ziyue Xu, Vishwesh Nath, Yucheng Tang, Benjamin Simon, Mason Belue, Stephanie Harmon, Baris Turkbey, Daguang Xu
Abstract: Medical imaging analysis faces challenges such as data scarcity, high annotation costs, and privacy concerns. This paper introduces Medical AI for Synthetic Imaging (MAISI), an innovative approach that uses a diffusion model to generate synthetic 3D computed tomography (CT) images to address those challenges. MAISI leverages a foundation volume compression network and a latent diffusion model to produce high-resolution CT images (up to a landmark volume dimension of 512 x 512 x 768) with flexible volume dimensions and voxel spacing. By incorporating ControlNet, MAISI can process organ segmentations, including 127 anatomical structures, as additional conditions, enabling the generation of accurately annotated synthetic images for various downstream tasks. Our experimental results show that MAISI's ability to generate realistic, anatomically accurate images for diverse regions and conditions reveals its promising potential for mitigating these challenges with synthetic data.
Citations: 0
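At a high level, the described generation flow is a ControlNet-conditioned latent diffusion loop followed by a volume decoder. The sketch below is deliberately schematic: every module, the latent shape, and the scheduler interface are hypothetical stand-ins, not MAISI's actual components.

```python
# Hedged, schematic sampling loop: denoise a compressed 3D latent while a
# ControlNet-style branch injects organ-segmentation conditioning, then decode
# the latent into CT intensities. All arguments are stand-in modules/objects.
import torch

@torch.no_grad()
def generate_ct(unet, controlnet, decoder, scheduler, organ_seg, n_steps=50):
    latent = torch.randn(1, 4, 64, 64, 96)              # assumed latent shape
    for t in scheduler.timesteps(n_steps):               # assumed scheduler interface
        control = controlnet(latent, t, organ_seg)       # inject anatomical conditioning
        noise_pred = unet(latent, t, control)            # denoiser prediction at step t
        latent = scheduler.step(latent, noise_pred, t)   # one reverse-diffusion update
    return decoder(latent)                               # decode to a 3D CT volume
```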
Spectral U-Net: Enhancing Medical Image Segmentation via Spectral Decomposition
arXiv - EE - Image and Video Processing Pub Date: 2024-09-13 DOI: arxiv-2409.09216
Yaopeng Peng, Milan Sonka, Danny Z. Chen
Abstract: This paper introduces Spectral U-Net, a novel deep learning network based on spectral decomposition that exploits the Dual Tree Complex Wavelet Transform (DTCWT) for down-sampling and the inverse Dual Tree Complex Wavelet Transform (iDTCWT) for up-sampling. We devise the corresponding Wave-Block and iWave-Block, integrated into the U-Net architecture, aiming to mitigate information loss during down-sampling and enhance detail reconstruction during up-sampling. In the encoder, we first decompose the feature map into high- and low-frequency components using the DTCWT, enabling down-sampling while mitigating information loss. In the decoder, we utilize the iDTCWT to reconstruct higher-resolution feature maps from the down-sampled features. Evaluations on the Retina Fluid, Brain Tumor, and Liver Tumor segmentation datasets with the nnU-Net framework demonstrate the superiority of the proposed Spectral U-Net.
Citations: 0
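The DTCWT-based down/up-sampling can be demonstrated with the pytorch_wavelets package, which provides DTCWTForward and DTCWTInverse. The sketch below shows only the transform round-trip; the convolutions inside the actual Wave-Block and iWave-Block are omitted, and the filter choices are assumptions.

```python
# Hedged sketch of the spectral down/up-sampling round-trip behind the
# Wave-/iWave-Block idea, using the pytorch_wavelets DTCWT implementation.
import torch
from pytorch_wavelets import DTCWTForward, DTCWTInverse

dtcwt = DTCWTForward(J=1, biort='near_sym_b', qshift='qshift_b')
idtcwt = DTCWTInverse(biort='near_sym_b', qshift='qshift_b')

x = torch.randn(1, 32, 64, 64)    # feature map from one U-Net encoder stage
lowpass, highpass = dtcwt(x)      # lowpass tensor + list of 6-orientation complex subbands
# A Wave-Block would down-sample via the lowpass branch while retaining the
# highpass subbands, losing less information than pooling or strided
# convolution; the iWave-Block restores resolution with the inverse transform.
x_rec = idtcwt((lowpass, highpass))
assert x_rec.shape == x.shape     # the round trip preserves the spatial size
```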