arXiv - CS - Computer Vision and Pattern Recognition: Latest Publications

LLM-wrapper: Black-Box Semantic-Aware Adaptation of Vision-Language Foundation Models
arXiv - CS - Computer Vision and Pattern Recognition Pub Date : 2024-09-18 DOI: arxiv-2409.11919
Amaia Cardiel, Eloi Zablocki, Oriane Siméoni, Elias Ramzi, Matthieu Cord
{"title":"LLM-wrapper: Black-Box Semantic-Aware Adaptation of Vision-Language Foundation Models","authors":"Amaia Cardiel, Eloi Zablocki, Oriane Siméoni, Elias Ramzi, Matthieu Cord","doi":"arxiv-2409.11919","DOIUrl":"https://doi.org/arxiv-2409.11919","url":null,"abstract":"Vision Language Models (VLMs) have shown impressive performances on numerous\u0000tasks but their zero-shot capabilities can be limited compared to dedicated or\u0000fine-tuned models. Yet, fine-tuning VLMs comes with limitations as it requires\u0000`white-box' access to the model's architecture and weights as well as expertise\u0000to design the fine-tuning objectives and optimize the hyper-parameters, which\u0000are specific to each VLM and downstream task. In this work, we propose\u0000LLM-wrapper, a novel approach to adapt VLMs in a `black-box' manner by\u0000leveraging large language models (LLMs) so as to reason on their outputs. We\u0000demonstrate the effectiveness of LLM-wrapper on Referring Expression\u0000Comprehension (REC), a challenging open-vocabulary task that requires spatial\u0000and semantic reasoning. Our approach significantly boosts the performance of\u0000off-the-shelf models, resulting in competitive results when compared with\u0000classic fine-tuning.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Ultrasound Image Enhancement with the Variance of Diffusion Models
arXiv - CS - Computer Vision and Pattern Recognition Pub Date : 2024-09-17 DOI: arxiv-2409.11380
Yuxin Zhang, Clément Huneau, Jérôme Idier, Diana Mateus
{"title":"Ultrasound Image Enhancement with the Variance of Diffusion Models","authors":"Yuxin Zhang, Clément Huneau, Jérôme Idier, Diana Mateus","doi":"arxiv-2409.11380","DOIUrl":"https://doi.org/arxiv-2409.11380","url":null,"abstract":"Ultrasound imaging, despite its widespread use in medicine, often suffers\u0000from various sources of noise and artifacts that impact the signal-to-noise\u0000ratio and overall image quality. Enhancing ultrasound images requires a\u0000delicate balance between contrast, resolution, and speckle preservation. This\u0000paper introduces a novel approach that integrates adaptive beamforming with\u0000denoising diffusion-based variance imaging to address this challenge. By\u0000applying Eigenspace-Based Minimum Variance (EBMV) beamforming and employing a\u0000denoising diffusion model fine-tuned on ultrasound data, our method computes\u0000the variance across multiple diffusion-denoised samples to produce high-quality\u0000despeckled images. This approach leverages both the inherent multiplicative\u0000noise of ultrasound and the stochastic nature of diffusion models. Experimental\u0000results on a publicly available dataset demonstrate the effectiveness of our\u0000method in achieving superior image reconstructions from single plane-wave\u0000acquisitions. The code is available at:\u0000https://github.com/Yuxin-Zhang-Jasmine/IUS2024_Diffusion.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SLAck: Semantic, Location, and Appearance Aware Open-Vocabulary Tracking
arXiv - CS - Computer Vision and Pattern Recognition Pub Date : 2024-09-17 DOI: arxiv-2409.11235
Siyuan Li, Lei Ke, Yung-Hsu Yang, Luigi Piccinelli, Mattia Segù, Martin Danelljan, Luc Van Gool
{"title":"SLAck: Semantic, Location, and Appearance Aware Open-Vocabulary Tracking","authors":"Siyuan Li, Lei Ke, Yung-Hsu Yang, Luigi Piccinelli, Mattia Segù, Martin Danelljan, Luc Van Gool","doi":"arxiv-2409.11235","DOIUrl":"https://doi.org/arxiv-2409.11235","url":null,"abstract":"Open-vocabulary Multiple Object Tracking (MOT) aims to generalize trackers to\u0000novel categories not in the training set. Currently, the best-performing\u0000methods are mainly based on pure appearance matching. Due to the complexity of\u0000motion patterns in the large-vocabulary scenarios and unstable classification\u0000of the novel objects, the motion and semantics cues are either ignored or\u0000applied based on heuristics in the final matching steps by existing methods. In\u0000this paper, we present a unified framework SLAck that jointly considers\u0000semantics, location, and appearance priors in the early steps of association\u0000and learns how to integrate all valuable information through a lightweight\u0000spatial and temporal object graph. Our method eliminates complex\u0000post-processing heuristics for fusing different cues and boosts the association\u0000performance significantly for large-scale open-vocabulary tracking. Without\u0000bells and whistles, we outperform previous state-of-the-art methods for novel\u0000classes tracking on the open-vocabulary MOT and TAO TETA benchmarks. Our code\u0000is available at\u0000href{https://github.com/siyuanliii/SLAck}{github.com/siyuanliii/SLAck}.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
STCMOT: Spatio-Temporal Cohesion Learning for UAV-Based Multiple Object Tracking
arXiv - CS - Computer Vision and Pattern Recognition Pub Date : 2024-09-17 DOI: arxiv-2409.11234
Jianbo Ma, Chuanming Tang, Fei Wu, Can Zhao, Jianlin Zhang, Zhiyong Xu
{"title":"STCMOT: Spatio-Temporal Cohesion Learning for UAV-Based Multiple Object Tracking","authors":"Jianbo Ma, Chuanming Tang, Fei Wu, Can Zhao, Jianlin Zhang, Zhiyong Xu","doi":"arxiv-2409.11234","DOIUrl":"https://doi.org/arxiv-2409.11234","url":null,"abstract":"Multiple object tracking (MOT) in Unmanned Aerial Vehicle (UAV) videos is\u0000important for diverse applications in computer vision. Current MOT trackers\u0000rely on accurate object detection results and precise matching of target\u0000reidentification (ReID). These methods focus on optimizing target spatial\u0000attributes while overlooking temporal cues in modelling object relationships,\u0000especially for challenging tracking conditions such as object deformation and\u0000blurring, etc. To address the above-mentioned issues, we propose a novel\u0000Spatio-Temporal Cohesion Multiple Object Tracking framework (STCMOT), which\u0000utilizes historical embedding features to model the representation of ReID and\u0000detection features in a sequential order. Concretely, a temporal embedding\u0000boosting module is introduced to enhance the discriminability of individual\u0000embedding based on adjacent frame cooperation. While the trajectory embedding\u0000is then propagated by a temporal detection refinement module to mine salient\u0000target locations in the temporal field. Extensive experiments on the\u0000VisDrone2019 and UAVDT datasets demonstrate our STCMOT sets a new\u0000state-of-the-art performance in MOTA and IDF1 metrics. The source codes are\u0000released at https://github.com/ydhcg-BoBo/STCMOT.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reducing Catastrophic Forgetting in Online Class Incremental Learning Using Self-Distillation
arXiv - CS - Computer Vision and Pattern Recognition Pub Date : 2024-09-17 DOI: arxiv-2409.11329
Kotaro Nagata, Hiromu Ono, Kazuhiro Hotta
{"title":"Reducing Catastrophic Forgetting in Online Class Incremental Learning Using Self-Distillation","authors":"Kotaro Nagata, Hiromu Ono, Kazuhiro Hotta","doi":"arxiv-2409.11329","DOIUrl":"https://doi.org/arxiv-2409.11329","url":null,"abstract":"In continual learning, there is a serious problem of catastrophic forgetting,\u0000in which previous knowledge is forgotten when a model learns new tasks. Various\u0000methods have been proposed to solve this problem. Replay methods which replay\u0000data from previous tasks in later training, have shown good accuracy. However,\u0000replay methods have a generalizability problem from a limited memory buffer. In\u0000this paper, we tried to solve this problem by acquiring transferable knowledge\u0000through self-distillation using highly generalizable output in shallow layer as\u0000a teacher. Furthermore, when we deal with a large number of classes or\u0000challenging data, there is a risk of learning not converging and not\u0000experiencing overfitting. Therefore, we attempted to achieve more efficient and\u0000thorough learning by prioritizing the storage of easily misclassified samples\u0000through a new method of memory update. We confirmed that our proposed method\u0000outperformed conventional methods by experiments on CIFAR10, CIFAR100, and\u0000MiniimageNet datasets.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-OCT-SelfNet: Integrating Self-Supervised Learning with Multi-Source Data Fusion for Enhanced Multi-Class Retinal Disease Classification
arXiv - CS - Computer Vision and Pattern Recognition Pub Date : 2024-09-17 DOI: arxiv-2409.11375
Fatema-E- Jannat, Sina Gholami, Jennifer I. Lim, Theodore Leng, Minhaj Nur Alam, Hamed Tabkhi
{"title":"Multi-OCT-SelfNet: Integrating Self-Supervised Learning with Multi-Source Data Fusion for Enhanced Multi-Class Retinal Disease Classification","authors":"Fatema-E- Jannat, Sina Gholami, Jennifer I. Lim, Theodore Leng, Minhaj Nur Alam, Hamed Tabkhi","doi":"arxiv-2409.11375","DOIUrl":"https://doi.org/arxiv-2409.11375","url":null,"abstract":"In the medical domain, acquiring large datasets poses significant challenges\u0000due to privacy concerns. Nonetheless, the development of a robust deep-learning\u0000model for retinal disease diagnosis necessitates a substantial dataset for\u0000training. The capacity to generalize effectively on smaller datasets remains a\u0000persistent challenge. The scarcity of data presents a significant barrier to\u0000the practical implementation of scalable medical AI solutions. To address this\u0000issue, we've combined a wide range of data sources to improve performance and\u0000generalization to new data by giving it a deeper understanding of the data\u0000representation from multi-modal datasets and developed a self-supervised\u0000framework based on large language models (LLMs), SwinV2 to gain a deeper\u0000understanding of multi-modal dataset representations, enhancing the model's\u0000ability to extrapolate to new data for the detection of eye diseases using\u0000optical coherence tomography (OCT) images. We adopt a two-phase training\u0000methodology, self-supervised pre-training, and fine-tuning on a downstream\u0000supervised classifier. An ablation study conducted across three datasets\u0000employing various encoder backbones, without data fusion, with low data\u0000availability setting, and without self-supervised pre-training scenarios,\u0000highlights the robustness of our method. Our findings demonstrate consistent\u0000performance across these diverse conditions, showcasing superior generalization\u0000capabilities compared to the baseline model, ResNet-50.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CLIP Adaptation by Intra-modal Overlap Reduction
arXiv - CS - Computer Vision and Pattern Recognition Pub Date : 2024-09-17 DOI: arxiv-2409.11338
Alexey Kravets, Vinay Namboodiri
{"title":"CLIP Adaptation by Intra-modal Overlap Reduction","authors":"Alexey Kravets, Vinay Namboodiri","doi":"arxiv-2409.11338","DOIUrl":"https://doi.org/arxiv-2409.11338","url":null,"abstract":"Numerous methods have been proposed to adapt a pre-trained foundational CLIP\u0000model for few-shot classification. As CLIP is trained on a large corpus, it\u0000generalises well through adaptation to few-shot classification. In this work,\u0000we analyse the intra-modal overlap in image space in terms of embedding\u0000representation. Our analysis shows that, due to contrastive learning,\u0000embeddings from CLIP model exhibit high cosine similarity distribution overlap\u0000in the image space between paired and unpaired examples affecting the\u0000performance of few-shot training-free classification methods which rely on\u0000similarity in the image space for their predictions. To tackle intra-modal\u0000overlap we propose to train a lightweight adapter on a generic set of samples\u0000from the Google Open Images dataset demonstrating that this improves accuracy\u0000for few-shot training-free classification. We validate our contribution through\u0000extensive empirical analysis and demonstrate that reducing the intra-modal\u0000overlap leads to a) improved performance on a number of standard datasets, b)\u0000increased robustness to distribution shift and c) higher feature variance\u0000rendering the features more discriminative for downstream tasks.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
OSV: One Step is Enough for High-Quality Image to Video Generation
arXiv - CS - Computer Vision and Pattern Recognition Pub Date : 2024-09-17 DOI: arxiv-2409.11367
Xiaofeng Mao, Zhengkai Jiang, Fu-Yun Wang, Wenbing Zhu, Jiangning Zhang, Hao Chen, Mingmin Chi, Yabiao Wang
{"title":"OSV: One Step is Enough for High-Quality Image to Video Generation","authors":"Xiaofeng Mao, Zhengkai Jiang, Fu-Yun Wang, Wenbing Zhu, Jiangning Zhang, Hao Chen, Mingmin Chi, Yabiao Wang","doi":"arxiv-2409.11367","DOIUrl":"https://doi.org/arxiv-2409.11367","url":null,"abstract":"Video diffusion models have shown great potential in generating high-quality\u0000videos, making them an increasingly popular focus. However, their inherent\u0000iterative nature leads to substantial computational and time costs. While\u0000efforts have been made to accelerate video diffusion by reducing inference\u0000steps (through techniques like consistency distillation) and GAN training\u0000(these approaches often fall short in either performance or training\u0000stability). In this work, we introduce a two-stage training framework that\u0000effectively combines consistency distillation with GAN training to address\u0000these challenges. Additionally, we propose a novel video discriminator design,\u0000which eliminates the need for decoding the video latents and improves the final\u0000performance. Our model is capable of producing high-quality videos in merely\u0000one-step, with the flexibility to perform multi-step refinement for further\u0000performance enhancement. Our quantitative evaluation on the OpenWebVid-1M\u0000benchmark shows that our model significantly outperforms existing methods.\u0000Notably, our 1-step performance(FVD 171.15) exceeds the 8-step performance of\u0000the consistency distillation based method, AnimateLCM (FVD 184.79), and\u0000approaches the 25-step performance of advanced Stable Video Diffusion (FVD\u0000156.94).","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
NSSR-DIL: Null-Shot Image Super-Resolution Using Deep Identity Learning
arXiv - CS - Computer Vision and Pattern Recognition Pub Date : 2024-09-17 DOI: arxiv-2409.12165
Sree Rama Vamsidhar S, Rama Krishna Gorthi
{"title":"NSSR-DIL: Null-Shot Image Super-Resolution Using Deep Identity Learning","authors":"Sree Rama Vamsidhar S, Rama Krishna Gorthi","doi":"arxiv-2409.12165","DOIUrl":"https://doi.org/arxiv-2409.12165","url":null,"abstract":"The present State-of-the-Art (SotA) Image Super-Resolution (ISR) methods\u0000employ Deep Learning (DL) techniques using a large amount of image data. The\u0000primary limitation to extending the existing SotA ISR works for real-world\u0000instances is their computational and time complexities. In this paper, contrary\u0000to the existing methods, we present a novel and computationally efficient ISR\u0000algorithm that is independent of the image dataset to learn the ISR task. The\u0000proposed algorithm reformulates the ISR task from generating the Super-Resolved\u0000(SR) images to computing the inverse of the kernels that span the degradation\u0000space. We introduce Deep Identity Learning, exploiting the identity relation\u0000between the degradation and inverse degradation models. The proposed approach\u0000neither relies on the ISR dataset nor on a single input low-resolution (LR)\u0000image (like the self-supervised method i.e. ZSSR) to model the ISR task. Hence\u0000we term our model as Null-Shot Super-Resolution Using Deep Identity Learning\u0000(NSSR-DIL). The proposed NSSR-DIL model requires fewer computational resources,\u0000at least by an order of 10, and demonstrates a competitive performance on\u0000benchmark ISR datasets. Another salient aspect of our proposition is that the\u0000NSSR-DIL framework detours retraining the model and remains the same for\u0000varying scale factors like X2, X3, and X4. This makes our highly efficient ISR\u0000model more suitable for real-world applications.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Generalized Few-Shot Semantic Segmentation in Remote Sensing: Challenge and Benchmark
arXiv - CS - Computer Vision and Pattern Recognition Pub Date : 2024-09-17 DOI: arxiv-2409.11227
Clifford Broni-Bediako, Junshi Xia, Jian Song, Hongruixuan Chen, Mennatullah Siam, Naoto Yokoya
{"title":"Generalized Few-Shot Semantic Segmentation in Remote Sensing: Challenge and Benchmark","authors":"Clifford Broni-Bediako, Junshi Xia, Jian Song, Hongruixuan Chen, Mennatullah Siam, Naoto Yokoya","doi":"arxiv-2409.11227","DOIUrl":"https://doi.org/arxiv-2409.11227","url":null,"abstract":"Learning with limited labelled data is a challenging problem in various\u0000applications, including remote sensing. Few-shot semantic segmentation is one\u0000approach that can encourage deep learning models to learn from few labelled\u0000examples for novel classes not seen during the training. The generalized\u0000few-shot segmentation setting has an additional challenge which encourages\u0000models not only to adapt to the novel classes but also to maintain strong\u0000performance on the training base classes. While previous datasets and\u0000benchmarks discussed the few-shot segmentation setting in remote sensing, we\u0000are the first to propose a generalized few-shot segmentation benchmark for\u0000remote sensing. The generalized setting is more realistic and challenging,\u0000which necessitates exploring it within the remote sensing context. We release\u0000the dataset augmenting OpenEarthMap with additional classes labelled for the\u0000generalized few-shot evaluation setting. The dataset is released during the\u0000OpenEarthMap land cover mapping generalized few-shot challenge in the L3D-IVU\u0000workshop in conjunction with CVPR 2024. In this work, we summarize the dataset\u0000and challenge details in addition to providing the benchmark results on the two\u0000phases of the challenge for the validation and test sets.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0