Knowledge Adaptation Network for Few-Shot Class-Incremental Learning
Ye Wang, Yaxiong Wang, Guoshuai Zhao, Xueming Qian
arXiv:2409.11770, 2024-09-18 (arXiv - CS - Computer Vision and Pattern Recognition)

Abstract: Few-shot class-incremental learning (FSCIL) aims to incrementally recognize new classes from a few samples while maintaining performance on previously learned classes. One effective way to meet this challenge is to construct prototypical evolution classifiers. Despite the advances made by most existing methods, the classifier weights are simply initialized from mean features; because representations of new classes are weak and biased, we argue that this strategy is suboptimal. In this paper, we tackle the issue from two aspects. First, thanks to the development of foundation models, we employ a foundation model, CLIP, as the network pedestal to provide a general representation for each class. Second, to generate a more reliable and comprehensive instance representation, we propose a Knowledge Adapter (KA) module that summarizes data-specific knowledge from the training data and fuses it into the general representation. Additionally, to adapt the knowledge learned from the base classes to the upcoming classes, we propose an Incremental Pseudo Episode Learning (IPEL) mechanism that simulates the actual FSCIL setting. Taken together, our proposed method, dubbed Knowledge Adaptation Network (KANet), achieves competitive performance on a wide range of datasets, including CIFAR100, CUB200, and ImageNet-R.
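As a concrete illustration of the fusion idea, the sketch below initializes a classifier weight from a CLIP-style class prototype and blends in a data-specific summary. The convex combination, the median summary, `alpha`, and the 512-d embedding size are hypothetical stand-ins for the learned KA module, not details from the paper:

```python
import numpy as np

def class_prototype(clip_features):
    """General representation for a class: mean of L2-normalized CLIP features."""
    feats = clip_features / np.linalg.norm(clip_features, axis=1, keepdims=True)
    return feats.mean(axis=0)

def fuse_knowledge(prototype, data_summary, alpha=0.5):
    """Fuse a data-specific summary into the general representation.
    The paper's KA module is learned; this convex combination is only a
    stand-in to show the fusion idea."""
    fused = (1.0 - alpha) * prototype + alpha * data_summary
    return fused / np.linalg.norm(fused)

rng = np.random.default_rng(0)
few_shot = rng.normal(size=(5, 512))     # 5 few-shot samples, 512-d embeddings
proto = class_prototype(few_shot)
summary = np.median(few_shot, axis=0)    # hypothetical data-specific summary
classifier_weight = fuse_knowledge(proto, summary)
```

The resulting unit-norm vector would serve as the initial classifier weight for the new class, in place of a plain mean-feature initialization.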
{"title":"Ultrasound Image Enhancement with the Variance of Diffusion Models","authors":"Yuxin Zhang, Clément Huneau, Jérôme Idier, Diana Mateus","doi":"arxiv-2409.11380","DOIUrl":"https://doi.org/arxiv-2409.11380","url":null,"abstract":"Ultrasound imaging, despite its widespread use in medicine, often suffers\u0000from various sources of noise and artifacts that impact the signal-to-noise\u0000ratio and overall image quality. Enhancing ultrasound images requires a\u0000delicate balance between contrast, resolution, and speckle preservation. This\u0000paper introduces a novel approach that integrates adaptive beamforming with\u0000denoising diffusion-based variance imaging to address this challenge. By\u0000applying Eigenspace-Based Minimum Variance (EBMV) beamforming and employing a\u0000denoising diffusion model fine-tuned on ultrasound data, our method computes\u0000the variance across multiple diffusion-denoised samples to produce high-quality\u0000despeckled images. This approach leverages both the inherent multiplicative\u0000noise of ultrasound and the stochastic nature of diffusion models. Experimental\u0000results on a publicly available dataset demonstrate the effectiveness of our\u0000method in achieving superior image reconstructions from single plane-wave\u0000acquisitions. 
The code is available at:\u0000https://github.com/Yuxin-Zhang-Jasmine/IUS2024_Diffusion.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":"65 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
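The variance step at the heart of the method can be sketched in a few lines. Here synthetic multiplicative-noise draws stand in for samples from the fine-tuned diffusion model, so the image size and noise level are illustrative only:

```python
import numpy as np

def variance_image(denoised_samples):
    """Pixel-wise variance across several diffusion-denoised draws of the
    same acquisition; speckle-dominated regions show high variance."""
    return np.stack(denoised_samples, axis=0).var(axis=0)

rng = np.random.default_rng(1)
clean = rng.random((64, 64))
# Stand-in for draws from the fine-tuned diffusion model: the same image
# perturbed by multiplicative (speckle-like) noise.
samples = [clean * (1.0 + 0.2 * rng.standard_normal((64, 64))) for _ in range(8)]
despeckled_map = variance_image(samples)
```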
SLAck: Semantic, Location, and Appearance Aware Open-Vocabulary Tracking
Siyuan Li, Lei Ke, Yung-Hsu Yang, Luigi Piccinelli, Mattia Segù, Martin Danelljan, Luc Van Gool
arXiv:2409.11235, 2024-09-17 (arXiv - CS - Computer Vision and Pattern Recognition)

Abstract: Open-vocabulary Multiple Object Tracking (MOT) aims to generalize trackers to novel categories not in the training set. Currently, the best-performing methods rely mainly on pure appearance matching. Because motion patterns are complex in large-vocabulary scenarios and classification of novel objects is unstable, existing methods either ignore motion and semantic cues or apply them heuristically in the final matching steps. In this paper, we present SLAck, a unified framework that jointly considers semantic, location, and appearance priors in the early steps of association and learns to integrate all valuable information through a lightweight spatial and temporal object graph. Our method eliminates complex post-processing heuristics for fusing different cues and significantly boosts association performance for large-scale open-vocabulary tracking. Without bells and whistles, we outperform previous state-of-the-art methods on novel-class tracking on the open-vocabulary MOT and TAO TETA benchmarks. Code: https://github.com/siyuanliii/SLAck
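The early-fusion idea, scoring associations with all cues at once rather than applying them heuristically in a final step, can be sketched as below. The fixed cue weights and the greedy matcher are simplifications: SLAck learns the integration with a spatio-temporal object graph, which this sketch does not reproduce:

```python
import numpy as np

def fuse_cues(sem, loc, app, weights=(1.0, 1.0, 1.0)):
    """Combine semantic, location, and appearance affinity matrices into one
    association score before matching. Fixed weights are illustrative only."""
    ws, wl, wa = weights
    return ws * sem + wl * loc + wa * app

def greedy_match(scores, thresh=0.5):
    """Greedy one-to-one assignment of detections to tracks by fused score."""
    scores = scores.astype(float)
    matches = []
    while scores.size and scores.max() > thresh:
        i, j = np.unravel_index(scores.argmax(), scores.shape)
        matches.append((int(i), int(j)))
        scores[i, :] = -np.inf   # each track matched at most once
        scores[:, j] = -np.inf   # each detection matched at most once
    return matches

fused = fuse_cues(np.eye(2), np.eye(2), np.array([[0.7, 0.1], [0.0, 0.4]]))
assignments = greedy_match(fused)
```

A production tracker would typically use an optimal assignment solver (e.g. Hungarian matching) instead of the greedy loop; the point here is only that all cues enter one score matrix before matching.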
{"title":"STCMOT: Spatio-Temporal Cohesion Learning for UAV-Based Multiple Object Tracking","authors":"Jianbo Ma, Chuanming Tang, Fei Wu, Can Zhao, Jianlin Zhang, Zhiyong Xu","doi":"arxiv-2409.11234","DOIUrl":"https://doi.org/arxiv-2409.11234","url":null,"abstract":"Multiple object tracking (MOT) in Unmanned Aerial Vehicle (UAV) videos is\u0000important for diverse applications in computer vision. Current MOT trackers\u0000rely on accurate object detection results and precise matching of target\u0000reidentification (ReID). These methods focus on optimizing target spatial\u0000attributes while overlooking temporal cues in modelling object relationships,\u0000especially for challenging tracking conditions such as object deformation and\u0000blurring, etc. To address the above-mentioned issues, we propose a novel\u0000Spatio-Temporal Cohesion Multiple Object Tracking framework (STCMOT), which\u0000utilizes historical embedding features to model the representation of ReID and\u0000detection features in a sequential order. Concretely, a temporal embedding\u0000boosting module is introduced to enhance the discriminability of individual\u0000embedding based on adjacent frame cooperation. While the trajectory embedding\u0000is then propagated by a temporal detection refinement module to mine salient\u0000target locations in the temporal field. Extensive experiments on the\u0000VisDrone2019 and UAVDT datasets demonstrate our STCMOT sets a new\u0000state-of-the-art performance in MOTA and IDF1 metrics. 
The source codes are\u0000released at https://github.com/ydhcg-BoBo/STCMOT.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
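A minimal numerical sketch of the adjacent-frame cooperation idea: each object's embedding is blended with its counterpart from the previous frame and re-normalized. The momentum blend is an assumption for illustration; STCMOT's boosting module is learned:

```python
import numpy as np

def boost_embedding(curr, prev, momentum=0.8):
    """Blend each object's current ReID embedding with its embedding from
    the adjacent frame, then re-normalize. A simplified stand-in for
    STCMOT's temporal embedding boosting module."""
    mixed = momentum * curr + (1.0 - momentum) * prev
    return mixed / np.linalg.norm(mixed, axis=-1, keepdims=True)

rng = np.random.default_rng(2)
prev_emb = rng.normal(size=(3, 128))   # 3 tracked objects, 128-d embeddings
curr_emb = rng.normal(size=(3, 128))
boosted = boost_embedding(curr_emb, prev_emb)
```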
{"title":"Reducing Catastrophic Forgetting in Online Class Incremental Learning Using Self-Distillation","authors":"Kotaro Nagata, Hiromu Ono, Kazuhiro Hotta","doi":"arxiv-2409.11329","DOIUrl":"https://doi.org/arxiv-2409.11329","url":null,"abstract":"In continual learning, there is a serious problem of catastrophic forgetting,\u0000in which previous knowledge is forgotten when a model learns new tasks. Various\u0000methods have been proposed to solve this problem. Replay methods which replay\u0000data from previous tasks in later training, have shown good accuracy. However,\u0000replay methods have a generalizability problem from a limited memory buffer. In\u0000this paper, we tried to solve this problem by acquiring transferable knowledge\u0000through self-distillation using highly generalizable output in shallow layer as\u0000a teacher. Furthermore, when we deal with a large number of classes or\u0000challenging data, there is a risk of learning not converging and not\u0000experiencing overfitting. Therefore, we attempted to achieve more efficient and\u0000thorough learning by prioritizing the storage of easily misclassified samples\u0000through a new method of memory update. We confirmed that our proposed method\u0000outperformed conventional methods by experiments on CIFAR10, CIFAR100, and\u0000MiniimageNet datasets.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-OCT-SelfNet: Integrating Self-Supervised Learning with Multi-Source Data Fusion for Enhanced Multi-Class Retinal Disease Classification
Fatema-E- Jannat, Sina Gholami, Jennifer I. Lim, Theodore Leng, Minhaj Nur Alam, Hamed Tabkhi
arXiv:2409.11375, 2024-09-17 (arXiv - CS - Computer Vision and Pattern Recognition)

Abstract: In the medical domain, acquiring large datasets poses significant challenges due to privacy concerns. Nonetheless, developing a robust deep-learning model for retinal disease diagnosis requires a substantial training dataset, and the capacity to generalize effectively from smaller datasets remains a persistent challenge. This scarcity of data presents a significant barrier to the practical deployment of scalable medical AI solutions. To address this issue, we combine a wide range of data sources and develop a self-supervised framework, based on large language models (LLMs) and SwinV2, to gain a deeper understanding of multi-modal dataset representations, enhancing the model's ability to extrapolate to new data for detecting eye diseases from optical coherence tomography (OCT) images. We adopt a two-phase training methodology: self-supervised pre-training followed by fine-tuning of a downstream supervised classifier. An ablation study across three datasets, covering various encoder backbones, training without data fusion, low-data-availability settings, and training without self-supervised pre-training, highlights the robustness of our method. Our findings demonstrate consistent performance across these diverse conditions and superior generalization compared to the ResNet-50 baseline.
{"title":"CLIP Adaptation by Intra-modal Overlap Reduction","authors":"Alexey Kravets, Vinay Namboodiri","doi":"arxiv-2409.11338","DOIUrl":"https://doi.org/arxiv-2409.11338","url":null,"abstract":"Numerous methods have been proposed to adapt a pre-trained foundational CLIP\u0000model for few-shot classification. As CLIP is trained on a large corpus, it\u0000generalises well through adaptation to few-shot classification. In this work,\u0000we analyse the intra-modal overlap in image space in terms of embedding\u0000representation. Our analysis shows that, due to contrastive learning,\u0000embeddings from CLIP model exhibit high cosine similarity distribution overlap\u0000in the image space between paired and unpaired examples affecting the\u0000performance of few-shot training-free classification methods which rely on\u0000similarity in the image space for their predictions. To tackle intra-modal\u0000overlap we propose to train a lightweight adapter on a generic set of samples\u0000from the Google Open Images dataset demonstrating that this improves accuracy\u0000for few-shot training-free classification. 
We validate our contribution through\u0000extensive empirical analysis and demonstrate that reducing the intra-modal\u0000overlap leads to a) improved performance on a number of standard datasets, b)\u0000increased robustness to distribution shift and c) higher feature variance\u0000rendering the features more discriminative for downstream tasks.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
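A toy sketch of the image-space, training-free prediction that the analysis targets: queries are labeled by cosine similarity to few-shot support embeddings, and `adapter` marks where the proposed lightweight map would plug in. The adapter's form here is hypothetical; in the paper it is trained on Google Open Images samples:

```python
import numpy as np

def cosine(a, b):
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def training_free_classify(query, support, support_labels, adapter=None):
    """Nearest-support prediction from image-space similarity. `adapter` is
    a hypothetical lightweight map meant to reduce intra-modal overlap;
    identity when absent."""
    if adapter is not None:
        query, support = adapter(query), adapter(support)
    sims = cosine(query, support)          # (n_query, n_support)
    return support_labels[sims.argmax(axis=1)]

support = np.array([[1.0, 0.0], [0.0, 1.0]])   # toy image embeddings
labels = np.array([0, 1])
queries = np.array([[0.9, 0.1], [0.2, 0.8]])
preds = training_free_classify(queries, support, labels)
```

When paired and unpaired similarity distributions overlap heavily, the `argmax` above becomes unreliable, which is exactly the failure mode the adapter is meant to reduce.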
{"title":"OSV: One Step is Enough for High-Quality Image to Video Generation","authors":"Xiaofeng Mao, Zhengkai Jiang, Fu-Yun Wang, Wenbing Zhu, Jiangning Zhang, Hao Chen, Mingmin Chi, Yabiao Wang","doi":"arxiv-2409.11367","DOIUrl":"https://doi.org/arxiv-2409.11367","url":null,"abstract":"Video diffusion models have shown great potential in generating high-quality\u0000videos, making them an increasingly popular focus. However, their inherent\u0000iterative nature leads to substantial computational and time costs. While\u0000efforts have been made to accelerate video diffusion by reducing inference\u0000steps (through techniques like consistency distillation) and GAN training\u0000(these approaches often fall short in either performance or training\u0000stability). In this work, we introduce a two-stage training framework that\u0000effectively combines consistency distillation with GAN training to address\u0000these challenges. Additionally, we propose a novel video discriminator design,\u0000which eliminates the need for decoding the video latents and improves the final\u0000performance. Our model is capable of producing high-quality videos in merely\u0000one-step, with the flexibility to perform multi-step refinement for further\u0000performance enhancement. 
Our quantitative evaluation on the OpenWebVid-1M\u0000benchmark shows that our model significantly outperforms existing methods.\u0000Notably, our 1-step performance(FVD 171.15) exceeds the 8-step performance of\u0000the consistency distillation based method, AnimateLCM (FVD 184.79), and\u0000approaches the 25-step performance of advanced Stable Video Diffusion (FVD\u0000156.94).","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TopoMaskV2: Enhanced Instance-Mask-Based Formulation for the Road Topology Problem
M. Esat Kalfaoglu, Halil Ibrahim Ozturk, Ozsel Kilinc, Alptekin Temizel
arXiv:2409.11325, 2024-09-17 (arXiv - CS - Computer Vision and Pattern Recognition)

Abstract: Recently, the centerline has become a popular representation of lanes due to its advantages for solving the road topology problem. To enhance centerline prediction, we have developed a new approach called TopoMask. Unlike previous methods that rely on keypoints or parametric formulations, TopoMask uses an instance-mask-based formulation coupled with a masked-attention-based transformer architecture. We introduce a quad-direction label representation to enrich the mask instances with flow information and design a corresponding post-processing technique for mask-to-centerline conversion. Additionally, we demonstrate that the instance-mask formulation provides information complementary to parametric Bezier regressions, and fusing both outputs improves detection and topology performance. Moreover, we analyze the shortcomings of the pillar assumption in the Lift Splat technique and adapt a multi-height-bin configuration. Experimental results show that TopoMask achieves state-of-the-art performance on the OpenLane-V2 dataset, improving the V1.1 OLS baseline from 44.1 to 49.4 on Subset-A and from 44.7 to 51.8 on Subset-B.
{"title":"LPT++: Efficient Training on Mixture of Long-tailed Experts","authors":"Bowen Dong, Pan Zhou, Wangmeng Zuo","doi":"arxiv-2409.11323","DOIUrl":"https://doi.org/arxiv-2409.11323","url":null,"abstract":"We introduce LPT++, a comprehensive framework for long-tailed classification\u0000that combines parameter-efficient fine-tuning (PEFT) with a learnable model\u0000ensemble. LPT++ enhances frozen Vision Transformers (ViTs) through the\u0000integration of three core components. The first is a universal long-tailed\u0000adaptation module, which aggregates long-tailed prompts and visual adapters to\u0000adapt the pretrained model to the target domain, meanwhile improving its\u0000discriminative ability. The second is the mixture of long-tailed experts\u0000framework with a mixture-of-experts (MoE) scorer, which adaptively calculates\u0000reweighting coefficients for confidence scores from both visual-only and\u0000visual-language (VL) model experts to generate more accurate predictions.\u0000Finally, LPT++ employs a three-phase training framework, wherein each critical\u0000module is learned separately, resulting in a stable and effective long-tailed\u0000classification training paradigm. Besides, we also propose the simple version\u0000of LPT++ namely LPT, which only integrates visual-only pretrained ViT and\u0000long-tailed prompts to formulate a single model method. LPT can clearly\u0000illustrate how long-tailed prompts works meanwhile achieving comparable\u0000performance without VL pretrained models. 
Experiments show that, with only ~1%\u0000extra trainable parameters, LPT++ achieves comparable accuracy against all the\u0000counterparts.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":"15 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
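The MoE scorer's role can be sketched as a per-sample softmax reweighting of the two experts' confidence scores. The gate logits below are hand-set for illustration, whereas LPT++ computes them with a learned scorer:

```python
import numpy as np

def moe_fuse(visual_scores, vl_scores, gate_logits):
    """Per-sample reweighting of confidence scores from a visual-only expert
    and a visual-language expert. LPT++ produces gate_logits with a learned
    MoE scorer; here they are given directly."""
    shifted = gate_logits - gate_logits.max(axis=1, keepdims=True)
    w = np.exp(shifted)
    w = w / w.sum(axis=1, keepdims=True)       # softmax over the two experts
    return w[:, :1] * visual_scores + w[:, 1:] * vl_scores

visual = np.array([[0.9, 0.1], [0.2, 0.8]])
vl = np.array([[0.6, 0.4], [0.1, 0.9]])
gates = np.array([[12.0, -12.0], [-12.0, 12.0]])  # strongly favor one expert each
fused_scores = moe_fuse(visual, vl, gates)
```

With near-saturated gates, the first sample's prediction follows the visual expert and the second follows the VL expert, showing how the reweighting adapts per sample.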