Enriching visual feature representations for vision–language tasks using spectral transforms

Oscar Ondeng, Heywood Ouma, Peter Akuon

Image and Vision Computing, Volume 154, Article 105390 (published 2025-02-01)
DOI: 10.1016/j.imavis.2024.105390

Citations: 0

Abstract
This paper presents a novel approach to enrich visual feature representations for vision–language tasks, such as image classification and captioning, by incorporating spectral transforms. Although spectral transforms have been widely utilized in signal processing, their application in deep learning has been relatively under-explored. We conducted extensive experiments on various transforms, including the Discrete Fourier Transform (DFT), Discrete Cosine Transform, Discrete Hartley Transform, and Hadamard Transform. Our findings highlight the effectiveness of the DFT, mainly when using the magnitude of complex outputs, in enriching visual features. The proposed method, validated on the MS COCO and Kylberg datasets, demonstrates superior performance compared to previous models, with a 4.8% improvement in CIDEr scores for image captioning tasks. Additionally, our approach enhances caption diversity by up to 3.1% and improves generation speed by up to 2% in Transformer models. These results underscore the potential of spectral feature enrichment in advancing vision–language tasks.
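The abstract highlights using the magnitude of the DFT's complex outputs to enrich visual features. The paper's exact enrichment mechanism is not given here, so the following is only a minimal sketch of one plausible reading: concatenating each feature vector with the magnitude spectrum of its 1-D DFT (the concatenation step and function name are assumptions, not the authors' stated method).

```python
import numpy as np

def enrich_features_dft(features: np.ndarray) -> np.ndarray:
    """Append DFT magnitudes to visual feature vectors.

    features: (n, d) array of visual features (e.g. CNN outputs).
    Returns an (n, 2d) array: the original features concatenated with
    the magnitude of their 1-D DFT taken along the feature axis.
    """
    spectrum = np.fft.fft(features, axis=-1)   # complex DFT per feature vector
    magnitude = np.abs(spectrum)               # keep magnitude, discard phase
    return np.concatenate([features, magnitude], axis=-1)

# Toy usage: 4 feature vectors of dimension 8 -> enriched dimension 16.
feats = np.random.default_rng(0).normal(size=(4, 8))
enriched = enrich_features_dft(feats)
print(enriched.shape)  # (4, 16)
```

Taking the magnitude makes the added channels real-valued and invariant to circular shifts of the feature vector, which is one reason a magnitude-only DFT representation can be convenient to feed into standard Transformer layers.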
Journal Introduction
The primary aim of Image and Vision Computing is to provide an effective medium of interchange for the results of high-quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real-world scenes. It seeks to deepen understanding in the discipline by encouraging quantitative comparison and performance evaluation of the proposed methodology. Coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, and image databases.