Visual transformer-based image retrieval with multiple loss fusion
Huayong Liu, Cong Huang, Hanjun Jin, Xiaosi Fu, Pei Shi
International Conference on Electronic Information Technology, 2023-08-15. DOI: 10.1117/12.2685738
Deep-hash-based image retrieval uses hash learning to encode each image into a fixed-length hash code, enabling fast retrieval and matching. However, previous deep hash retrieval models built on convolutional neural networks extract local image information through convolution and pooling, so capturing long-range dependencies requires deeper networks, which increases model complexity and computation. In this paper, we propose a visual Transformer model based on self-attention that learns long-range dependencies in images and strengthens feature extraction. We further propose a multi-loss fusion objective that combines a hash contrastive loss, a classification loss, and a quantization loss, making full use of image label information and learning richer latent semantics to improve the quality of the hash codes. Experimental results on two datasets and several hash-code lengths demonstrate that the proposed method outperforms multiple classical CNN-based deep hash retrieval methods and two transformer-based hash retrieval methods.
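The abstract names the three fused losses but not their exact formulations or weights. Below is a minimal sketch of one common way such a multi-loss fusion is implemented for deep hashing, assuming a pairwise contrastive loss on relaxed (tanh) hash codes, cross-entropy classification on label logits, and a quantization penalty pushing codes toward {-1, +1}; the weights alpha, beta, gamma and the margin are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def multi_loss_fusion(hash_codes, logits, labels,
                      margin=1.0, alpha=1.0, beta=1.0, gamma=0.1):
    """Sketch of a fused hashing objective: contrastive + classification + quantization.
    hash_codes: (B, K) relaxed codes in [-1, 1] (e.g. tanh of the hash head output)
    logits:     (B, C) class scores from a classification head
    labels:     (B,)   integer class labels
    Formulations and weights are assumptions, not taken from the paper."""
    # Pairwise similarity targets from labels: 1 if same class, else 0.
    sim = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    # Pairwise Euclidean distances between relaxed hash codes.
    dist = torch.cdist(hash_codes, hash_codes, p=2)
    # Contrastive loss: pull similar pairs together, push dissimilar pairs beyond the margin.
    contrastive = (sim * dist.pow(2) +
                   (1 - sim) * F.relu(margin - dist).pow(2)).mean()
    # Classification loss uses the image label information directly.
    classification = F.cross_entropy(logits, labels)
    # Quantization loss: drive relaxed codes toward binary values {-1, +1}.
    quantization = (hash_codes.abs() - 1).pow(2).mean()
    return alpha * contrastive + beta * classification + gamma * quantization
```

In this kind of setup the contrastive and classification terms exploit the label information the abstract mentions, while the quantization term limits the information lost when the relaxed codes are thresholded to binary hash codes at retrieval time.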