Free-VSC: Free Semantics from Visual Foundation Models for Unsupervised Video Semantic Compression

Yuan Tian, Guo Lu, Guangtao Zhai
arXiv:2409.11718 · arXiv - CS - Computer Vision and Pattern Recognition · 2024-09-18
Unsupervised video semantic compression (UVSC), i.e., compressing videos to
better support various analysis tasks, has recently garnered attention.
However, the semantic richness of previous methods remains limited, owing to
factors such as a single semantic learning objective and limited training data.
To address this, we propose to boost the UVSC task by absorbing rich,
off-the-shelf semantics from visual foundation models (VFMs). Specifically, we
introduce a VFM-shared semantic alignment layer, complemented by VFM-specific
prompts, to flexibly align semantics between the compressed video and various
VFMs. This allows different VFMs to collaboratively build a mutually enhanced
semantic space that guides the learning of the compression model. Moreover, we
introduce a dynamic
trajectory-based inter-frame compression scheme, which first estimates the
semantic trajectory based on the historical content, and then traverses along
the trajectory to predict the future semantics as the coding context. This
reduces the overall bit cost of the system, further improving the compression
efficiency. Our approach outperforms previous coding methods on three
mainstream tasks and six datasets.
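The abstract does not specify the alignment architecture, but the idea of one shared alignment layer conditioned by per-VFM prompts can be sketched roughly as follows. All dimensions, the linear form of the layer, and the cosine objective are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)
D_VIDEO, D_SEM, N_VFMS = 64, 32, 3  # hypothetical feature dimensions

# One projection shared across all VFMs ...
W_shared = rng.normal(scale=0.1, size=(D_VIDEO, D_SEM))
# ... plus a small per-VFM prompt vector that specializes the shared layer.
prompts = rng.normal(scale=0.1, size=(N_VFMS, D_SEM))

def align(video_feat, vfm_idx):
    """Map a compressed-video feature into VFM vfm_idx's semantic space."""
    return video_feat @ W_shared + prompts[vfm_idx]

def cosine_align_loss(video_feat, vfm_feats):
    """Average (1 - cosine similarity) between the aligned video feature
    and each VFM's target feature; all VFMs jointly supervise W_shared."""
    losses = []
    for i, target in enumerate(vfm_feats):
        pred = align(video_feat, i)
        cos = pred @ target / (np.linalg.norm(pred) * np.linalg.norm(target))
        losses.append(1.0 - cos)
    return float(np.mean(losses))
```

Because every VFM's loss flows through the same `W_shared`, the VFMs jointly shape one semantic space, while the cheap per-VFM prompts absorb their individual differences.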
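The trajectory-based inter-frame scheme can be caricatured with a simple linear extrapolation: fit a trajectory to past semantic features, predict the next frame's semantics, and code only the residual. The linear fit is a stand-in assumption for the learned trajectory model described in the abstract:

```python
import numpy as np

def predict_next_semantics(history):
    """Fit a per-dimension linear trajectory to past semantic features
    (shape (T, D)) and extrapolate one step ahead to t = T."""
    history = np.asarray(history, dtype=float)
    t = np.arange(len(history))
    # Least-squares line fit per feature dimension; rows are (slope, intercept).
    slope, intercept = np.polyfit(t, history, deg=1)
    return slope * len(history) + intercept

def residual_to_code(actual, history):
    """Only the gap between actual and predicted semantics needs coding;
    a good prediction makes this residual small and cheap in bits."""
    return np.asarray(actual, dtype=float) - predict_next_semantics(history)
```

For example, with history `[[0, 0], [1, 2], [2, 4]]` the predicted next semantics are `[3, 6]`, so a frame whose true semantics are close to that prediction leaves almost nothing to encode, which is how the scheme lowers the overall bit cost.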