TransUser's: A Transformer Based Salient Object Detection for Users Experience Generation in 360° Videos

I. Khan, Kyungjin Han, Jong Weon Lee
{"title":"TransUser's:基于变换器的突出物体检测,用于生成 360° 视频中的用户体验","authors":"I. Khan, Kyungjin Han, Jong Weon Lee","doi":"10.1109/AIxVR59861.2024.00042","DOIUrl":null,"url":null,"abstract":"A 360-degree video stream enables users to view their point of interest while giving them the sense of 'being there'. Performing head or hand manipulations to watch the salient objects and sceneries in such a video is a very tiresome task and the user may miss the interesting events. Compared to this, the automatic selection of a user's Point of Interest (PoI) in a 360° video is extremely challenging due to subjective viewpoints and varying degrees of satisfaction. To handle these challenges, we employed an attention-based transformer approach to detect salient objects inside the immersive contents. In the proposed framework, first, an input 360° video is converted into frames where each frame is passed to a CNNbased encoder. The CNN encoder generates feature maps of the input framers. Further, for an attention-based network, we used a stack of three transformers encoder with position embeddings to generate position-awareness embeddings of the encoded feature maps. Each transformer encoder is based on a multihead self-attention block and a multi-layer perceptron with various sets of attention blocks. Finally, encoded features and position embeddings from the transformer encoder are passed through a CNN decoder network to predict the salient object inside the 360° video frames. We evaluated our results on four immersive videos to find the effectiveness of the proposed framework. Further, we also compared our results with state-of-the-art methods where the proposed method outperformed the other existing models.","PeriodicalId":518749,"journal":{"name":"2024 IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR)","volume":"194 2","pages":"256-260"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"TransUser's: A Transformer Based Salient Object Detection for Users Experience Generation in 360° Videos\",\"authors\":\"I. Khan, Kyungjin Han, Jong Weon Lee\",\"doi\":\"10.1109/AIxVR59861.2024.00042\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A 360-degree video stream enables users to view their point of interest while giving them the sense of 'being there'. Performing head or hand manipulations to watch the salient objects and sceneries in such a video is a very tiresome task and the user may miss the interesting events. Compared to this, the automatic selection of a user's Point of Interest (PoI) in a 360° video is extremely challenging due to subjective viewpoints and varying degrees of satisfaction. To handle these challenges, we employed an attention-based transformer approach to detect salient objects inside the immersive contents. In the proposed framework, first, an input 360° video is converted into frames where each frame is passed to a CNNbased encoder. The CNN encoder generates feature maps of the input framers. Further, for an attention-based network, we used a stack of three transformers encoder with position embeddings to generate position-awareness embeddings of the encoded feature maps. Each transformer encoder is based on a multihead self-attention block and a multi-layer perceptron with various sets of attention blocks. 
Finally, encoded features and position embeddings from the transformer encoder are passed through a CNN decoder network to predict the salient object inside the 360° video frames. We evaluated our results on four immersive videos to find the effectiveness of the proposed framework. Further, we also compared our results with state-of-the-art methods where the proposed method outperformed the other existing models.\",\"PeriodicalId\":518749,\"journal\":{\"name\":\"2024 IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR)\",\"volume\":\"194 2\",\"pages\":\"256-260\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2024 IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AIxVR59861.2024.00042\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2024 IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIxVR59861.2024.00042","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

A 360-degree video stream lets users view their points of interest while giving them the sense of "being there". However, performing head or hand manipulations to follow the salient objects and scenery in such a video is a tiring task, and the user may miss interesting events. At the same time, automatically selecting a user's Point of Interest (PoI) in a 360° video is extremely challenging because viewpoints are subjective and satisfaction varies between users. To address these challenges, we employ an attention-based transformer approach to detect salient objects in immersive content. In the proposed framework, an input 360° video is first converted into frames, and each frame is passed to a CNN-based encoder that generates feature maps. For the attention-based network, we use a stack of three transformer encoders with position embeddings to produce position-aware embeddings of the encoded feature maps. Each transformer encoder is built from a multi-head self-attention block and a multi-layer perceptron, with various sets of attention blocks. Finally, the encoded features and position embeddings from the transformer encoders are passed through a CNN decoder network to predict the salient objects in the 360° video frames. We evaluated our results on four immersive videos to assess the effectiveness of the proposed framework, and we compared them with state-of-the-art methods, where the proposed method outperformed the other existing models.
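
The paper itself does not include code; the following PyTorch sketch only illustrates the pipeline the abstract describes (CNN encoder, a stack of three transformer encoders with position embeddings, CNN decoder). The channel widths, number of attention heads, token grid size, learned position embeddings, and sigmoid output are all illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of a CNN-encoder / transformer / CNN-decoder saliency model.
# All hyperparameters (channels, heads, depths) are assumptions; the abstract
# only specifies three transformer encoders with attention blocks and MLPs.
import torch
import torch.nn as nn

class TransformerEncoderBlock(nn.Module):
    """Multi-head self-attention followed by an MLP, with residuals."""
    def __init__(self, dim=256, heads=8, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        return x

class SalientObjectDetector(nn.Module):
    def __init__(self, dim=256, depth=3, grid=16):
        super().__init__()
        # CNN encoder: downsamples each 360-degree frame to a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Learned position embedding per spatial token (an assumption here).
        self.pos_embed = nn.Parameter(torch.zeros(1, grid * grid, dim))
        # Stack of three transformer encoders, per the abstract.
        self.transformer = nn.Sequential(
            *[TransformerEncoderBlock(dim) for _ in range(depth)]
        )
        # CNN decoder: upsamples tokens back to a per-pixel saliency map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(dim, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, frames):                     # frames: (B, 3, H, W)
        f = self.encoder(frames)                   # (B, dim, h, w)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)      # (B, h*w, dim)
        tokens = tokens + self.pos_embed[:, : h * w]
        tokens = self.transformer(tokens)
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        return torch.sigmoid(self.decoder(f))      # per-pixel saliency

# Usage: predict a saliency map for one 128x128 equirectangular frame.
model = SalientObjectDetector(grid=16)
saliency = model(torch.randn(1, 3, 128, 128))      # -> (1, 1, 128, 128)
```

In this sketch each frame's feature map is flattened into spatial tokens so the self-attention layers can relate distant regions of the equirectangular frame, which is one plausible reading of how the position-aware embeddings feed the decoder.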