Deep Learning Attention-Ranked Media Space Generation for Virtual Reality Equirectangular Scene Augmentation

Joshua Bercich, Vera Chung, Xiaoming Chen

2023 IEEE Region 10 Symposium (TENSYMP), published 2023-09-06

DOI: 10.1109/TENSYMP55890.2023.10223646

Citations: 0
Abstract
Virtual Reality has seen rapid growth, transforming industries such as tertiary education, marketing, and entertainment. Developments in virtual world-building, such as the Metaverse, pose challenges including prohibitive technical-skill requirements. This work constructs a deep-learning method for generating attention-ranked media spaces, reducing the skill demanded for scene augmentation. Two segmentation tasks were addressed: true-perspective viewport media-space inference, and gaze-attention prediction for equirectangular 360-degree projections. Combining the results produced ranked spaces that provide multimedia implantation locations. Ablation studies assessed TranSalNet, a leading attention Transformer, for attention saliency, accounting for model pre-encoders; it was compared against U-Net for media-space generation. The limitations of weak attention supervision and architecture overparameterisation were addressed with modified Salient Object Subitizing and DT-Fixup algorithms, respectively. These contributions yielded an overall improvement over second-best models, demonstrating experimental success.
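The abstract refers to true-perspective viewport inference from equirectangular 360-degree projections. As a hedged illustration only (this is a standard gnomonic-projection sampler, not the paper's implementation; the function name, angle conventions, and nearest-neighbour sampling are assumptions of this sketch), extracting a perspective viewport from an equirectangular image might look like:

```python
import numpy as np

def equirect_to_viewport(equi, yaw, pitch, fov_deg, out_h, out_w):
    """Sample a perspective viewport from an equirectangular image (sketch).

    equi: (H, W, C) array; yaw/pitch in radians; fov_deg: horizontal FOV.
    Uses a pinhole (gnomonic) model with nearest-neighbour sampling.
    """
    H, W = equi.shape[:2]
    # Focal length in pixels from the horizontal field of view.
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)

    # Pixel grid on the image plane at z = f (camera looks along +z).
    x = np.arange(out_w) - (out_w - 1) / 2
    y = np.arange(out_h) - (out_h - 1) / 2
    xx, yy = np.meshgrid(x, y)
    dirs = np.stack([xx, yy, np.full_like(xx, f, dtype=float)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate rays by pitch (about the x-axis) then yaw (about the y-axis).
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    dirs = dirs @ (Ry @ Rx).T

    # Ray directions -> spherical (lon, lat) -> equirectangular pixels.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])   # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))  # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).round().astype(int)
    v = ((lat / np.pi + 0.5) * (H - 1)).round().astype(int)
    return equi[np.clip(v, 0, H - 1), np.clip(u, 0, W - 1)]
```

A viewport produced this way could then be fed to a per-view segmentation model such as U-Net, while the saliency branch operates on the full equirectangular frame.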