VR+HD: Video Semantic Reconstruction From Spatio-Temporal Scene Graphs

Chenxing Li; Yiping Duan; Qiyuan Du; Shiqi Sun; Xin Deng; Xiaoming Tao

IEEE Journal of Selected Topics in Signal Processing, DOI: 10.1109/JSTSP.2023.3323654, published 2023-09-01
Abstract:
With the development of computer science and deep learning, AI-based generation technology is becoming increasingly mature. Video has become one of the most important information carriers in daily life because of the large amount of data and information it conveys. However, because of this large amount of information and its complex semantics, video generation, especially High Definition (HD) video generation, remains a difficult problem in deep learning, and video semantic representation and semantic reconstruction are likewise challenging tasks. Because video content is changeable and its information is highly correlated, we propose an HD video generation model driven by a spatio-temporal scene graph: the spatio-temporal scene graph to video (StSg2vid) model. First, we input a spatio-temporal scene graph sequence as the semantic representation of the information in each frame of the video. The scene graph describing the semantic information of each frame encodes the motion progress of each object in the video at that moment, acting as a clock. The spatio-temporal scene graph propagates relationship information between objects through a graph convolutional neural network and predicts the scene layout at each moment. Finally, an image generation model predicts the frame at the current moment: each frame depends on the scene layout at the current moment and on the frame and scene layout at the previous moment. We introduce a flow network, a warping-based prediction model, and spatially-adaptive normalization (SPADE) to generate each predicted frame. We evaluate on the Action Genome dataset; compared with current state-of-the-art algorithms, the videos generated by our model achieve better results in both quantitative metrics and user evaluations. In addition, we generalize the StSg2vid model to virtual reality (VR) videos of indoor scenes, preliminarily exploring a method for VR video generation with good results.
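To make the described pipeline concrete, below is a minimal PyTorch sketch of the components the abstract names: a graph-convolution step that propagates relationship information between objects, a layout head that predicts per-object bounding boxes, flow-based warping of the previous frame, and a SPADE normalization block. All module names, feature dimensions, and the toy usage are illustrative assumptions; this is a sketch of the general technique, not the paper's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphConv(nn.Module):
    # One message-passing step over (subject, predicate, object) triples.
    def __init__(self, dim):
        super().__init__()
        self.edge_mlp = nn.Linear(3 * dim, 3 * dim)  # jointly updates s, p, o
        self.node_mlp = nn.Linear(dim, dim)

    def forward(self, obj_feats, pred_feats, edges):
        # edges: (E, 2) long tensor of (subject_idx, object_idx), one row per predicate
        s, o = edges[:, 0], edges[:, 1]
        triple = torch.cat([obj_feats[s], pred_feats, obj_feats[o]], dim=-1)
        new_s, new_p, new_o = self.edge_mlp(triple).chunk(3, dim=-1)
        agg = torch.zeros_like(obj_feats)
        agg.index_add_(0, s, new_s)  # scatter messages back onto subject nodes
        agg.index_add_(0, o, new_o)  # ... and onto object nodes
        return self.node_mlp(obj_feats + agg), new_p

class LayoutHead(nn.Module):
    # Predicts one bounding box per object: the scene layout for this frame.
    def __init__(self, dim):
        super().__init__()
        self.box = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 4))

    def forward(self, obj_feats):
        return self.box(obj_feats).sigmoid()  # (x1, y1, x2, y2), normalized to [0, 1]

class SPADE(nn.Module):
    # Spatially-adaptive normalization conditioned on a semantic layout map.
    def __init__(self, ch, layout_ch):
        super().__init__()
        self.norm = nn.BatchNorm2d(ch, affine=False)
        self.gamma = nn.Conv2d(layout_ch, ch, 3, padding=1)
        self.beta = nn.Conv2d(layout_ch, ch, 3, padding=1)

    def forward(self, x, layout):
        layout = F.interpolate(layout, size=x.shape[2:], mode='nearest')
        return self.norm(x) * (1 + self.gamma(layout)) + self.beta(layout)

def warp(prev_frame, flow):
    # Backward-warp the previous frame with a flow field given in
    # normalized [-1, 1] coordinates, shape (N, 2, H, W).
    n, _, h, w = prev_frame.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).expand(n, h, w, 2)
    grid = base + flow.permute(0, 2, 3, 1)
    return F.grid_sample(prev_frame, grid, align_corners=True)

# Toy usage: 3 objects, 2 relationships, 64-dim features.
gcn, head = GraphConv(64), LayoutHead(64)
obj, pred = torch.randn(3, 64), torch.randn(2, 64)
edges = torch.tensor([[0, 1], [1, 2]])
obj, pred = gcn(obj, pred, edges)
boxes = head(obj)                          # per-object layout for this frame
warped = warp(torch.randn(1, 3, 64, 64),   # previous frame
              torch.zeros(1, 2, 64, 64))   # zero flow -> identity warp
print(boxes.shape, warped.shape)           # torch.Size([3, 4]) torch.Size([1, 3, 64, 64])

Under these assumptions, one full generation step would chain the pieces in the order the abstract gives: run the graph convolution on the scene graph at time t, rasterize the predicted boxes into a layout map, warp frame t-1 with a predicted flow field, and refine the result through a SPADE-conditioned generator.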
Journal introduction:
The IEEE Journal of Selected Topics in Signal Processing (JSTSP) focuses on the Field of Interest of the IEEE Signal Processing Society, which encompasses the theory and application of various signal processing techniques. These techniques include filtering, coding, transmitting, estimating, detecting, analyzing, recognizing, synthesizing, recording, and reproducing signals using digital or analog devices. The term "signal" covers a wide range of data types, including audio, video, speech, image, communication, geophysical, sonar, radar, medical, musical, and others.
The journal format allows for in-depth exploration of signal processing topics, enabling the Society to cover both established and emerging areas. This includes interdisciplinary fields such as biomedical engineering and language processing, as well as areas not traditionally associated with engineering.