{"title":"Discussion on Conceptual Form of The Third Generation Camera Robot System","authors":"Jing He, Shitou Liu, Yixin Zhao, Qiang Liu, Yixue Dou","doi":"10.1109/ICCST53801.2021.00010","DOIUrl":"https://doi.org/10.1109/ICCST53801.2021.00010","url":null,"abstract":"By reviewing the development of camera robot systems, this paper extracts the functional and application characteristics of the first two generations of systems. According to the actual requirements of film shooting and the natural evolution path, this paper proposes the conceptual form of a fully automatic third-generation camera robot system suitable for photographer control. To clearly explain the major differences between the third generation and the earlier two, this paper adopts a comparative method of narration. Considering the application, development status, on-site requirements, and engineering technology limitations of camera robot systems in recent years, the design principles and functional advantages of the third-generation system are discussed, and the corresponding key research directions are proposed.","PeriodicalId":222463,"journal":{"name":"2021 International Conference on Culture-oriented Science & Technology (ICCST)","volume":"290 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114953021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on Poverty Alleviation Path of Film and Television Design in the Context of Media Convergence","authors":"Guobin Peng, Xudong Pi, Jiajia Zhang","doi":"10.1109/ICCST53801.2021.00044","DOIUrl":"https://doi.org/10.1109/ICCST53801.2021.00044","url":null,"abstract":"As a common cause of the whole society, poverty alleviation relies on exploring diversified poverty alleviation paths within the overall project. Media convergence combines the typical advantages of traditional media and new media, providing a breakthrough for poverty alleviation in both media channels and content creation. With the rapid development of science and technology and the popularization of digitalization, the advantages of the artistic language of film and television design have gradually emerged in the poverty alleviation pattern, improving the transmission efficiency of poverty alleviation information. The layered forms of film and television expand the range of the audience, enabling poverty alleviation information to reach the entire society through media and video symbols and thus indirectly promoting poverty alleviation.","PeriodicalId":222463,"journal":{"name":"2021 International Conference on Culture-oriented Science & Technology (ICCST)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129849103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Full-Reference Video Quality Assessment Based on Spatiotemporal Visual Sensitivity","authors":"Huiyuan Fu, Da Pan, Ping Shi","doi":"10.1109/ICCST53801.2021.00071","DOIUrl":"https://doi.org/10.1109/ICCST53801.2021.00071","url":null,"abstract":"Video streaming services have become one of the important businesses of network service providers. Accurately predicting video perceptual quality scores can help provide high-quality video services. Many video quality assessment (VQA) methods attempt to simulate the human visual system (HVS) to achieve better performance. In this paper, we propose a full-reference video quality assessment (FR-VQA) method named DeepVQA-FBSA, which is based on spatiotemporal visual sensitivity. It first uses a convolutional neural network (CNN) to obtain visual sensitivity maps of frames from the input spatiotemporal information. The visual sensitivity maps are then used to obtain the perceptual features of each frame, which we call frame-level features in this paper. The frame-level features are fed into a Feature-Based Self-Attention (FBSA) module, which fuses them into video-level features used to predict the video quality score. The experimental results show that the predictions of our method are highly consistent with subjective evaluation results.","PeriodicalId":222463,"journal":{"name":"2021 International Conference on Culture-oriented Science & Technology (ICCST)","volume":"412 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125412931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Movie Scene Argument Extraction with Trigger Action Information","authors":"Qian Yi, Guixuan Zhang, Jie Liu, Shuwu Zhang","doi":"10.1109/ICCST53801.2021.00103","DOIUrl":"https://doi.org/10.1109/ICCST53801.2021.00103","url":null,"abstract":"Movie scene arguments are an essential part of a movie scene, and their extraction can help in understanding the movie plot. In this paper, we propose a movie scene argument extraction model that utilizes the trigger action paraphrase as extra information to improve argument extraction. Specifically, we obtain the paraphrase of the trigger from a dictionary and employ an attention mechanism to encode it into an argument-oriented embedding vector. We then use the argument-oriented embedding vector together with the instance embedding for argument extraction. Experimental results on a movie scene event extraction dataset and a widely used open-domain event extraction dataset prove the effectiveness of our model.","PeriodicalId":222463,"journal":{"name":"2021 International Conference on Culture-oriented Science & Technology (ICCST)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125436021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Named Entity Recognition of traditional architectural text based on BERT","authors":"Yifu Li, Wenjun Hou, Bing Bai","doi":"10.1109/ICCST53801.2021.00047","DOIUrl":"https://doi.org/10.1109/ICCST53801.2021.00047","url":null,"abstract":"Traditional architecture is an important carrier of traditional culture. Through deep learning models, relevant entities can be automatically extracted from unstructured texts to provide data support for the protection and inheritance of traditional architecture. However, research on text information extraction in this field has not been effectively carried out. In this paper, a dataset of nearly 50,000 words in this field is collected, sorted, and annotated; five types of entity labels are defined; annotation specifications are clarified; and a Named Entity Recognition method based on a pre-trained model is proposed. The BERT (Bidirectional Encoder Representations from Transformers) pre-trained model is used to capture dynamic word vector information, a Bi-directional Long Short-Term Memory (BiLSTM) module captures bidirectional contextual information from forward and reverse sequences, and classification mapping between labels is finally completed by a Conditional Random Field (CRF) module. The experiments show that, compared with other models, the proposed BERT-BiLSTM-CRF model achieves better recognition performance in this field, with an F1 score of 95.45%.","PeriodicalId":222463,"journal":{"name":"2021 International Conference on Culture-oriented Science & Technology (ICCST)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121292252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lightweight Image Super-Resolution via Dual Feature Aggregation Network","authors":"Shang Li, Guixuan Zhang, Zhengxiong Luo, Jie Liu, Zhi Zeng, Shuwu Zhang","doi":"10.1109/ICCST53801.2021.00104","DOIUrl":"https://doi.org/10.1109/ICCST53801.2021.00104","url":null,"abstract":"With the power of deep learning, super-resolution (SR) methods enjoy a dramatic boost in performance. However, they usually have a large model size and high computational complexity, which hinders their application in devices with limited memory and computing power. Some lightweight SR methods address this issue by directly designing shallower architectures, but this degrades SR performance. In this paper, we propose a dual feature aggregation strategy (DFA). It enhances feature utilization via feature reuse, which largely improves representation ability while introducing only marginal computational cost. Thus, a smaller model can achieve better cost-effectiveness with DFA. Specifically, DFA consists of local and global feature aggregation modules (LAM and GAM), which work together to adaptively fuse hierarchical features along the channel and spatial dimensions. Extensive experiments suggest that the proposed network performs favorably against state-of-the-art SR methods in terms of visual quality, memory footprint, and computational complexity.","PeriodicalId":222463,"journal":{"name":"2021 International Conference on Culture-oriented Science & Technology (ICCST)","volume":"8 7","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114085920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on the Development and Application of Virtual Reality Simulation Technology in Investigation and Research Courses","authors":"Xing Fang, K. Un, Xi Zhang","doi":"10.1109/ICCST53801.2021.00028","DOIUrl":"https://doi.org/10.1109/ICCST53801.2021.00028","url":null,"abstract":"The purpose of this study is to develop investigation and research courses, improve design education methods, and adapt to the rapid changes of the platform era. To improve design education in investigation and research courses, a virtual reality simulation experiment based on head-tracking VR is developed to display investigation results, so that users can investigate subjects given by a variety of NPCs and apply the findings to design. The core elements include User Virtual Realistic Control, Virtual AI System, and Research Information Collection. Although the content of the simulation is very simple, the experience gained through the simulation will have a positive impact on design education for investigation and research courses.","PeriodicalId":222463,"journal":{"name":"2021 International Conference on Culture-oriented Science & Technology (ICCST)","volume":"221 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131600209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Survey upon College Students’ Consumption Behavior on Short Video Platforms","authors":"Caiwen Zhao, Gu Wang, Guowei Li, Li Ding","doi":"10.1109/ICCST53801.2021.00133","DOIUrl":"https://doi.org/10.1109/ICCST53801.2021.00133","url":null,"abstract":"The rapid development of the short video industry has not only offered individuals innovative ways of entertainment and social contact, but also facilitated a new online consumption mode. However, consumption on short video platforms involves a flurry of problems, such as conformity to general consumption trends, blind consumption, lax quality management of some products, and difficulty in protecting consumers’ rights and benefits. In this paper, a questionnaire survey was employed to investigate and analyze college students’ usage and consumption behavior on short video platforms. It is found that (1) the content and style of short videos, (2) the individual charisma of the vlogger, (3) the user’s personal preference, and (4) the platform’s purchase mode are the main factors impacting college students’ consumption. The findings provide a reference for short video platforms seeking to improve user stickiness.","PeriodicalId":222463,"journal":{"name":"2021 International Conference on Culture-oriented Science & Technology (ICCST)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134338899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Survey on Original Video-based Watermarking","authors":"Zhongze Lv, Hu Guan, Ying Huang, Shuwu Zhang","doi":"10.1109/ICCST53801.2021.00098","DOIUrl":"https://doi.org/10.1109/ICCST53801.2021.00098","url":null,"abstract":"With the rapid development of communication technology, digital video has become an indispensable form of media content in people’s daily life and entertainment, and its copyright protection has become a vital research topic. Digital video watermarking is expected to become a technology that can solve the problem of digital video infringement on networks. Based on the embedding position, digital video watermarking algorithms fall into two major categories: compressed video-based and original video-based watermarking algorithms. The latter are more versatile than the former; hence, in this paper, we mainly focus on surveying original video-based watermarking algorithms.","PeriodicalId":222463,"journal":{"name":"2021 International Conference on Culture-oriented Science & Technology (ICCST)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131769422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Details Enhancement In Low Contrast Region Of Inspection Image Based On Fuzzy Rough Set","authors":"Junbao Zheng, Junpeng Ji, Xiu Liu","doi":"10.1109/ICCST53801.2021.00100","DOIUrl":"https://doi.org/10.1109/ICCST53801.2021.00100","url":null,"abstract":"In the large dynamic range of inspection images, low-density objects mostly exist in areas with low contrast, which makes them difficult to identify or display. Most existing image enhancement algorithms do not consider this characteristic of X-ray inspection images. To solve the detail enhancement problem for quarantine inspection, a fuzzy rough set method is proposed to extract low-density quarantine objects in X-ray inspection images. First, after negative operation and noise filtering, the inspection image is divided into two parts with a rough set method: one is the region of interest that may contain low-density quarantine objects, and the other is the remainder. Then, within the region of interest, a fuzzy membership degree is used to determine the probability that a given pixel belongs to the quarantine target. Finally, according to the pixel classification results, some pixel values are adjusted in HSV space to display the quarantine target distinctly. The capability of detail enhancement in low-contrast regions of high-dynamic-range images is also evaluated through experiments on simulated data and real X-ray images.","PeriodicalId":222463,"journal":{"name":"2021 International Conference on Culture-oriented Science & Technology (ICCST)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131715511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}