2021 International Conference on Culture-oriented Science & Technology (ICCST): Latest Publications

Discussion on Conceptual Form of The Third Generation Camera Robot System
Pub Date: 2021-11-01 | DOI: 10.1109/ICCST53801.2021.00010
Authors: Jing He, Shitou Liu, Yixin Zhao, Qiang Liu, Yixue Dou
Abstract: By reviewing the development of camera robot systems, this paper extracts the functional and application characteristics of the first two generations of systems. Based on the actual requirements of film shooting and the natural evolution path, it proposes the conceptual form of a fully automatic third-generation camera robot system suited to photographer control. To clearly explain the major differences between the third generation and the earlier two, the paper adopts a comparative narrative. Taking into account the application, development status, on-site requirements, and engineering limitations of camera robot systems in recent years, the design principles and functional advantages of the third-generation system are discussed, and the corresponding key research directions are proposed.
Citations: 0
Research on Poverty Alleviation Path of Film and Television Design in the Context of Media Convergence
Pub Date: 2021-11-01 | DOI: 10.1109/ICCST53801.2021.00044
Authors: Guobin Peng, Xudong Pi, Jiajia Zhang
Abstract: As a cause shared by the whole of society, poverty alleviation work rests on exploring diversified poverty alleviation paths within the overall project. Media convergence combines the strengths of traditional and new media, providing a breakthrough for poverty alleviation in both media channels and content creation. With the rapid development of science and technology and the popularization of digitalization, the artistic language of film and television design has gradually shown its advantages in the poverty alleviation pattern, improving the transmission efficiency of poverty alleviation information. The layered forms of film and television expand the range of the audience, allowing poverty alleviation information to reach the entire social group through media and video symbols and thus indirectly promoting poverty alleviation.
Citations: 0
Full-Reference Video Quality Assessment Based on Spatiotemporal Visual Sensitivity
Pub Date: 2021-11-01 | DOI: 10.1109/ICCST53801.2021.00071
Authors: Huiyuan Fu, Da Pan, Ping Shi
Abstract: Video streaming services have become an important business for network service providers, and accurately predicting video perceptual quality scores helps provide high-quality video services. Many video quality assessment (VQA) methods try to simulate the human visual system (HVS) to achieve better performance. In this paper, we propose a full-reference video quality assessment (FR-VQA) method named DeepVQA-FBSA, based on spatiotemporal visual sensitivity. It first uses a convolutional neural network (CNN) to obtain visual sensitivity maps for frames from the input spatiotemporal information. The visual sensitivity maps are then used to obtain the perceptual features of each frame, which we call frame-level features. The frame-level features are fed into a Feature-Based Self-Attention (FBSA) module, fused into video-level features, and used to predict the video quality score. Experimental results show that the predictions of our method are highly consistent with subjective evaluation results.
Citations: 2
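The abstract above outlines the DeepVQA-FBSA pipeline only at a high level, so the following is a minimal, hedged sketch of that kind of architecture in PyTorch. The layer sizes, the two-channel (frame plus frame-difference) input, the error-map weighting, and the pooling choices are all assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: sensitivity-map CNN -> frame-level features -> self-attention fusion -> score.
import torch
import torch.nn as nn

class SensitivityCNN(nn.Module):
    """Predicts a single-channel visual sensitivity map from a 2-channel input
    (distorted frame luminance + temporal frame difference)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # sensitivity in [0, 1]
        )
    def forward(self, x):
        return self.net(x)

class FBSA(nn.Module):
    """Feature-based self-attention over frame-level features: attends across the
    temporal axis, then pools into one video-level feature and regresses a score."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.score = nn.Linear(dim, 1)
    def forward(self, frame_feats):               # (B, T, dim)
        fused, _ = self.attn(frame_feats, frame_feats, frame_feats)
        video_feat = fused.mean(dim=1)            # (B, dim)
        return self.score(video_feat).squeeze(-1)

def frame_level_features(ref, dist, diff, cnn):
    """Weight the reference/distorted error map by the predicted sensitivity map,
    then pool into a fixed-size frame feature (adaptive pooling is an assumption)."""
    sens = cnn(torch.cat([dist, diff], dim=1))    # (B*T, 1, H, W)
    err = (ref - dist) ** 2
    weighted = sens * err
    pooled = nn.functional.adaptive_avg_pool2d(weighted, (8, 8))
    return pooled.flatten(1)                      # (B*T, 64)

if __name__ == "__main__":
    B, T, H, W = 2, 5, 64, 64
    ref = torch.rand(B * T, 1, H, W)
    dist = torch.rand(B * T, 1, H, W)
    diff = torch.rand(B * T, 1, H, W)
    cnn, fbsa = SensitivityCNN(), FBSA(dim=64)
    feats = frame_level_features(ref, dist, diff, cnn).view(B, T, -1)
    print(fbsa(feats).shape)  # per-video quality scores: torch.Size([2])
```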
Movie Scene Argument Extraction with Trigger Action Information
Pub Date: 2021-11-01 | DOI: 10.1109/ICCST53801.2021.00103
Authors: Qian Yi, Guixuan Zhang, Jie Liu, Shuwu Zhang
Abstract: Movie scene arguments are an essential part of a movie scene, and extracting them helps in understanding the movie plot. In this paper we propose a movie scene argument extraction model that uses the trigger action paraphrase as extra information to improve argument extraction. Specifically, we obtain the paraphrase of the trigger from a dictionary and employ an attention mechanism to encode it into an argument-oriented embedding vector. We then use the argument-oriented embedding vector together with the instance embedding for argument extraction. Experimental results on a movie scene event extraction dataset and a widely used open-domain event extraction dataset demonstrate the effectiveness of our model.
Citations: 1
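As a rough illustration of the attention step described above, the sketch below encodes the trigger's paraphrase tokens into an argument-oriented vector and concatenates it with the instance embedding for role classification. The encoder, dictionary lookup, dimensions, and label set are placeholders, not the authors' implementation.

```python
# Hedged sketch: attention over trigger-paraphrase tokens, queried by the candidate argument.
import torch
import torch.nn as nn

class ParaphraseAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)

    def forward(self, arg_repr, para_tokens):
        # arg_repr: (B, dim) candidate-argument representation
        # para_tokens: (B, L, dim) embeddings of the trigger's dictionary paraphrase
        scores = torch.einsum("bd,bld->bl", self.query(arg_repr), para_tokens)
        weights = scores.softmax(dim=-1)                          # attention over paraphrase tokens
        return torch.einsum("bl,bld->bd", weights, para_tokens)   # argument-oriented embedding

class ArgumentRoleClassifier(nn.Module):
    def __init__(self, dim=256, num_roles=8):
        super().__init__()
        self.para_attn = ParaphraseAttention(dim)
        self.classifier = nn.Linear(2 * dim, num_roles)

    def forward(self, instance_emb, arg_repr, para_tokens):
        arg_oriented = self.para_attn(arg_repr, para_tokens)
        return self.classifier(torch.cat([instance_emb, arg_oriented], dim=-1))

if __name__ == "__main__":
    B, L, dim = 4, 12, 256
    model = ArgumentRoleClassifier(dim=dim, num_roles=8)
    logits = model(torch.rand(B, dim), torch.rand(B, dim), torch.rand(B, L, dim))
    print(logits.shape)  # torch.Size([4, 8]) role logits per candidate argument
```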
Named Entity Recognition of traditional architectural text based on BERT
Pub Date: 2021-11-01 | DOI: 10.1109/ICCST53801.2021.00047
Authors: Yifu Li, Wenjun Hou, Bing Bai
Abstract: Traditional architecture is an important carrier of traditional culture. With deep learning models, relevant entities can be automatically extracted from unstructured text to provide data support for the protection and inheritance of traditional architecture, but text information extraction in this field has not been studied effectively. In this paper, a dataset of nearly 50,000 words in the field is collected, organized, and annotated; five types of entity labels are defined; annotation specifications are clarified; and a Named Entity Recognition method based on a pre-trained model is proposed. The BERT (Bidirectional Encoder Representations from Transformers) pre-trained model captures dynamic word vector information, a Bidirectional Long Short-Term Memory (BiLSTM) module captures contextual information in both the forward and backward directions, and a Conditional Random Field (CRF) module completes the mapping to labels. Experiments show that, compared with other models, the proposed BERT-BiLSTM-CRF model achieves better recognition in this field, with an F1 score of 95.45%.
Citations: 0
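The BERT-BiLSTM-CRF stack named above is a standard tagging architecture; the following is a minimal sketch of it in PyTorch. The "bert-base-chinese" checkpoint, hidden sizes, and label count are assumptions (the abstract does not give the paper's configuration), and the `transformers` and `pytorch-crf` packages are required.

```python
# Hedged sketch of a BERT-BiLSTM-CRF sequence tagger.
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast
from torchcrf import CRF

class BertBiLstmCrf(nn.Module):
    def __init__(self, num_labels, bert_name="bert-base-chinese", lstm_hidden=128):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)        # dynamic, contextual token vectors
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)  # forward + backward context
        self.emit = nn.Linear(2 * lstm_hidden, num_labels)      # per-token label scores
        self.crf = CRF(num_labels, batch_first=True)            # label-transition constraints

    def forward(self, input_ids, attention_mask, labels=None):
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        hidden, _ = self.lstm(hidden)
        emissions = self.emit(hidden)
        mask = attention_mask.bool()
        if labels is not None:                                   # training: negative log-likelihood
            return -self.crf(emissions, labels, mask=mask)
        return self.crf.decode(emissions, mask=mask)             # inference: best label sequences

if __name__ == "__main__":
    tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
    model = BertBiLstmCrf(num_labels=11)  # e.g., 5 entity types in a BIO scheme plus "O"
    batch = tokenizer(["太和殿位于北京故宫。"], return_tensors="pt")
    print(model(batch["input_ids"], batch["attention_mask"]))  # predicted label-id sequence
```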
Lightweight Image Super-Resolution via Dual Feature Aggregation Network
Pub Date: 2021-11-01 | DOI: 10.1109/ICCST53801.2021.00104
Authors: Shang Li, Guixuan Zhang, Zhengxiong Luo, Jie Liu, Zhi Zeng, Shuwu Zhang
Abstract: With the power of deep learning, super-resolution (SR) methods enjoy a dramatic boost in performance. However, they usually have large model sizes and high computational complexity, which hinders their application on devices with limited memory and computing power. Some lightweight SR methods address this by directly designing shallower architectures, but this degrades SR performance. In this paper, we propose a dual feature aggregation strategy (DFA). It enhances feature utilization via feature reuse, which largely improves representation ability while introducing only marginal computational cost, so a smaller model can achieve better cost-effectiveness with DFA. Specifically, DFA consists of local and global feature aggregation modules (LAM and GAM), which work together to adaptively fuse hierarchical features along the channel and spatial dimensions. Extensive experiments suggest that the proposed network performs favorably against state-of-the-art SR methods in terms of visual quality, memory footprint, and computational complexity.
Citations: 0
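The abstract does not specify the internals of LAM and GAM, so the sketch below only illustrates the general idea under stated assumptions: hierarchical block outputs are reused by concatenation, then re-weighted adaptively along the channel dimension (SE-style gating) and the spatial dimension (a learned mask). All module names and sizes here are illustrative, not the paper's design.

```python
# Hedged sketch of dual (channel + spatial) aggregation of reused hierarchical features.
import torch
import torch.nn as nn

class ChannelAggregation(nn.Module):
    """Fuse concatenated hierarchical features along the channel dimension (SE-style gating)."""
    def __init__(self, in_ch, out_ch, reduction=4):
        super().__init__()
        self.fuse = nn.Conv2d(in_ch, out_ch, 1)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(out_ch // reduction, out_ch, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        x = self.fuse(x)
        return x * self.gate(x)

class SpatialAggregation(nn.Module):
    """Re-weight the fused features along the spatial dimension with a single-channel mask."""
    def __init__(self, ch):
        super().__init__()
        self.mask = nn.Sequential(nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, x):
        return x * self.mask(x)

class TinySRNet(nn.Module):
    def __init__(self, ch=32, blocks=4, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.body = nn.ModuleList([
            nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(ch, ch, 3, padding=1)) for _ in range(blocks)])
        self.channel_agg = ChannelAggregation(ch * blocks, ch)
        self.spatial_agg = SpatialAggregation(ch)
        self.tail = nn.Sequential(nn.Conv2d(ch, 3 * scale * scale, 3, padding=1),
                                  nn.PixelShuffle(scale))
    def forward(self, x):
        feat = self.head(x)
        hierarchical = []
        for block in self.body:
            feat = block(feat) + feat            # residual block
            hierarchical.append(feat)            # feature reuse across depth
        fused = self.channel_agg(torch.cat(hierarchical, dim=1))
        fused = self.spatial_agg(fused) + self.head(x)
        return self.tail(fused)

if __name__ == "__main__":
    lr = torch.rand(1, 3, 48, 48)
    print(TinySRNet()(lr).shape)  # torch.Size([1, 3, 96, 96])
```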
Research on the Development and Application of Virtual Reality Simulation Technology in Investigation and Research Courses
Pub Date: 2021-11-01 | DOI: 10.1109/ICCST53801.2021.00028
Authors: Xing Fang, K. Un, Xi Zhang
Abstract: The purpose of this study is to develop investigation and research courses, improve design education methods, and adapt to the rapid changes of the platform era. To improve design education in investigation and research courses, a virtual reality simulation experiment based on head-tracking VR is used to display investigation results, so that users can investigate subjects presented by a variety of NPCs and apply the findings to design. The core elements include user virtual realistic control, a virtual AI system, and research information collection. Although the simulated content is simple, the experience gained through the simulation has a positive impact on the design education of the investigation team.
Citations: 0
A Survey upon College Students' Consumption Behavior on Short Video Platforms
Pub Date: 2021-11-01 | DOI: 10.1109/ICCST53801.2021.00133
Authors: Caiwen Zhao, Gu Wang, Guowei Li, Li Ding
Abstract: The rapid development of the short video industry has not only offered individuals new ways of entertainment and social contact but also created a new online consumption mode. However, consumption on short video platforms involves a series of problems, such as following general consumption trends, blind consumption, lax quality management of some products, and difficulty in protecting consumers' rights and interests. In this paper, a questionnaire survey was employed to investigate and analyze college students' usage and consumption behavior on short video platforms. It is found that (1) the content and style of short videos, (2) the personal charisma of the vlogger, (3) the user's personal preference, and (4) the platform's purchase mode are the main factors influencing college students' consumption. The results provide a reference for short video platforms seeking to improve user stickiness.
Citations: 0
A Robust video watermarking approach based on QR code
Pub Date: 2021-11-01 | DOI: 10.1109/ICCST53801.2021.00079
Authors: Zhuojie Gao, Zhixian Niu, Baoning Niu, Hu Guan, Ying Huang, Shuwu Zhang
Abstract: Video watermarking embeds a copyright mark, called a watermark, in video frames to prove ownership of the video copyright. Compared with the more mature image watermarking algorithms, video watermarking algorithms require higher robustness. This paper encodes the watermark into a QR code and makes full use of the QR code's high fault tolerance, proposing a watermark generation and decoding strategy based on the characteristics of QR codes, which improves the robustness of the watermarking algorithm. Experimental results show that the algorithm is more robust than algorithms using a random binary string or a scrambled QR code as the watermark.
Citations: 0
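To make the fault-tolerance idea concrete, here is a small sketch of only the QR step: the copyright string is encoded as a QR matrix with level-H error correction, and a partially corrupted matrix can often still be decoded back to the original text. The frame-domain embedding and extraction are not shown, and the use of the `qrcode` and `pyzbar` libraries, the binarization, and the noise level are assumptions for illustration.

```python
# Hedged sketch: watermark string <-> binary QR matrix with level-H error correction.
import numpy as np
import qrcode
from PIL import Image
from pyzbar.pyzbar import decode

def watermark_to_matrix(text: str) -> np.ndarray:
    """Encode the watermark string as a binary QR matrix (highest error-correction level)."""
    qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_H, box_size=1, border=4)
    qr.add_data(text)
    qr.make(fit=True)
    return np.array(qr.get_matrix(), dtype=np.uint8)   # 1 = dark module, 0 = light module

def matrix_to_watermark(matrix: np.ndarray, scale: int = 8) -> str:
    """Decode a (possibly noisy) binary QR matrix back to the watermark string."""
    img = Image.fromarray((1 - matrix) * 255).convert("L").resize(
        (matrix.shape[1] * scale, matrix.shape[0] * scale), Image.NEAREST)
    results = decode(img)
    return results[0].data.decode("utf-8") if results else ""

if __name__ == "__main__":
    m = watermark_to_matrix("Copyright (c) 2021 ICCST demo")
    noisy = m.copy()
    rng = np.random.default_rng(0)
    flips = rng.integers(0, m.size, size=m.size // 50)   # flip roughly 2% of modules
    noisy.flat[flips] ^= 1
    print(matrix_to_watermark(noisy))  # level-H correction can usually still recover the string
```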
Calculation and simulation of loudspeaker power based on cultural complex
Pub Date: 2021-11-01 | DOI: 10.1109/ICCST53801.2021.00109
Authors: Zhen Li, Pengyang Ji, Lifeng Wu, Hui Ren, Yuqing Chen
Abstract: How to calculate loudspeaker power in the sound reinforcement system of a cultural complex has seldom been studied. This paper analyzes the calculation methods proposed by domestic and foreign scholars and, on that basis, puts forward an improved method that calculates the total loudspeaker power according to the space volume of the hall (LPSV). Using the existing algorithms and the proposed LPSV method, the loudspeaker power in the cultural complex is calculated; with LPSV, the total loudspeaker power required in a hall can be obtained directly from the hall volume. A 3D model of the hall is built in EASE to calculate and simulate sound reinforcement parameters such as sound pressure level, articulation loss, and rapid speech transmission index. In terms of maximum sound pressure level, sound field nonuniformity, and transmission frequency characteristics, the results meet the "Design specification of hall sound amplification system" (GB50371-2006). This study provides a theoretical basis for loudspeaker configuration in the sound reinforcement systems of cultural complexes and has great application value.
Citations: 0
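The abstract does not reproduce the LPSV formula itself, so the sketch below uses only the conventional direct-field relation between loudspeaker sensitivity, listening distance, and required electrical power, SPL(r) = sensitivity(1 W, 1 m) + 10·log10(P) - 20·log10(r); the numbers are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: electrical power needed to reach a target SPL at a given distance.
import math

def required_power(target_spl_db: float, sensitivity_db: float,
                   distance_m: float, headroom_db: float = 6.0) -> float:
    """Power (W) for one loudspeaker to reach target_spl_db at distance_m,
    with headroom_db reserved for programme peaks (direct field only)."""
    gain_needed = target_spl_db + headroom_db - sensitivity_db + 20 * math.log10(distance_m)
    return 10 ** (gain_needed / 10)

if __name__ == "__main__":
    # Illustrative hall: 95 dB target SPL at 10 m from a 98 dB (1 W / 1 m) loudspeaker.
    p = required_power(target_spl_db=95, sensitivity_db=98, distance_m=10)
    print(f"{p:.0f} W per loudspeaker")  # about 200 W including the 6 dB headroom
```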