{"title":"An image selection method for image representation of tourism destination based on comment text and image data","authors":"Xiaojia Huang, Yong Yang, Yezhou Yang, Chen Wang, Liang Guo","doi":"10.1109/CoST57098.2022.00012","DOIUrl":"https://doi.org/10.1109/CoST57098.2022.00012","url":null,"abstract":"One of the challenges faced by Diffused Metal-Oxide Semiconductor (DMOs) is how to track the behavior of tourists and provide more comfortable experience for tourists. Nowadays, multi-source tourism big data provides many available information for improving tourists’ experience. For management organizations, in order to achieve better publicity effect, how to choose the appropriate image as the representative of the destination image has become a problem. Based on the review text and image data, this paper proposes a method, Scale-invariant feature transform KMeans (SIFT-KMeans) of selecting the representative image of tourism destination. This method uses the text and image data generated by tourists to carry out a series of analysis and processing, and then feeds back the results to tourists, so as to reflect the greatest interest of tourists. The accuracy and stability of this method is wonderful, and the change of destination image can be reflected through the change of time. The destination management organization can carry out corresponding construction and publicity based on the obtained results.","PeriodicalId":135595,"journal":{"name":"2022 International Conference on Culture-Oriented Science and Technology (CoST)","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117250162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improved TPX based IAGA for solving hybrid flow-shop scheduling problem with identical parallel machine","authors":"Zhu Chang-jian, Zheng Kun, Lian Zhi-Wei, Xu Hui, Feng Xue-Qing, Gu Xin-Yan","doi":"10.1109/cost57098.2022.00087","DOIUrl":"https://doi.org/10.1109/cost57098.2022.00087","url":null,"abstract":"The hormone regulation adaptive genetic algorithm based on improved two-point crossover (ITPX) is investigated and applied to a hybrid flow shop scheduling problem with identical parallel machines. Firstly, the hormone regulation mechanism is used to improve the parameter settings of different operators in the genetic algorithm to make it have adaptive regulation capability. Secondly, according to the problems of high redundancy and low efficiency of the traditional two-point crossover (TPX) operation, an exact point taking method is proposed to improve the exploration performance of the TPX operator, while multiple perturbation operations are designed to maintain the diversity characteristics of the variants. Finally, the improved algorithm is tested on the hybrid flow-shop scheduling problem with identical parallel machine. The test results show that the improved algorithm has an average percent deviation of 0.86% in solving the simple problem and 2.79 % in solving the complex problem, both of which are better than the comparable algorithms, verifying the effectiveness of the proposed algorithm.","PeriodicalId":135595,"journal":{"name":"2022 International Conference on Culture-Oriented Science and Technology (CoST)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115028083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on the Development and Practice of Digital Technology in Architectural Heritage","authors":"Jie Wang, Chang Lu","doi":"10.1109/cost57098.2022.00039","DOIUrl":"https://doi.org/10.1109/cost57098.2022.00039","url":null,"abstract":"In the protection and development of architectural heritage, in addition to the traditional protection means, digital technology is even more imperative. The digitization process of cultural heritage has developed by leaps and bounds in this century. The application of digital technology in the protection and inheritance of architectural heritage has made remarkable achievements, which has brought significant changes to the protection, dissemination and development mode of architectural heritage. With the continuous improvement of the digitization of cultural heritage, we should actively carry out the digital protection of architectural heritage, and always pay attention to the development and application of digital technology, and constantly explore and practice the new forms of digital technology in the field of architectural heritage protection. In this paper, CiteSpace software is used to visualize and compare the keywords of relevant literature at home and abroad, so as to explore the research direction and research hotspot, analyze the existing research status, and put forward the limitations of the research, and introduce the application and development of a variety of high-tech means in the field of architectural heritage protection. Digital technology as an indispensable and important technical means in the field of architectural cultural heritage protection, which has attracted numerous experts and scholars to engage in research.","PeriodicalId":135595,"journal":{"name":"2022 International Conference on Culture-Oriented Science and Technology (CoST)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126403910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine Learning Based Personalized Movie Research and Implementation of Recommendation System","authors":"Xianting Feng, Jianming Hu, Xin Zhu","doi":"10.1109/cost57098.2022.00025","DOIUrl":"https://doi.org/10.1109/cost57098.2022.00025","url":null,"abstract":"With the development of the Internet industry, the information age presents a trend of “information overload”, and people's efficiency in extracting effective information is getting lower and lower. In order to relieve people's browsing pressure, this paper implements a collaborative filtering algorithm based on machine learning for the movie recommendation, citing the principle of personalized recommendation system proposed by Robert Armstrong and others in the United States in 1995. First, the rating data is preprocessed and visualized in consideration of the user's real behavior. Then implement the algorithm mentioned above, and use the test indicators to measure the performance of the recommender system and optimize the system parameters. Finally, using software engineering and Java front-end knowledge based on Spring+SpringMVC+Mybaits (SSM) to conduct demand analysis, functional analysis, non-functional analysis and establish a database. At last, use java database connectivity (JDBC) to link the database Mysql, and finally realized a movie recommender system with basic functions.","PeriodicalId":135595,"journal":{"name":"2022 International Conference on Culture-Oriented Science and Technology (CoST)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131797876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A QoE Prediction Model Combining Network Parameters and Video Quality","authors":"Jinfan Zhao, Shufeng Li, Feng Hu","doi":"10.1109/CoST57098.2022.00016","DOIUrl":"https://doi.org/10.1109/CoST57098.2022.00016","url":null,"abstract":"The advent of the 5G era and the theater performing arts market woes caused by Corona Virus Disease 2019 (COVID- 2019) epidemic have accelerated the emergence and growth of the cloud performing arts business. To improve the quality of service for cloud performing arts and live performances, it is critical to develop a predictive model that accurately and timely reflects the Quality of Experience (QoE). In this paper, we first filter three of the seven recognized application layer Quality of Service (QoS) parameters that represent the input network quality in this QoE prediction model. Then one of the four different video quality evaluation methods is selected as the most effective method to represent the video quality. The purpose of combining network quality and video quality is to build a more accurate and effective QoE prediction model.","PeriodicalId":135595,"journal":{"name":"2022 International Conference on Culture-Oriented Science and Technology (CoST)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129042479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transformer-based Multimodal Contextual Co-encoding for Humour Detection","authors":"Boya Deng, Jiayin Tian, Hao Li","doi":"10.1109/CoST57098.2022.00067","DOIUrl":"https://doi.org/10.1109/CoST57098.2022.00067","url":null,"abstract":"Humor, a unique expression of the human language system different from other emotions, plays a very important role in human communication. Previous works on humor detection have been mostly limited to a single textual modality. From the perspective of human humor perception, various aspects such as text, intonation, mannerisms, and body language can convey humor. From the perspective of the structure of jokes, any combination of textual, acoustic, and visual modalities in various positions in the context can form unexpected humor. Therefore, information that exists among multiple modalities and contexts should be considered simultaneously in humor detection. This paper proposes a humor detection model based on the transformer and contextual co-encoding called Transformer-based Multimodal Contextual Co-encoding (TMCC). The model uses the transformer-based multi-head attention to capture potential information across modalities and contexts first. Then, it uses a convolutional autoencoder to further fuse the overall feature matrix and reduce dimensionality. Finally, a simple multilayer perceptron is used to predict the humor labels. By comparing with common baselines of humor detection, it is demonstrated that our model achieves some performance improvement. The availability of each part of the model is demonstrated through a series of ablation studies.","PeriodicalId":135595,"journal":{"name":"2022 International Conference on Culture-Oriented Science and Technology (CoST)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123411218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Character Feature Extraction for Novels Based on Text Analysis","authors":"Tingting Wu, Jianming Hu, Xin Zhu","doi":"10.1109/cost57098.2022.00089","DOIUrl":"https://doi.org/10.1109/cost57098.2022.00089","url":null,"abstract":"Research on automatic character analysis of novels can help to achieve automatic Q & A with fictional characters. In this paper, a corpus containing 1435 novel texts was constructed with Chinese martial arts novel characters as the research object, and a total of 57026 characters were extracted. The character vectors were generated by Skip-gram model training, and the effect of applying the character vectors was explored. Similarity calculation and K-means clustering were performed on the persona vectors, and the experimental results showed that people from the same author usually have similarity. The gender classification prediction was performed using logistic regression and support vector machine for the persona vectors respectively, and the experimental results showed that both classification algorithms could predict the gender of the new sample characters well.","PeriodicalId":135595,"journal":{"name":"2022 International Conference on Culture-Oriented Science and Technology (CoST)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124839688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recommendation of Clip Templates Based on Cross-Modal Retrieval","authors":"Zhiyi Zhu, Xiaoyu Wu, Xueting Yang, Kai Zhang, Haoyi Yu, Xiangshan Chen","doi":"10.1109/cost57098.2022.00071","DOIUrl":"https://doi.org/10.1109/cost57098.2022.00071","url":null,"abstract":"Nowadays, the use of video editing software has increased dramatically. However, there is a problem of insufficient intelligence in the recommendation of clip templates in these software. Therefore, this paper addresses this problem and devotes to combining machine learning algorithms and deep learning to achieve optimization of video clip template recommendation, and proposes the design of a clip template recommendation system based on cross-modal retrieval technology. Firstly, the Requests module is used to crawl some data from Baidu images and NetEase cloud music websites and store them persistently as components of user templates to make the templates diverse and meet the needs of more users. Secondly, the algorithm network construction based on PyTorch framework was completed to realize background replacement and music matching, improve the template matching mechanism for users, and generate videos from images; finally, the Android Studio platform was used to develop the APP for Android system, and the Web server was built to realize the data interaction between the client side and the server side, so that users can easily use the APP to get functional experience.","PeriodicalId":135595,"journal":{"name":"2022 International Conference on Culture-Oriented Science and Technology (CoST)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122592664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PSTNet: Protectable Style Transfer Network Based on Steganography","authors":"Yuliang Xue, Nan Zhong, Zhenxing Qian, Xinpeng Zhang","doi":"10.1109/CoST57098.2022.00021","DOIUrl":"https://doi.org/10.1109/CoST57098.2022.00021","url":null,"abstract":"Neural style transfer (NST) is a technique based on deep learning that preserves the content of an image and converts its style to a target style. In recent years, NST has been widely used to generate new artworks based on existent styles to promote cultural communication. However, there is little research that considers the protection of copyright during the generation of stylised images. To this end, we propose an end-to-end protectable style transfer network based on steganography, called PSTNet. This network, including a pair of encoder and decoder, takes a content image and copyright information as input. The encoder embeds copyright information directly into the input content image and render the content image in a specific style. When the copyright needs to be verified, only the corresponding decoder can extract copyright information correctly. Furthermore, an elaborated designed noise layer is added between the encoder and decoder to improve the robustness of the copyright protection method. Experiments show that the protectable stylised images generated by PSTNet have significant visual effects and the undetectability of copyright information is proved by steganalysis. In addition, our method is robust enough that the copyright of generated stylised images can still be proved even after spreading on real social networks. We hope this work will raise awareness of the protection of artworks created by NST.","PeriodicalId":135595,"journal":{"name":"2022 International Conference on Culture-Oriented Science and Technology (CoST)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125382245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on Visual Cognitive Model Modeling based on Design Psychology","authors":"Tianyu Hua, Shuang Wang, Jingyu Liu, Jian Jiang","doi":"10.1109/CoST57098.2022.00040","DOIUrl":"https://doi.org/10.1109/CoST57098.2022.00040","url":null,"abstract":"This article is based on the theoretical achievements of design psychology, takes graphic poster design as the research object, and combines experimental psychology methods with information processing technology to construct a visual cognitive model that can serve art design. This article firstly starts from the laws of formal beauty in design psychology, and screens the formal beauty features of posters. Secondly, a subjective evaluation experiment for the formal beauty characteristics of the poster materials was conducted, and correlation analysis and factor analysis on the processed experimental data were completed. Finally, this article uses a variety of machine learning algorithms to construct a visual cognitive prediction model. This article summarizes seven low-level and four high-level feature descriptors of the beauty of poster form, and constructs a prediction model from low-level features to high-level features. In addition, this article also quantifies the “Balance” feature which has the highest word frequency in the low-level features, realizing the calculation of the balance degree of the poster image.","PeriodicalId":135595,"journal":{"name":"2022 International Conference on Culture-Oriented Science and Technology (CoST)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117237338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}