{"title":"Design and Realization of Multiple_Platform Digital Museum Interaction System Based on VR&AR","authors":"Xinzhe Zhang, Huaqun Liu, Shijie Wang","doi":"10.1145/3439133.3439136","DOIUrl":"https://doi.org/10.1145/3439133.3439136","url":null,"abstract":"The museum is a place for collecting, storing, displaying and researching physical objects representing natural and human cultural heritage, and rationally classifying them, providing visitors with learning, education, and entertainment services. Providing high-quality information services is the core value of the museum. The museum invisibly reflects the historical background of the cultural relics displayed and the cultural connotation of the displayed area. It cannot be classified, arranged, and juxtaposed in a single manner, but should be the expression of internal cultural spirit. Create a vivid, contextual, and convenient online digital museum content ecology and information service framework through the combination of museums and VR technology. With the help of realistic content scenes and excellent modeling tools and ecosystems, it provides distinctions for museum users Offline, with a richer content ecosystem and a more convenient content ecosystem, will greatly enhance the competitiveness of digital museums and increase the number of museum users. This article takes \"Hunan Changsha Museum\" as the research object, researches education for ordinary people, focuses on getting cultural relics out of the museum, and comprehensively utilizes cultural relics resources. Let the cultural heritage be deeply rooted in the hearts of the people through a variety of expressions, use Internet technology, big data processing, cloud data integration, high-fidelity modeling technology, and use cultural relics and the history behind them as content to build PC terminals, mobile terminals, and integrated display machines, A cross-platform and multi-dimensional intelligent interaction platform for paper books. Let the collection of cultural relics go out of the exhibition hall and into the lives of the people.","PeriodicalId":291985,"journal":{"name":"2020 4th International Conference on Artificial Intelligence and Virtual Reality","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121177220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and Implementation of Realistic Rendering and Immersive Experience System Based on Unreal Engine4","authors":"Shijie Wang, Huaqun Liu, Xinzhe Zhang","doi":"10.1145/3439133.3439138","DOIUrl":"https://doi.org/10.1145/3439133.3439138","url":null,"abstract":"This article uses Unreal Engine 4 as the production platform to develop an interactive system that promotes traditional Chinese engraving and printing techniques, in order to show more possibilities of virtual reality technology in the protection and dissemination of Chinese intangible cultural heritage[1]. By analyzing the current user needs of virtual reality technology in the protection of intangible cultural heritage, a systematic and efficient design method is proposed: according to the characteristics of the Unreal Engine 4, the necessary comprehensive optimization of the scene model is carried out to enhance the realism of the virtual picture to a certain extent; Editing dynamic interactive scripts through blueprint events to achieve two roaming modes and interactive behaviors in the scene, presenting the various process details of engraving and printing techniques in ancient towns and meeting the actual needs of interactive motion perception, and can also effectively promote the virtual reality experience Authenticity enhances the user's immersive experience.","PeriodicalId":291985,"journal":{"name":"2020 4th International Conference on Artificial Intelligence and Virtual Reality","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114660074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Genesis Net: Fine Tuned Dense Net Configuration using Reinforcement Learning","authors":"Praveen Thachappully Adithya, R. Muthalagu, Sapna Sadhwani","doi":"10.1145/3439133.3439139","DOIUrl":"https://doi.org/10.1145/3439133.3439139","url":null,"abstract":"Designing neural networks even in the case of relatively simpler fully connected neural networks / dense networks is a time-consuming process since the architecture design is done manually based on intuition and manual tweaking. In this paper, we present “Genesis Net”, a dense net that starts off with a very basic configuration (“seed configuration”), and subsequently tweaks itself via reinforcement learning (RL) to arrive at an optimal configuration for the task at hand. Genesis Net attained a test error within 0.59% of a similar but bigger documented baseline model. Furthermore, our model was able to achieve this using merely 10.11% of trainable weights that the baseline model used. This significantly smaller network was found using Q-Learning combined with a dynamic action space that allowed for fine tuning the network configuration.","PeriodicalId":291985,"journal":{"name":"2020 4th International Conference on Artificial Intelligence and Virtual Reality","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121572348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Enlightenment of \"AR / VR\" Technical University Course Education in Taiwan, China","authors":"Xiaoyu Guo, Xingnan Chen, Xiaoqi Feng, Shijue Zheng","doi":"10.1145/3439133.3439146","DOIUrl":"https://doi.org/10.1145/3439133.3439146","url":null,"abstract":"The development of computers and information technology, especially the emergence of the Internet, big data, artificial intelligence, virtual reality, and mental enhancement, has created a new field of educational technology and entered a new chapter in the development of educational information. Augmented reality technology will be the most promising technology in the field of education after multimedia and computer networks. The most common teaching application of AR/VR technology in mainland China is to integrate digital learning resources and carry out relevant theoretical and experimental courses in multiple universities; China's Taiwan AR/VR technology mainly focuses on image processing related fields and education, and combines multiple teaching methods. This article summarizes and compares the application of augmented reality technology in university curriculum education on both sides of the Taiwan Straits from mainland China and Taiwan, and draws some enlightenment.","PeriodicalId":291985,"journal":{"name":"2020 4th International Conference on Artificial Intelligence and Virtual Reality","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130006346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User Experience Research of Queuing System based in Chinese Smart Bank Branches","authors":"Yongkang Xing, X. Cai, Kun Li, Qiaoqian Lu, Xiaoyan Wang, Jian Ruan","doi":"10.1145/3439133.3439143","DOIUrl":"https://doi.org/10.1145/3439133.3439143","url":null,"abstract":"With the development of Internet finance, customer behavior and consumption habits have transformed and brought new challenges to commercial banks' traditional branches in China. Bank branches face the pressure of rising operating costs, declining profits, and being gradually replaced by online services. Under the smart economic trend, bank branches are facing significant transforming and upgrading. Branches will gradually transform their roles from traditional business transaction centers to product marketing and customer experience centers. The intelligent upgrading and transformation of branches play an essential role in accelerating the process. In this paper, our research focuses on user experience in the smart branch and adjust with the smart society development trend. We study smart diversions' pre-processing at smart branches, establish smart queuing models, and mobile appointment systems to control the queue. We compare the smart model with the traditional under the bank's virtual simulation system. Use customer waiting time for service and employee work intensity as essential indicators of the queuing model. The simulation results show that the branch uses intelligent distribution is much more efficient than the traditional one. Customer experience and service efficiency are greatly improved.","PeriodicalId":291985,"journal":{"name":"2020 4th International Conference on Artificial Intelligence and Virtual Reality","volume":"19 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123211995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning Application in Alzheimer Disease Diagnoses and Prediction","authors":"Tao Jiang","doi":"10.1145/3439133.3439144","DOIUrl":"https://doi.org/10.1145/3439133.3439144","url":null,"abstract":"The field of Alzheimer's Disease classification and prediction recently gains more and more attention. Traditionally, the method treating such a problem was using the combination of traditional machine learning algorithm and deep learning algorithm. But there are constant efforts made by researchers to search for the potential of only using the deep learning method. This article compares the pros and cons of using deep learning method in the field of Alzheimer's Disease. Even though traditional machine learning method may perform slightly better right now, with the accumulation of data, it is totally possible to only use the deep learning method to make the Alzheimer's Disease classification and prediction.","PeriodicalId":291985,"journal":{"name":"2020 4th International Conference on Artificial Intelligence and Virtual Reality","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113939419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modular AR Framework for Vision-Language Tasks","authors":"Robin Fischer, Tzu-Hsuan Weng, L. Fu","doi":"10.1145/3439133.3439142","DOIUrl":"https://doi.org/10.1145/3439133.3439142","url":null,"abstract":"Mixed / augmented reality systems have become more and more sophisticated in recent years. However, they still lack any ability to reason about the surrounding world. On the other hand, computer vision research has made many advancements towards a more human-like reasoning process. This paper aims to bridge these 2 research areas by implementing a modular framework which interconnects an AR application with a deep learning based vision model. Finally, a few potential use cases of the proposed system are showcased. The developed framework allows the application to utilize a variety of Vision-Language (V+L) models, to gain additional understanding about the surrounding environment. The system is designed to be modular and expandable. It is able to connect any number of Python processes of the V+L models to Unity apps using AR technology. The system was evaluated in our university's smart home lab based on daily life use cases. With a further extension of the framework by additional downstream tasks provided by V+L models and other computer vision systems, this framework should find wider adoption in AR applications. The increasing ability of applications to comprehend visual common sense and natural conversations would enable more intuitive interactions with the user, who could perceive his device more as a virtual assistant and companion.","PeriodicalId":291985,"journal":{"name":"2020 4th International Conference on Artificial Intelligence and Virtual Reality","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122930898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Discussion on Practical Teaching Mode from the Perspective of Virtual Reality","authors":"Jie Liang","doi":"10.1145/3439133.3439137","DOIUrl":"https://doi.org/10.1145/3439133.3439137","url":null,"abstract":"With the maturity of hardware and software of virtual reality and the decline in equipment costs, the wide application of virtual reality technology has become possible, and it has become the focus of discussion in education and teaching field. Simultaneously, it also brings new opportunities and challenges to current practical teaching in higher education. For universities, it is important and challenging to not only provide students with an imaginative, exploring and authentic practice environment, but also build positive, open and free space-time space for students, which is difficult to achieve in traditional practice teaching. This is an exploratory study on how to effectively make full use of the teaching advantages of virtual reality technology in practical teaching, in order to break through the limitation of current practical teaching. The researcher gives a general overview of virtual reality, discusses the feasibility of integrating virtual reality into practical teaching from the perspective of intentions and methods, puts forward the practical teaching mode of “virtual reality integration”, and discusses several possible problems and effective countermeasures to optimize and promote the application of the practical teaching mode.","PeriodicalId":291985,"journal":{"name":"2020 4th International Conference on Artificial Intelligence and Virtual Reality","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131134719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robot Path Planning Algorithm Based on Particle Swarm Optimization and Feedforward Neural Network in Network Environment","authors":"Shiwei Li","doi":"10.1145/3439133.3439145","DOIUrl":"https://doi.org/10.1145/3439133.3439145","url":null,"abstract":"The main task of mobile robot path planning is: according to the environment model, the mobile robot is based on one or some optimization criteria: such as the minimum work cost, the shortest walking route, the shortest walking time, etc., to find a path in the motion space that does not occur with obstacles. Under the premise of collision, the collision-free path from the starting coordinate point to the target coordinate point allows the robot to reach the destination safely. At present, the path planning methods of mobile robots can be roughly divided into three categories according to their working methods. The first is path planning based on environmental models. It can handle path planning under the condition of fully known obstacle positions and shapes. In the environment, the path planning method based on the environment model will not be applicable. Specific methods such as A * [1], topological graph method [2], etc .; second is local path planning method based on sensor information, typical methods are: artificial potential field method [3], fuzzy logic method [4], etc .; third Behavior-based path planning method [5], which decomposes the navigation problem into independent modules such as collision avoidance and target guidance [6]. Practice shows that it is an effective method to apply neural network to automatic generation of robot trajectory and path planning of mobile robot.","PeriodicalId":291985,"journal":{"name":"2020 4th International Conference on Artificial Intelligence and Virtual Reality","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117071709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Workload Estimation for Software House","authors":"Orawat Yodnual, Wanus Srimaharaj, R. Chaisricharoen, Kanchit Pamanee","doi":"10.1145/3439133.3439135","DOIUrl":"https://doi.org/10.1145/3439133.3439135","url":null,"abstract":"Normally, organizations have to estimate the workload relying on limited resources. An appropriate estimation method can improve workforce optimization. In the software house, workload categorization and estimation can be acquired from the information technology management. Nevertheless, there are several factors such as work priority and specific goals that affect the workload level. General workload management spends a long time and decreases task management quality. Therefore, this study applies machine learning, Naïve Bayes, to estimate the workload automatically. This classification method increases the accuracy of workload estimation, along with reducing the time consumption for the whole system.","PeriodicalId":291985,"journal":{"name":"2020 4th International Conference on Artificial Intelligence and Virtual Reality","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134638807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}