Real-Time Human Head Imitation for Humanoid Robots
Dario Cazzato, Claudio Cimarelli, Jose Luis Sanchez-Lopez, M. Olivares-Méndez, H. Voos
DOI: https://doi.org/10.1145/3348488.3348501
Abstract: The ability of robots to imitate human movements has been an active research topic since the dawn of robotics. Realistic imitation is essential to the perceived quality of human-robot interaction, but it remains a challenge due to the lack of an effective mapping between human movements and the degrees of freedom of robotic systems. While high-level programming interfaces, software, and simulation tools have simplified robot programming, a strong gap persists between robot control and natural user interfaces. In this paper, a system that reproduces on a robot the head movements of a user in the field of view of a consumer camera is presented. The system detects the presence of a user and estimates their head pose in real time using a deep neural network, extracting head orientation angles and commanding the robot's head movements accordingly to obtain a realistic imitation. At the same time, the system serves as a natural user interface for controlling the Aldebaran NAO and Pepper humanoid robots with head movements, with applications in human-robot interaction.
Agent-based MR Simulation as a Tool for Combat Search and Rescue Command and Control
Xiang Ji, Hu Liu, Zizhao Wang, Yuge Li, Yongliang Tian, Mingyu Huo
DOI: https://doi.org/10.1145/3348488.3348489
Abstract: Combat search and rescue (CSAR) is a significant part of joint combat operations, and isolated personnel in hostile territory are of great value to both sides. Because of the extremely high cost and massive time consumption of military exercises, virtual reality is a preferred way to conduct such experiments. Mixed reality (MR) allows users to interact with virtual objects as well as real ones; with MR devices, commanders gain better awareness of both the virtual combat and the real environment. Agent-based modelling is a relatively new approach to modelling systems composed of interacting objects: by analyzing the decision-making process of each agent, code is written to drive the simulation. Finally, the whole system is deployed on HoloLens to provide a holographic demonstration.
Virtual Simulation of the Maritime Search Operation for Drowning Crew
Zizhao Wang, Hu Liu, Xiang Ji, Yuge Li, Yongliang Tian, Zikun Chen
DOI: https://doi.org/10.1145/3348488.3348490
Abstract: This paper starts from a concrete analysis of the operation and the setup of environmental conditions. After building the drift model and scripting the operation, visual simulations of different search methods with different types of search aircraft are carried out over a given search range. The aim is to determine the influence of aircraft performance and operation planning on search efficiency, and to provide references for decision-making in real search and rescue operations.
{"title":"Mission Effectiveness Evaluation of Manned/Unmanned Aerial Team based on OODA and Agent-Based Simulation","authors":"Peisen Xiong, Hu Liu, Yongliang Tian","doi":"10.1145/3348488.3348491","DOIUrl":"https://doi.org/10.1145/3348488.3348491","url":null,"abstract":"Coordinated operation of manned and unmanned aerial vehicles is being taken as an important operation mode by military and research institutions in various countries. However, the complex coordination of aircrafts in the fleet has caused great difficulties for the evaluation of mission effectiveness. This paper studies the method to evaluate the mission effectiveness of manned/ unmanned aerial team based on agent-based simulation and proposes an approach called resource consumption analyze to establish evaluation index system. This approach decomposes missions into multiple tasks, and further decomposes the process of tasks based on Observe-Orient-Decide-Act (OODA) to facilitate modeling tasks and developing evaluation indicators. Moreover, this paper builds the behavior models of combat units based on OODA and conducts several simulation experiments to evaluate the mission effectiveness of manned/unmanned aerial team. These experiments show that the evaluation approach based on OODA and agent-based simulation can help to evaluate the mission effectiveness of manned/unmanned aerial team and design more suitable aircraft for future warfare.","PeriodicalId":420290,"journal":{"name":"International Conference on Artificial Intelligence and Virtual Reality","volume":"435 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132453709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of Different Block Sizes on the Quality of JPEG-like Compressed Image","authors":"Chen-Wei Deng, Hongbo Zhang, Yulin Wang","doi":"10.1145/3348488.3355180","DOIUrl":"https://doi.org/10.1145/3348488.3355180","url":null,"abstract":"Discrete Cosine Transform (DCT) is a common method in image compression processing. In this method, the original image is first divided into several blocks for processing, and then the information of each pixel is converted to frequency coefficients, and a certain high frequency component is discarded, thus the image is lossy compressed. In this paper, the effect of block sizes on the quality of decompressed image is studied, so as to find out the optimal block sizes. Firstly, we make a theoretical analysis, and then verify my analysis results through experiment. In the experiment, the block sizes range from 1x1 to 512x512. We use different metrics to evaluate the image quality, including MSE, PSNR and SSIM.","PeriodicalId":420290,"journal":{"name":"International Conference on Artificial Intelligence and Virtual Reality","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132533671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face Recognition System Based on Modified Sparse Representation","authors":"Xudong Yang, Yongna Liu","doi":"10.1145/3348488.3348492","DOIUrl":"https://doi.org/10.1145/3348488.3348492","url":null,"abstract":"This paper proposes a face recognition method based on modified sparse representation. Sparse representation is an advanced data analysis algorithm based on compressive sensing. Traditionally, the sparse representation is performed on the global dictionary formed by all the training classes. Afterwards, the classification is made based on the reconstruction errors. This method did not consider the individual representation capabilities of different classes. So, a modified sparse representation is designed in this study by conducting the sparse representation on the local dictionary formed by each training class. Then, the reconstruction error of each class is computed and compared to determine the label of the test sample. In the experiments, the AR and Yale-B face image databases are employed to investigate the performance of the proposed method. The results show its effectiveness and robustness.","PeriodicalId":420290,"journal":{"name":"International Conference on Artificial Intelligence and Virtual Reality","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130012449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding and Modelling Human Attention for Soft Biometrics Purposes
Dario Cazzato, Marco Leo, P. Carcagnì, Claudio Cimarelli, H. Voos
DOI: https://doi.org/10.1145/3348488.3348500
Abstract: Soft biometrics systems have spread in recent years, both as a means of strengthening classical biometrics and as stand-alone, complete solutions, with application scopes ranging from digital signage to human-robot interaction. Among these, recent work has considered as soft biometrics the temporal evolution of human attention and emotional states, and some studies have partially explored this research line using either expensive tools or RGB-D devices. This work is instead the first attempt to perform soft-biometric identification of individuals from data acquired by a consumer camera, by looking at the evolution of each user's attention over time. Experimental evidence of the feasibility of the proposed framework as a soft biometric is given on a set of 22 users recorded by a tablet's front-facing camera while watching, in an unconstrained mobile setting, a video running on the tablet screen.
{"title":"The Influence of Spatial Awareness on VR: Investigating the influence of the familiarity and awareness of content of the real space to the VR","authors":"A. AlMutawa, Ryoko Ueoka","doi":"10.1145/3348488.3348502","DOIUrl":"https://doi.org/10.1145/3348488.3348502","url":null,"abstract":"A common practice of VR experiments is to create a 3D model replica of the experience room to ease the transition of the real world to the virtual environment. However, the biggest issue with this method is that different spaces would need lots of preparation of 3D models. We propose to use a semi-unified covered space so that the user won't be able to compare the real space and the virtual environment, and some methods to make users believe that the VR space is real. It helps the user to believe in the existence of virtual objects that are not in the real space, and increases the possibility of the existence of the 3D object in real space.","PeriodicalId":420290,"journal":{"name":"International Conference on Artificial Intelligence and Virtual Reality","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131606145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Attention Span Modeling using 2D Visualization Plots for Gaze Progression and Gaze Sustenance
Seba Susan, A. Agarwal, Chetan Gulati, Sunpreet Singh, V. Chauhan
DOI: https://doi.org/10.1145/3348488.3348494
Abstract: This paper presents a novel perspective on human attention span modeling based on gaze estimation from head pose data extracted from videos. This is achieved by devising specialized 2D visualization plots that capture gaze progression and gaze sustenance over time. A low-resolution analysis is assumed, as is the case with most crowd surveillance videos, in which retinal analysis and iris pattern extraction for individual subjects is impossible. This information is useful for studies of the random gaze behavior of humans in a crowded place, or in controlled environments such as seminars or office meetings. The extraction of useful information about an individual's attention span from the spatial and temporal analysis of gaze points is the subject of this paper. Solutions ranging from temporal gaze plots to sustained attention span graphs are investigated, and the results are compared with existing techniques for attention span modeling and visualization.
Research on 3D Virtual Training Courseware Development System of Civil Aircraft Based on Virtual Reality Technology
Yongliang Tian, Ming Li, Hu Liu, Siliang Liu, Rong Yin
DOI: https://doi.org/10.1145/3348488.3348495
Abstract: The application of 3D (three-dimensional) virtual reality technology to theoretical and interactive training for civil aircraft has become a development trend. However, problems remain in the lack of modifiability and the complexity of the development process. In this paper, based on an analysis of the training content requirements and the characteristics of virtual reality technology, 3D virtual training courseware for civil aircraft is divided into component recognition courseware, gas/liquid flow courseware, mechanism action courseware, and disassembly/assembly courseware. The functional requirements of the courseware are analyzed, and a design scheme for a 3D virtual training courseware development system, covering the system architecture, virtual reality engine selection, and courseware flow and logic editing, is proposed. Finally, a sample courseware application test was conducted for all four courseware types.