{"title":"Transformation to a bird: overcoming the height of fear by inducing the proteus effect of the bird avatar","authors":"Akimi Oyanagi, Ren Ohmura","doi":"10.1145/3313950.3313976","DOIUrl":"https://doi.org/10.1145/3313950.3313976","url":null,"abstract":"The body ownership illusion can give a user the feeling of becoming an avatar, as if it were their own body. Furthermore, existing studies have reported that an avatar's appearance can affect a user's behavior, attitude, and mental condition. Almost all studies thus far have focused on human-like avatars rather than animal avatars. Unlike a human avatar, a bird avatar can be expected to produce psychological effects in a flight experience, because a bird has the ability to fly. Our previous study reported that a bird avatar enhances the sense of flight (i.e., participants feel as though they are flying in the sky) significantly more than a human avatar does. In the present work, we investigate changes in attitudes toward the fear of heights. We conducted experiments in which an avatar is raised to a specific altitude after inducing the sense of body ownership. There were no significant differences between conditions; however, the bird avatar was rated higher than the control condition on some questionnaire items. In particular, becoming a bird avatar decreases the subjective fear of heights.","PeriodicalId":392037,"journal":{"name":"Proceedings of the 2nd International Conference on Image and Graphics Processing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130509895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Experimental research on virtual reality system with panorama","authors":"Xin Xu, Guangpu Qian","doi":"10.1145/3313950.3313973","DOIUrl":"https://doi.org/10.1145/3313950.3313973","url":null,"abstract":"In order to introduce panoramic technology into virtual reality systems, a new VR system modeling method that combines 3D panorama and virtual modeling is proposed. The process and characteristics of virtual modeling were analyzed, and the principle and method of the panorama were presented. The VR system architecture, which combines real and virtual scenes, is verified by building a VR system for a coal mill. The results show that this method offers two advantages: real pictures from the panorama and interoperability from the virtual modeling. It can be applied to immersive remote training and maintenance in the future, strongly broadening the scope of current VR applications.","PeriodicalId":392037,"journal":{"name":"Proceedings of the 2nd International Conference on Image and Graphics Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115320086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advanced template matching prediction using a motion boundary","authors":"Dong-Jae Won, Joo-Hee Moon","doi":"10.1145/3313950.3313959","DOIUrl":"https://doi.org/10.1145/3313950.3313959","url":null,"abstract":"In this paper, an advanced template matching prediction is proposed to improve coding efficiency. Conventional template matching prediction in the joint exploration model (JEM) finds a motion vector for the current coding block by using an adjacent template in the reconstructed region, although some coding blocks may have complicated motion arising from two moving objects or backgrounds. In such cases, the conventional template matching prediction mode may not be efficient. To overcome this problem, we propose a new template matching prediction that applies two motion vectors to predict the coding block. In the proposed template matching prediction, a split mechanism divides the coding block along the motion boundary. The proposed scheme is implemented on JEM reference software version 6.0 and achieves a maximum coding efficiency gain of up to 0.82% in the luminance component.","PeriodicalId":392037,"journal":{"name":"Proceedings of the 2nd International Conference on Image and Graphics Processing","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115478464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HaptDoll: an interactive device for tactile sensation with virtual companion doll","authors":"Jen-Tun Lee, R. J. Rajapakse, Y. Hung, Y. Mao, Y. Tokuyama","doi":"10.1145/3313950.3313974","DOIUrl":"https://doi.org/10.1145/3313950.3313974","url":null,"abstract":"With the advance of display technology, virtual reality has become more valuable than ever, which in turn has promoted the development of VR/AR interactive content. VR has the potential to provide experiences and deliver results that cannot otherwise be achieved. However, interacting with immersive applications is not always straightforward; it is not just about an interface for users to reach their goals, but also about letting users work intuitively, in an experience that is comfortable and free of frustration. This work addresses the difficulty users face in having intuitive experiences in virtual scenarios. For an intuitive experience, we developed a wireless wearable device, worn on the user's upper body, that detects activities and imports them into the virtual environment. For interactive and immersive VR content, we developed a companion doll that provides haptic feedback in the real world and six emotional expressions in the virtual world.","PeriodicalId":392037,"journal":{"name":"Proceedings of the 2nd International Conference on Image and Graphics Processing","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125327314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on data visualization technology of logistics distribution system based on clustering algorithm","authors":"Meng Huang, Shuai Liu, Jinglei Lin","doi":"10.1145/3313950.3313957","DOIUrl":"https://doi.org/10.1145/3313950.3313957","url":null,"abstract":"This paper takes the logistics distribution records of Yifeng Weiye Group over the past two years as the basic research unit. By exploring the relationships between data fields, we use an adaptive clustering algorithm and spatial clustering analysis to process transportation capacity attribute data [8]. Based on the clustering results, we use Python and PHP to optimize the distribution area, and finally design an effective visual expression method to obtain knowledge of the traffic situation. We can thereby provide relevant analysis and technical support for enterprises to improve the efficiency of distribution logistics and optimize the structure of the industrial chain.","PeriodicalId":392037,"journal":{"name":"Proceedings of the 2nd International Conference on Image and Graphics Processing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115520750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"State recognition of circuit breaker based on machine vision","authors":"Qian Jia, Zhifeng Zhou, Yong Zhao","doi":"10.1145/3313950.3313958","DOIUrl":"https://doi.org/10.1145/3313950.3313958","url":null,"abstract":"In the automatic assembly line of the disconnector, the upper cover and the base of the circuit breaker housing are separated at a separation station. If the upper cover and the base are placed too far apart or in the opposite direction, the separation station cannot separate them normally, which affects the operation of subsequent processes. In this paper, a machine vision recognition and detection system for the circuit breaker shell is proposed. The image of the circuit breaker casing is captured by a CCD camera. Image processing is used to detect the edges of the original image of the circuit breaker shell and to extract features. A convex hull algorithm is used to calculate the rotation angle. Then the least squares method is used to calculate the center position of the largest circle, and the upper cover and the base are identified according to the quadrant of the circle's center.","PeriodicalId":392037,"journal":{"name":"Proceedings of the 2nd International Conference on Image and Graphics Processing","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129820699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards integration of user-centered designed tutorials for better virtual reality immersion","authors":"S. F. M. Zaidi, C. Moore, Himanshu Khanna","doi":"10.1145/3313950.3313977","DOIUrl":"https://doi.org/10.1145/3313950.3313977","url":null,"abstract":"Virtual reality (VR) has lately been contributing to education, health sciences, and entertainment. The technology is flexible enough to cater to users of different ages. Additionally, head-mounted displays (HMDs) have increased immersion in the VR environment. Yet it is sometimes hard for users to become engaged if they are not properly instructed on how to interact with the environment, adding stress to what was expected to be an entertaining experience. Hands-on instruction about the environment before VR exposure is common in commercial VR studios, and VR game developers, after completing a game, add tutorial sessions so that users understand how the game works. In this paper, we suggest that there is a need for immersive user-centered designed (UCD) tutorials for VR games, as well as for other high-interaction, high-engagement environments. This approach allows users to gain hands-on experience and a sense of achievement through well-described, interactive tutorials before entering full VR immersion.","PeriodicalId":392037,"journal":{"name":"Proceedings of the 2nd International Conference on Image and Graphics Processing","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116310283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application of virtual reality to the study of festival culture in aboriginal literature","authors":"Chuan-Po Wang, Cheng-Hui Tsai","doi":"10.1145/3313950.3313975","DOIUrl":"https://doi.org/10.1145/3313950.3313975","url":null,"abstract":"This study uses virtual reality to construct the cultural scenes of aboriginal festivals, taking \"virtual reality applied to the study of ritual culture in aboriginal literature\" as its research focus, based on the festival culture written about by aboriginal writers. With the festival culture of the aborigines as the core research problem, virtual reality is applied to cultural education, and the culture-learning effect of 24 students is analyzed. Through multiple-case research and an experimental method, we compare the effectiveness of learning the ritual culture of traditional aboriginal literature before and after integrating virtual reality. The research combines virtual reality with aboriginal culture to realize the digital transmission of cultural education through an innovative digital-humanities learning mode, so as to enhance the learning effect of virtual reality on textual narrative and cultural education and enter a new era of digital humanities. Finally, it is also possible to analyze and compare Taiwan's aboriginal digital culture education with international cultural education to establish innovative teaching goals for digital humanities education.","PeriodicalId":392037,"journal":{"name":"Proceedings of the 2nd International Conference on Image and Graphics Processing","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126951313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digital recognition of street view house numbers based on DCGAN","authors":"Juping Zhong, Jing Gao, Rongjun Chen, Jun Yu Li","doi":"10.1145/3313950.3313963","DOIUrl":"https://doi.org/10.1145/3313950.3313963","url":null,"abstract":"Deep learning algorithms have surpassed human performance in applications such as face recognition and object classification. However, conventional methods can only produce very blurred images lacking detail. A Generative Adversarial Network is a minimax game between a generator G and a discriminator D, trained until it ultimately reaches a Nash equilibrium. We use a deep convolutional GAN (DCGAN) that recognizes digit sequences without splitting characters. First, we use a convolutional network to extract character features. Second, we construct a convolutional neural network to recognize the digits of natural-scene house numbers. DCGAN is used to improve the resolution of blurry house numbers, so as to extract richer data features during training. It can thus better recognize numbers in natural street scenes.","PeriodicalId":392037,"journal":{"name":"Proceedings of the 2nd International Conference on Image and Graphics Processing","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131856870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Time-frequency characterization of resting-state brain function reveals overlapping components with specific topology and frequency content","authors":"T. Bolton, D. Ville","doi":"10.1145/3313950.3314188","DOIUrl":"https://doi.org/10.1145/3313950.3314188","url":null,"abstract":"Even at rest, functional magnetic resonance imaging (fMRI) data displays exquisitely complex temporal dynamics. Here, we deployed a time-frequency analysis to track the modulus of fMRI signals over time, across space (a set of 341 brain areas) and frequency (45 uniformly distributed bins in the 0.01-0.25 Hz range). Decomposing the data into a set of temporally overlapping building blocks by Principal Component Analysis, we exposed diverse functional components with their own modulus pattern across brain locations and frequency sub-ranges. In particular, the component explaining most data variance showed homogeneous modulus across space at low frequencies, fitting with the marked whole-brain signal fluctuations seen in the time courses subjected to analysis. Other components showed topologically well-defined modulus patterns (e.g., contrasting the default mode and visual networks), with characteristic frequency properties and subject-specific activation profiles.","PeriodicalId":392037,"journal":{"name":"Proceedings of the 2nd International Conference on Image and Graphics Processing","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115036634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}