{"title":"Designing MUSE: A Multimodal User Experience for a Shopping Mall Kiosk","authors":"Andreea Niculescu, Kheng Hui Yeo, R. Banchs","doi":"10.1145/2974804.2980521","DOIUrl":"https://doi.org/10.1145/2974804.2980521","url":null,"abstract":"Multimodal interactions provide more engaging experiences, allowing users to perform complex tasks while searching for information. In this paper, we present a multimodal interactive kiosk for displaying information in shopping malls. The kiosk uses visual information and natural language to communicate with visitors. Users can connect to the kiosk using their own mobile phone as a speech or text input device. The connection is established by scanning a QR code displayed on the kiosk screen. Field work, observations, design, system architecture and implementation are reported.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123051864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Children's Facial Expressions in Truthful and Deceptive Interactions with a Virtual Agent","authors":"M. Pereira, J. D. Lange, S. Shahid, M. Swerts","doi":"10.1145/2974804.2974815","DOIUrl":"https://doi.org/10.1145/2974804.2974815","url":null,"abstract":"The present study focused on the facial expressions that children exhibit while they try to deceive a virtual agent. An interactive lie elicitation game was developed to record children's facial expressions during deceptive and truthful utterances, when doing the task alone or in the presence of peers. Based on manual annotations of their facial expressions, we found that children, while communicating with a virtual agent, produce different facial expressions in deceptive and truthful contexts. It seems that deceptive children try to cover their lie, as they smile significantly more than truthful children. Moreover, co-presence enhances children's facial expressive behaviour and the number of cues to deceit. Deceivers, especially when together with a friend, more often press their lips, smile, blink and avert their gaze than truth-tellers.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126068586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Main Track Session V: Extending Body Image","authors":"Hirotaka Osawa, Tetsushi Oka","doi":"10.1145/3257127","DOIUrl":"https://doi.org/10.1145/3257127","url":null,"abstract":"","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128697935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Main Track Session II: Power of Groups","authors":"T. Iio, Sin-Hwa Kang","doi":"10.1145/3257124","DOIUrl":"https://doi.org/10.1145/3257124","url":null,"abstract":"","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121218682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Humotion: A Human Inspired Gaze Control Framework for Anthropomorphic Robot Heads","authors":"Simon Schulz, Florian Lier, A. Kipp, S. Wachsmuth","doi":"10.1145/2974804.2974827","DOIUrl":"https://doi.org/10.1145/2974804.2974827","url":null,"abstract":"In recent years, attempts have been made to make robot control more intuitive and intelligible by exploiting and integrating anthropomorphic features to boost social human-robot interaction. The design and construction of anthropomorphic robots for this kind of interaction is not the only challenging issue -- smooth and expectation-matching motion control is still an unsolved topic. In this work we present a highly configurable, portable, and open control framework that facilitates anthropomorphic motion generation for humanoid robot heads by enhancing state-of-the-art neck-eye coordination with human-like eyelid saccades and animation. On top of that, the presented framework supports dynamic neck offset angles that allow animation overlays and changes in alignment to the robot's communication partner while retaining visual focus on a given target. In order to demonstrate the universal applicability of the proposed ideas, we used this framework to control the Flobi and the iCub robot heads, both in simulation and on the physical robots. In order to foster further comparative studies of different robot heads, we will release all software based on this contribution under an open-source license.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114850822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-cultural Study of Perception and Acceptance of Japanese Self-adaptors","authors":"T. Ishioh, Tomoko Koda","doi":"10.1145/2974804.2980491","DOIUrl":"https://doi.org/10.1145/2974804.2980491","url":null,"abstract":"This paper reports preliminary results of a cross-cultural study of the perception and acceptance of culture-specific self-adaptors performed by a virtual agent. There are culturally defined preferences in self-adaptors and other bodily expressions, and the level of tolerance for expressing such non-verbal behavior is culture-dependent. We conducted a web experiment to evaluate the impression and acceptance of Japanese culture-specific self-adaptors, gathering participants from 8 countries. The results indicated non-Japanese participants' insensitivity to the different types of self-adaptors and Japanese participants' oversensitivity to stressful self-adaptors.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130187179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding Behaviours and Roles for Social and Adaptive Robots In Education: Teacher's Perspective","authors":"M. Ahmad, Omar Mubin, Joanne Orlando","doi":"10.1145/2974804.2974829","DOIUrl":"https://doi.org/10.1145/2974804.2974829","url":null,"abstract":"In order to establish a long-term relationship between a robot and a child, robots need to learn from the environment, adapt to specific user needs, and display behaviours and roles accordingly. Literature shows that certain robot behaviours can negatively impact a child's learning and performance. Therefore, the purpose of the present study is not only to understand teachers' opinions on existing effective social behaviours and roles but also to identify novel behaviours that can positively influence children's performance in a language learning setting. In this paper, we present results based on interviews conducted with 8 language teachers to gather their opinions on how a robot can efficiently perform behaviour adaptation to influence learning and achieve long-term engagement. We also present future directions extracted from the interviews with the teachers.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130853425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Behavioral Expression Design onto Manufactured Figures","authors":"Yoshihisa Ishihara, Kazuki Kobayashi, S. Yamada","doi":"10.1145/2974804.2980484","DOIUrl":"https://doi.org/10.1145/2974804.2980484","url":null,"abstract":"Natural language user interfaces, such as Apple Siri and Google Voice Search, have been embedded in consumer devices; however, speaking to objects can feel awkward. Use of these interfaces should feel natural, like speaking to a real listener. This paper proposes a method for manufactured objects such as anime figures to exhibit highly realistic behavioral expressions to improve speech interaction between a user and an object. Using a projection mapping technique, an anime figure provides back-channel feedback to a user by appearing to nod or shake its head.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133390242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human Posture Detection using H-ELM Body Part and Whole Person Detectors for Human-Robot Interaction","authors":"M. Ramanathan, W. Yau, E. Teoh","doi":"10.1145/2974804.2980480","DOIUrl":"https://doi.org/10.1145/2974804.2980480","url":null,"abstract":"For reliable human-robot interaction, the robot must know the person's action in order to plan the appropriate way to interact with or assist the person. As part of the pre-processing stage of action recognition, the robot also needs to recognize the various body parts and the posture of the person. However, estimating posture and body parts is challenging due to the articulated nature of the human body and the huge intra-class variations. To address this challenge, we propose two schemes using Hierarchical-ELM (H-ELM) for classifying posture as either upright or non-upright. In the first scheme, we follow a whole body detector approach, where a H-ELM classifier is trained on several whole body postures. In the second scheme, we follow a body part detection approach, where separate H-ELM classifiers are trained for each body part. Using the detected body parts, a final decision is made on the posture of the person. We have conducted several experiments to compare the performance of both approaches under different scenarios such as view-angle changes and occlusion. Our experimental results show that the body part H-ELM based posture detection outperforms the whole body detector approach, even in the presence of occlusion.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115435862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring Social Interaction with Everyday Object based on Perceptual Crossing","authors":"S. A. Anas, S. Qiu, G.W.M. Rauterberg, Jun Hu","doi":"10.1145/2974804.2974810","DOIUrl":"https://doi.org/10.1145/2974804.2974810","url":null,"abstract":"Eye gaze plays an essential role in social interaction and influences our perception of others. We most likely perceive the existence of another intentional subject through the act of catching one another's eyes. Based on the notion of perceptual crossing, we aim to establish a meaningful social interaction that emerges out of the perceptual crossing between a person and an everyday object, exploiting the person's gazing behavior as the input modality for the system. We reviewed experiments in the literature that adopt perceptual crossing as their foundation; the lessons learned were used as input for a concept to create meaningful social interaction. We used an eye-tracker to measure gaze behavior, allowing the participant to interact with the object through active visual exploration. This creates a situation where both mutually become aware of each other's existence. Further, we discuss the motivation for this research, present a preliminary experiment that informed our design decisions, and outline directions for future work.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115671165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}