{"title":"Deaf Poetry: saying everything without speaking","authors":"Dimitrios Batras, Jean-François Jégo, Chu-Yin Chen","doi":"10.1145/2948910.2955108","DOIUrl":"https://doi.org/10.1145/2948910.2955108","url":null,"abstract":"Here, hands don't beat the drum. Instead the drum speaks with its hands, projected onto its skin. They interact and create poems in sign language, especially for the deaf/hearing impaired, because the drum has acquired the expressive and prosodic gestures of deaf poets. This installation is based on a Virtual Reality Agent-based platform to explore an interactive gestural dialogue between real and virtual actors.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125565001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gestures: Emotions Interaction: e-Viographima application for visual artistic synthesis","authors":"I. Mavridou","doi":"10.1145/2948910.2948953","DOIUrl":"https://doi.org/10.1145/2948910.2948953","url":null,"abstract":"This study aims at developing an application, called \"E-Viographima\", for real time artistic synthesis that utilises low-cost human-computer interaction devices. The diverse possibilities of artistic expression are approached through variations of visual stimuli whose parameters are altered according to motion captured data recorded while the users wag their hands, together with concurrent emotional states derived from the users' brain activity. The hands' movements are recorded by a motion capture system (Leap Motion) and four different emotional states are identified through electroencephalography signals (EEG) recorded by a brain signal detection headset (Emotiv Epoc). The primary scope of this project is to produce in real time a visualised output in the form of an interactive line that results in the creation of a synthesis out of contoured forms made by the automated and semi-automated elements.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126228616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"INGREDIBLE: A platform for full body interaction between human and virtual agent that improves co-presence","authors":"Elisabetta Bevacqua, R. Richard, J. Soler, P. D. Loor","doi":"10.1145/2948910.2948943","DOIUrl":"https://doi.org/10.1145/2948910.2948943","url":null,"abstract":"This paper presents a platform dedicated to a full body interaction between a virtual agent and human or between two virtual agents. It is based on the notion of coupling and the metaphor of the alive communication that come from studies in psychology. The platform, based on a modular architecture, is composed of modules that communicate through messages. Four modules have been implemented for human tracking, motion analysis, decision computation and rendering. The paper describes all of them. Part of the decision module is generic, that is it could be used for different interactions based on sensorimotor, while part of it is strictly dependent on the type of scenario one wants to obtain. An application example for a fitness exergame scenario is also presented in this work.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129154732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D Annotation in Contemporary Dance: Enhancing the Creation-Tool Video Annotator","authors":"C. Ribeiro, R. K. D. Anjos, Carla Fernandes, J. Pereira","doi":"10.1145/2948910.2948961","DOIUrl":"https://doi.org/10.1145/2948910.2948961","url":null,"abstract":"Annotated videos have been used in the context of dance performance not only as a way to record and share compositions and knowledge between different choreographers, but also as a powerful learning tool. Restraining the viewpoint of the user to the recorded point of view can be an obstacle in several scenarios. Alternatives that introduce the concept of a three-dimensional space have been developed, but coming short either on the freedom of concepts that the user is able to introduce, or on resorting to a non-natural representation. This article describes a follow-up work on the previously developed Creation-Tool [2] extending the existing functionality to tackle this problem. The developed system places the 2D annotations onto a three-dimensional point cloud, captured by depth sensors coupled with cameras around the performance, thus enabling the user to freely visualize the annotated performance three-dimensionally at an arbitrary point of view.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130256588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What is Movement Interaction in Virtual Reality for?","authors":"M. Gillies","doi":"10.1145/2948910.2948951","DOIUrl":"https://doi.org/10.1145/2948910.2948951","url":null,"abstract":"This paper raises the question of why movement base interaction is important in Virtual Reality (VR). This is an important question as new VR hardware is increasingly being released together with movement interfaces. Slater's view is that VR reproduces the sensorimotor contingencies present in our interactions with the real world. This provides a powerful justification, but when the contingencies are not perfectly reproduced, they can result in interfaces that lack important features of established interaction design: discoverability memorability, and feedback. However, Embodied Cognition suggests that these imperfect reproductions can still have value if they allow us to reproduce our cognitive and emotional engagement with the world and our movements.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121743415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"WhoLoDancE: Towards a methodology for selecting Motion Capture Data across different Dance Learning Practice","authors":"A. Camurri, K. E. Raheb, O. Even-Zohar, Y. Ioannidis, Amalia Markatzi, Jean-Marc Matos, E. Morley-Fletcher, Pablo Palacio, M. Romero, A. Sarti, S. Pietro, Vladimir Viro, Sarah Whatley","doi":"10.1145/2948910.2948912","DOIUrl":"https://doi.org/10.1145/2948910.2948912","url":null,"abstract":"In this paper we present the objectives and preliminary work of WhoLoDancE a Research and Innovation Action funded under the European Union's Horizon 2020 programme, aiming at using new technologies for capturing and analyzing dance movement to facilitate whole-body interaction learning experiences for a variety of dance genres. Dance is a diverse and heterogeneous practice and WhoLoDancE will develop a protocol for the creation and/or selection of dance sequences drawn from different dance styles for different teaching and learning modalities. As dance learning practice lacks standardization beyond dance genres and specific schools and techniques, one of the first project challenges is to bring together a variety of dance genres and teaching practices and work towards a methodology for selecting the appropriate shots for motion capturing, to acquire kinetic material which will provide a satisfying proof of concept for Learning scenarios of particular genres. The four use cases we are investigating are 1) classical ballet, 2) contemporary dance, 3) flamenco and 4) Greek folk dance.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125933064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Movement Notation and Digital Media Art in the Contemporary Dance Practice: Aspects of the Making of a Multimedia Dance Performance","authors":"Foteini Papadopoulou, Martin Schulte","doi":"10.1145/2948910.2948929","DOIUrl":"https://doi.org/10.1145/2948910.2948929","url":null,"abstract":"UPDATED---9 June 2016. This paper presents some of the aspects from the creative process leading to the multimedia dance performance 'as far as abstract objects' ('afaao'). The creation of that stage work was at the same time an extensive experimental transdisciplinary research, a laboratory for movement analysis and movement composition. In particular, this paper will elaborate on the relation and collaboration between two of the fields involved, movement notation and media art. In 'afaao' fundamental principles from movement notation were paired with possibilities from modern media technology. The common ground they shared was the transformation process that movement undergoes when working with notation as well as when working with media art. The objective of this collaboration was to explore the complex phenomenon of movement and to create a work of art that would communicate the research revelations in a visually engaging way with the audience, offering alternative views on the movement's dynamic structures.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132638215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"InterACTE: Improvising with a Virtual Actor","authors":"Dimitrios Batras, Judith Guez, Jean-François Jégo","doi":"10.1145/2948910.2955109","DOIUrl":"https://doi.org/10.1145/2948910.2955109","url":null,"abstract":"This paper describes an interactive installation that invites participants to improvise with an autonomous virtual actor. Based on real-time analysis of the participant's gestures, the virtual actor is capable of either propose expressive and linguistic gestures (selected from a set of four Motion Capture Databases), either imitate the participant, or either generate its own gestures using a Genetic Algorithm. A software agent is used to trigger the previously mentioned behaviors based on the analysis results and on a set of predefined rules. The installation is presented in two acts: i. In the first act, the participant interacts with the projected shadow of the virtual actor; ii. In the second act, the participant is being immersed in a 3D world using a virtual reality HMD---the virtual actor is being presented to her/him in the form of an avatar made of particles.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133923431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pottery gestures style comparison by exploiting Myo sensor and forearm anatomy","authors":"D. Ververidis, Sotirios Karavarsamis, S. Nikolopoulos, Y. Kompatsiaris","doi":"10.1145/2948910.2948924","DOIUrl":"https://doi.org/10.1145/2948910.2948924","url":null,"abstract":"In this paper we propose a set of Electromyogram (EMG) based features such as muscles total pressure, flexors pressure, tensors pressure, and gesture stiffness, for the purpose of identifying differences in performing the same gesture across three pottery constructions namely bowl, cylindrical vase, and spherical vase. In identifying these EMG-based features we have developed a tool for visualizing in real-time the signals generated from a Myo sensor along with the muscle activation level in 3D space. In order to do this, we have introduced an algorithm for estimating the activation level of each muscle based on the weighted sum of the 8 EMG signals captured by Myo. In particular, the weights are calculated as the distance of the muscle cross-sectional volumes at Myo plane level from each of the 8 Myo pods, multiplied by the muscle cross-section volume. Statistics estimated on an experimental dataset for the proposed features such as mean, variance, and percentiles, indicate that gestures such as \"Raise clay\" and \"Form down cyclic clay\" exhibit differences across the three vase types (i.e. bowl, cylinder, and sphere), although perceived as identical.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"46 23","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133782979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D-scene modelling of professional gestures when interacting with moving, deformable and revolving objects","authors":"Odysseas Bouzos, Yannick Jacob, S. Manitsaris, A. Glushkova","doi":"10.1145/2948910.2948949","DOIUrl":"https://doi.org/10.1145/2948910.2948949","url":null,"abstract":"In this paper, we present good practices of applying and extending Random Decision Forests (RDFs) for the 3D modelling of scenes where humans interact with moving, deformable and revolving objects in a professional context. We apply our method to two use-cases; the first is in the industrial context of the luxury leather good production while the second is in an atelier specialised in the wheel-throwing art of pottery. In the first use-case we use a single RDF, while for the second one of pottery, we extend the typical application of RDFs, by introducing the Hierarchical Random Decision Forests (HRDFs). More precisely, we use three RDFs in a tree structure architecture. The parent RDF is used to create a rough initial segmentation of the scene, while the two children RDFs are used to further classify the regions of the left and right arm, hand and fingers respectively. Results demonstrate that the proposed algorithm is sufficient for the accurate classification of scenes where humans interact with objects by using hand gestures in both simple and complex scenarios.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114152300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}