{"title":"Techniques and Approaches in Static Visualization of Motion Capture Data","authors":"William Li, L. Bartram, Philippe Pasquier","doi":"10.1145/2948910.2948935","DOIUrl":"https://doi.org/10.1145/2948910.2948935","url":null,"abstract":"In this paper we present a state of the art of the current approaches to visualization of motion capture data. We discuss the data representation, pre-processing techniques, and the design of existing tools and systems. Next we outline the advantages and disadvantages of the systems, some of which are explicitly noted by the original authors. Lastly we conclude with an overall summary and future directions.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124370001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BalOnSe: Ballet Ontology for Annotating and Searching Video performances","authors":"K. E. Raheb, Nicolas Papapetrou, A. Katifori, Y. Ioannidis","doi":"10.1145/2948910.2948926","DOIUrl":"https://doi.org/10.1145/2948910.2948926","url":null,"abstract":"In this paper we present BalOnSe (named after the ballet step balance), an ontology-based web interface that allows the user to annotate classical ballet videos, with a hierarchical domain specific vocabulary and provides an archival system for videos of dance. The interface integrates a hierarchical vocabulary based on classical ballet syllabus terminology (Ballet.owl) implemented as an OWL-2 ontology. BalOnSe supports the search and browsing of the multimedia content using metadata (title, dancer featured, etc.), and also implements the functionality of \"searching by movement concepts\", i.e., filtering the videos that are associated with particular required terms of the vocabulary, based on previous submitted annotations. In the paper, we present the ballet.owl ontology, and its structure, explaining the conceptual modeling decisions. We highlight the main functionality of the system and finally, we present how the manual ontology guided annotation allows the user to search the content through the vocabularies and also view statistics in the form of tag clouds.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115078450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Interactive Machine Learning to Sonify Visually Impaired Dancers' Movement","authors":"S. Katan","doi":"10.1145/2948910.2948960","DOIUrl":"https://doi.org/10.1145/2948910.2948960","url":null,"abstract":"This preliminary research investigates the application of Interactive Machine Learning (IML) to sonify the movements of visually impaired dancers. Using custom wearable devices with localized sound, our observations demonstrate how sonification enables the communication of time-based information about movements such as phrase length and periodicity, and nuanced information such as magnitudes and accelerations. The work raises a number challenges regarding the application of IML to this domain. In particular we identify a need for ensuring even rates of change in regression models when performing sonification and a need for consideration of how to convey machine learning approaches to end users.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116223765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On mapping emotional states and implicit gestures to sonification output from the 'Intangible Musical Instrument'","authors":"C. Volioti, S. Hadjidimitriou, S. Manitsaris, L. Hadjileontiadis, V. Charisis, A. Manitsaris","doi":"10.1145/2948910.2948950","DOIUrl":"https://doi.org/10.1145/2948910.2948950","url":null,"abstract":"Sonification is an interdisciplinary field of research, aiming at generating sound from data based on systematic, objective and reproducible transformations. Towards this direction, expressive gestures play an important role in music performances facilitating the artistic perception by the audience. Moreover, emotions are linked with music, as sound has the ability to evoke emotions. In this vein, a combinatory approach which aims at gesture and emotion sonification in the context of music composition and performance is presented here. The added value of the proposed system is that both gesture and emotion are able to continuously manipulate the reproduced sound in real-time.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128751243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring Audience Behaviour During Contemporary Dance Performances","authors":"Lida Theodorou, P. Healey, F. Smeraldi","doi":"10.1145/2948910.2948928","DOIUrl":"https://doi.org/10.1145/2948910.2948928","url":null,"abstract":"How can performers detect and potentially respond to the reactions of a live audience? Audience members' physical movements provide one possible source of information about their engagement with a performance. Using a case study of the dance performance \"Frames\" that took place in Theatre Royal in Glasgow during March 2015, we examine patterns of audience movement during contemporary dance performances and explore how they relate to the dancer's movements. Video recordings of performers and audience were analysed using computer vision and data analysis techniques extracting facial expression, hand gestural and body movement data. We found that during the performance audiences move very little and have predominantly expressionless faces while hand gestures seem to play a significant role in the way audiences respond. This suggests that stillness i.e. the absence of motion may be an indicator of engagement.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129538493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Voice and movement as predictors of gesture types and physical effort in virtual object interactions of classical Indian singing","authors":"Stella Paschalidou, T. Eerola, M. Clayton","doi":"10.1145/2948910.2948914","DOIUrl":"https://doi.org/10.1145/2948910.2948914","url":null,"abstract":"This paper reports on the exploration of the relationships between gesture and sound in the context of practicing 'sound sculpting' [4] in Hindustani (Dhrupad) vocal improvisation. In this practice, singers of the classical Indian music tradition often engage with melodic ideas by manipulating intangible, imaginary objects and materials with their hands while singing. Here we explore the interaction possibilities that both malleable (through elasticity) as well as rigid (through weight/friction) objects can afford by accounting for the physical effort that these require. Specifically, we focus on using movement and audio features for (a) predicting bodily effort levels through linear regression and (b) classifying gestures as either elastic or rigid interactions through logistic regression. The results suggest that a good part of the variance in both physical effort and gesture type can be explained through a small set of audio and motion features.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127718557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rock Art Rocks Me","authors":"A. Dubos, Jean-François Jégo","doi":"10.1145/2948910.2948918","DOIUrl":"https://doi.org/10.1145/2948910.2948918","url":null,"abstract":"We are looking for something primitive: a memory from before our birth. Something obvious, we all carry and that evolves within us: the first gestures of the first men. Between art, science and technology, our research tends to a virtual scene of rock art in action. Assuming that the cave paintings are the traces of oral performance or dance rites [1], they have been used as transmission and communication media for the knowledge of surrounding and environments of the early men (hunting, myth, history, art or religious content, shamanism). Through the scope of these fragmentary traces of history, we are seeking to reconstruct various performative scenarios echoing the paintings of the Chauvet's cave.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122270496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Movement of Things Exploring Inertial Motion Sensing When Autonomous, Tiny and Wireless","authors":"Andreas Schlegel, Cédric Honnet","doi":"10.1145/2948910.2948916","DOIUrl":"https://doi.org/10.1145/2948910.2948916","url":null,"abstract":"The Movement of Things project is an exploration into the qualities and properties of movement. Through a range of exercises these movements are captured and translated by custom-built software and the use of an autonomous, tiny and wireless motion sensor. A series of Motion Sensing Extensions suggest different approaches of how to use a motion sensor within various physical environments to capture movement to better understand the materialization of movement and new forms of interactions through movement.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132944272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How Do Choreographers Craft Dance?: Designing for a Choreographer-Technology Partnership","authors":"Marianela Ciolfi Felice, S. Alaoui, W. Mackay","doi":"10.1145/2948910.2948941","DOIUrl":"https://doi.org/10.1145/2948910.2948941","url":null,"abstract":"Choreographers rarely have access to interactive tools that are designed specifically to support their creative process. In order to design for such a technology, we interviewed six contemporary choreographers about their creative practice. We found that even though each process is unique, choreographers represent their ideas by applying a set of operations onto choreographic objects. Throughout different creative phases, choreographers compose by shifting among various degrees of specificity and vary their focal points from dancers to stage, to interaction, to the whole piece. Based on our findings, we present a framework for articulating the higher-level patterns that emerge from these complex and idiosyncratic processes. We then articulate the resulting implications for the design of interactive tools to support the choreographic practice.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133101412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Activity Patterns Performed With Emotion","authors":"Qi Wang, T. Artières, Yu Ding","doi":"10.1145/2948910.2948958","DOIUrl":"https://doi.org/10.1145/2948910.2948958","url":null,"abstract":"This paper is a preliminary work towards the design of a model able to generate realistic motion sequences conditioned on a number of contextual variables like age, morphology, emotion etc. We focus in a first step on the design of contextual markovian models able to perform recognition of activities performed under various emotions even in the case no training samples are available for a particular (activity, emotion) pair, a zero shot learning setting.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125768072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}