Title: A serious games platform for validating sonification of human full-body movement qualities
Authors: Ksenia Kolykhalova, Paolo Alborno, A. Camurri, G. Volpe
DOI: 10.1145/2948910.2948962 (https://doi.org/10.1145/2948910.2948962)
Published: 2016-07-05, Proceedings of the 3rd International Symposium on Movement and Computing
Abstract: In this paper we describe a serious games platform for validating sonification of human full-body movement qualities. The platform supports the design and development of serious games aimed at validating (i) our techniques for measuring expressive movement qualities, and (ii) the mapping strategies that translate such qualities into the auditory domain by means of interactive sonification and active music experience. The platform is part of a more general framework developed in the context of the EU ICT H2020 DANCE "Dancing in the dark" Project n.645553, which aims at enabling visually impaired people to perceive nonverbal artistic whole-body experiences.
Title: Fingers gestures early-recognition with a unified framework for RGB or depth camera
Authors: S. Manitsaris, A. Tsagaris, A. Glushkova, F. Moutarde, Frédéric Bevilacqua
DOI: 10.1145/2948910.2948947 (https://doi.org/10.1145/2948910.2948947)
Published: 2016-07-05, Proceedings of the 3rd International Symposium on Movement and Computing
Abstract: This paper presents a unified computer-vision framework for finger-gesture early recognition and interaction that can be applied to sequences of either RGB or depth images without any supervised skeleton extraction. Either RGB or time-of-flight cameras can be used to capture finger motions. Hand detection is based on a skin-color model for color images and on distance slicing for depth images. A single hand model is used for finger detection and identification. Static patterns (fingerings) and dynamic patterns (sequences and/or combinations of fingerings) can be early-recognized with a one-shot learning approach using modified Hidden Markov Models. Recognition accuracy is evaluated in two different applications: musical and robotic interaction. In the first case, standardized basic piano-like finger gestures (ascending/descending scales, ascending/descending arpeggios) are used to evaluate the performance of the system. In the second case, both standardized and user-defined gestures (driving, waypoints, etc.) are recognized and used to interactively control an automated guided vehicle.
Title: MoComp: A Tool for Comparative Visualization between Takes of Motion Capture Data
Authors: Carl Malmstrom, Yaying Zhang, Philippe Pasquier, T. Schiphorst, L. Bartram
DOI: 10.1145/2948910.2948932 (https://doi.org/10.1145/2948910.2948932)
Published: 2016-07-05, Proceedings of the 3rd International Symposium on Movement and Computing
Abstract: We present MoComp, an interactive visualization tool that allows users to identify and understand differences in motion between two takes of motion capture data. In MoComp, body-part position and motion are visualized with a focus on the angles of the joints making up each body part. This makes the tool useful for between-take and even between-subject comparison of particular movements, since the angle data is independent of the size of the captured subject.
Title: The i-Treasures Intangible Cultural Heritage dataset
Authors: N. Grammalidis, K. Dimitropoulos, F. Tsalakanidou, A. Kitsikidis, P. Roussel-Ragot, B. Denby, P. Chawah, L. Buchman, S. Dupont, S. Laraba, B. Picart, M. Tits, J. Tilmanne, S. Hadjidimitriou, L. Hadjileontiadis, V. Charisis, C. Volioti, A. Stergiaki, A. Manitsaris, Odysseas Bouzos, S. Manitsaris
DOI: 10.1145/2948910.2948944 (https://doi.org/10.1145/2948910.2948944)
Published: 2016-07-05, Proceedings of the 3rd International Symposium on Movement and Computing
Abstract: In this paper, we introduce the i-Treasures Intangible Cultural Heritage (ICH) dataset, a freely available collection of multimodal data captured from different forms of rare ICH. More specifically, the dataset contains video, audio, depth and motion-capture data, as well as other modalities such as EEG and ultrasound data. It also includes manual annotations of the data, and in some cases additional features and metadata extracted using algorithms and modules developed within the i-Treasures project. We describe the creation process (sensors, capture setups and modules used), the dataset content and the associated annotations. An attractive feature of this ICH database is that it is the first of its kind, providing annotated multimodal data for a wide range of rare ICH types. Finally, some conclusions are drawn and the future development of the dataset is discussed.
{"title":"Extending Methods of Composition and Performance for Live Media Art Through Markerless Voice and Movement Interfaces: An Artist Perspective","authors":"Vesna Petresin","doi":"10.1145/2948910.2948920","DOIUrl":"https://doi.org/10.1145/2948910.2948920","url":null,"abstract":"Transmediation of movement, body data and sound to morphogenetic processes links the trigger and response off-screen, and moves away from wearable tracking devices to gesture and AI. Workflow for composing and designing with movement and voice for media opera may be developed within a single workspace implementing principles of cross modal perception and particle simulations in animation softwares, as has been demonstrated using case studies of experimental practice using 3D film, light, voice, soundscapes and movement to compose and modulate the artistic experience in real time.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125134534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Presenting a Performative Presence: materializing movement data for the design of digital interactions","authors":"Lise Amy Hansen","doi":"10.1145/2948910.2948911","DOIUrl":"https://doi.org/10.1145/2948910.2948911","url":null,"abstract":"This paper makes a case for exploring embodied annotation in real time for the study of movement data for interaction design. The paper argues for the critical role played by agency in performed and lived movement of an interaction; the agency stemming from the internal perceptions in relation to external structural consequences of moving. In particular, that the creative handling or materialization of movement data require boundaries for what movements are made to matter and which are not. I discuss some concerns and considerations of modeling digital movement through enactments exploring kinesthesia.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115894766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perspectives on Real-time Computation of Movement Coarticulation","authors":"Frédéric Bevilacqua, Baptiste Caramiaux, Jules Françoise","doi":"10.1145/2948910.2948956","DOIUrl":"https://doi.org/10.1145/2948910.2948956","url":null,"abstract":"We discuss the notion of movement coarticulation, which has been studied in several fields such as motor control, music performance and animation. In gesture recognition, movement coarticulation is generally viewed as a transition between \"gestures\" that can be problematic. We propose here to account for movement coarticulation as an informative element of skilled practice and propose to explore computational modeling of coarticulation. We show that established probabilistic models need to be extended to accurately take into account movement coarticulation, and we propose research questions towards such a goal.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127222789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards the design of augmented feedforward and feedback for sensorimotor learning of motor skills","authors":"Paraskevi Kritopoulou, S. Manitsaris, F. Moutarde","doi":"10.1145/2948910.2948959","DOIUrl":"https://doi.org/10.1145/2948910.2948959","url":null,"abstract":"Creating a digital metaphor of the \"in person transmission\" of manual-crafting motor skills, is an extremely complicated and challenging task. We are aiming to achieve the above by creating a mixed reality environment, supported by an interactive system for sensorimotor learning that relies on pathing techniques. The gestural instruction of a person, the Learner, arises from the reference gesture of an Expert. The concept of the system is based on the simple idea of guiding with the projection of a gesture depicting path in 2D space and in real time. The path is projected either as a feedforward that describes the gesture that has to be executed next, either as a feedback that amends the gesture while taking into account the time needed to correct the mistake. This projection takes place in the exact area where the object lies and the Learner is being trained, to avoid any distraction from the crafting task.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126850353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: m+m: A novel Middleware for Distributed, Movement based Interactive Multimedia Systems
Authors: Ulysses Bernardet, Dhruv Adhia, Norman Jaffe, Johnty Wang, Michael Nixon, Omid Alemi, J. Phillips, S. DiPaola, Philippe Pasquier, T. Schiphorst
DOI: 10.1145/2948910.2948942 (https://doi.org/10.1145/2948910.2948942)
Published: 2016-07-05, Proceedings of the 3rd International Symposium on Movement and Computing
Abstract: Embodied interaction has the potential to provide users with uniquely engaging and meaningful experiences. m+m: Movement + Meaning middleware is an open-source software framework that enables users to construct real-time, interactive systems that are based on movement data. The acquisition, processing, and rendering of movement data can be local or distributed, real-time or off-line. Key features of the m+m middleware are a small footprint in terms of computational resources, portability between different platforms, and high performance in terms of reduced latency and increased bandwidth. Examples of systems that can be built with m+m as the internal communication middleware include those for the semantic interpretation of human movement data, machine-learning models for movement recognition, and the mapping of movement data as a controller for online navigation, collaboration, and distributed performance.
Title: The Dancer in the Eye: Towards a Multi-Layered Computational Framework of Qualities in Movement
Authors: A. Camurri, G. Volpe, Stefano Piana, M. Mancini, Radoslaw Niewiadomski, Nicola Ferrari, C. Canepa
DOI: 10.1145/2948910.2948927 (https://doi.org/10.1145/2948910.2948927)
Published: 2016-07-05, Proceedings of the 3rd International Symposium on Movement and Computing
Abstract: This paper presents a conceptual framework for the analysis of expressive qualities of movement. Our perspective is to model an observer of a dance performance. The conceptual framework consists of four layers, ranging from the physical signals that sensors capture to the qualities that movement communicates (e.g., in terms of emotions). The framework aims to provide a conceptual background that the development of computational systems can build upon, with particular reference to systems that analyze a vocabulary of expressive movement qualities and translate them to other sensory channels, such as the auditory modality. Such systems enable their users to "listen to a choreography" or to "feel a ballet", in a new kind of cross-modal mediated experience.