{"title":"Registers as inventions: body, dance, memory and digital medias","authors":"Thembi Rosa, Carlos Falci","doi":"10.1145/3212721.3212876","DOIUrl":"https://doi.org/10.1145/3212721.3212876","url":null,"abstract":"This PhD research aims to map and study a selection of artistic propositions that use digital interfaces to deal with creations related to body, dance, memory and archives. The focus is on how new technologies are affecting the process of creation, opening up new modes of interaction with the public, as well as instigating new approaches to cognition, memory and the means of registration. The research is inspired by and will make use of some methodologies and software tools (Piecemaker and MoSys) developed and used by the Motion Bank project. As such, some 'choreographic objects' will be proposed based on three selected performances made in Brazil; furthermore, different approaches to handling digital archiving in dance will be explored. The main question of this research concerns the ways in which dance makes use of new technologies. We also want to test how much embodied experiences related to choreographic principles can be amplified and perceived throughout the creative process of proposing choreographic objects, as well as sustaining the notion of bodies as dynamic archives.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130571033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How Do Dancers Learn To Dance?: A first-person perspective of dance acquisition by expert contemporary dancers","authors":"Jean-Philippe Rivière, S. Alaoui, Baptiste Caramiaux, W. Mackay","doi":"10.1145/3212721.3212723","DOIUrl":"https://doi.org/10.1145/3212721.3212723","url":null,"abstract":"We are interested in supporting motor skill acquisition in highly creative skilled practices such as dance. We conducted semi-structured interviews with 11 professional dancers to better understand how they learn new dance movements. We found that each dancer engages in a set of operations, including imitation, segmentation, marking, or applying movement variations. We also found a progression in learning dance movements composed of three steps consistently reported by dancers: analysis, integration and personalization. In this study, we aim to provide an empirical foundation to better understand the acquisition of dance movements from the perspective of the learner.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121796620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploiting multimodal integration in adaptive interactive systems and game-based learning interfaces","authors":"Erica Volta, G. Volpe","doi":"10.1145/3212721.3212849","DOIUrl":"https://doi.org/10.1145/3212721.3212849","url":null,"abstract":"The main purpose of my work is to investigate multisensory and multimodal integration in the design and development of adaptive systems and interfaces for game-based learning applications in the areas of education and rehabilitation. To this end, I contributed to the creation of a multimodal dataset of violin performances, integrating motion capture, video, audio, and on-body sensors (accelerometers and EMG), and I worked closely with psychophysicists and educators on the design of paradigms and technologies for multisensory and embodied learning of mathematics in primary school children. The main theoretical foundations of my research are multisensory processing and integration, psychophysics analysis, embodied cognition theories, computational models of non-verbal and emotion communication in full-body movement, and human-computer interaction models for adaptive interfaces and serious games.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127704209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SloMo study #2","authors":"F. Visi","doi":"10.1145/3212721.3212890","DOIUrl":"https://doi.org/10.1145/3212721.3212890","url":null,"abstract":"This piece was composed to explore the use of slow and microscopic body movements in electronic music performance, and the role of rhythmic visual cues and breathing in the perception of movement and time. To do so, it employs wearable sensors (EMG and IMUs), variable-frequency stroboscopic lights, an electronic stethoscope, and a body-worn camera for face tracking. The performer’s left hand very slowly draws an arc that begins with the left arm across the chest and ends when the arm is fully stretched outwards. The whole movement is performed in about 10 minutes and marks the beginning and end of the piece. Every small movement across this arc shifts the selection of a small portion of audio from a buffer used for granular synthesis. The recording loaded in the buffer consists of 8 bars of the piece Kineslimina [1] written by the author and performed by Esther Coorevits. Through these slow movements, that instrumental passage is reinterpreted on a much longer timescale, making it possible to linger on the attack transients of each drum hit and, at the same time, experience it as a single, slowly unfolding event. This continuously evolving rhythm has its counterpoint in the breathing sounds of the performer, which are amplified through an electronic stethoscope placed on their neck. These sounds are processed by an array of resonators that are controlled through the mouth positions of the performer, tracked by the body-worn camera. Breathing acts as the performer’s inner timekeeper, while modulating breathing sounds through mouth positions","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"17 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131504932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modosc","authors":"Luke Dahl, F. Visi","doi":"10.1145/3212721.3212842","DOIUrl":"https://doi.org/10.1145/3212721.3212842","url":null,"abstract":"Marker-based motion capture systems that stream precise movement data in real-time afford interaction scenarios that can be subtle, detailed, and immediate. However, challenges to effectively utilizing this data include having to build bespoke processing systems which may not scale well, and a need for higher-level representations of movement and movement qualities. We present modosc, a set of Max abstractions for computing motion descriptors from raw motion capture data in real time. Modosc is designed to address the data handling and synchronization issues that arise when working with complex marker sets, and to structure data streams in a meaningful and easily accessible manner. This is achieved by adopting a multiparadigm programming approach using o.dot and Open Sound Control. We describe an initial set of motion descriptors, the addressing system employed, and design decisions and challenges.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114769594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smartphone-Assessed Movement Predicts Music Properties: Towards Integrating Embodied Music Cognition into Music Recommender Services via Accelerometer Motion Data","authors":"M. Irrgang, J. Steffens, Hauke Egermann","doi":"10.1145/3212721.3212852","DOIUrl":"https://doi.org/10.1145/3212721.3212852","url":null,"abstract":"Numerous studies have shown a close relationship between movement and music [7], [17], [11], [14], [16], [3], [8]. That is why Leman calls for new mediation technologies to query music in a corporeal way [9]. Thus, the goal of the presented study was to explore how movement captured by smartphone accelerometer data can be related to musical properties. Participants (N = 23, mean age = 34.6 yrs, SD = 13.7 yrs, 13 females, 10 males) moved a smartphone to 15 musical stimuli of 20s length presented in random order. Motion features related to tempo, smoothness, size, regularity, and direction were extracted from accelerometer data to predict the musical qualities \"rhythmicity\", \"pitch level + range\" and \"complexity\" assessed by three music experts. Motion features selected by a 20-fold lasso predicted the musical properties to the following degrees: \"rhythmicity\" (R2 = .47), \"pitch level + range\" (R2 = .03), and \"complexity\" (R2 = .10). As a consequence, we conclude that music properties can be predicted from the movement they evoke, and that an embodied approach to Music Information Retrieval is feasible.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125516855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MAVi","authors":"Ewelina Bakala, Yaying Zhang, Philippe Pasquier","doi":"10.1145/3212721.3212838","DOIUrl":"https://doi.org/10.1145/3212721.3212838","url":null,"abstract":"Since the appearance of video technology, dance has mostly been recorded, reproduced and reinterpreted through the cinematic medium. Current computer vision and motion capture systems encode movement information as joint positions and joint angles in a 3D skeleton. We present the Movement Aesthetic Visualization system (MAVi), a tool for aesthetic movement visualization that enables creative exploration of these data. We discuss how this software solution can be used for three-dimensional dance exploration, dance video prototyping, and generation of movement-driven visuals.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121978830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Creating Virtual Characters","authors":"M. Gillies","doi":"10.1145/3212721.3212835","DOIUrl":"https://doi.org/10.1145/3212721.3212835","url":null,"abstract":"An encounter with a virtual person can be one of the most compelling experiences in immersive virtual reality, as Mel Slater and his group have shown in many experiments on social interaction in VR. Much of this is due to virtual reality's ability to accurately represent body language, since participants can share a 3D space with a character. However, creating virtual characters capable of body language is a challenging task. It is a tacit, embodied skill that cannot be well represented in code. This paper surveys a series of experiments performed by Mel Slater and colleagues that show the power of virtual characters in VR, summarizes details of the technical infrastructure used, and presents Slater's theories of why virtual characters are effective. It then discusses the issues involved in creating virtual characters and the type of tool required. It concludes by proposing that Interactive Machine Learning can provide this type of tool.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"36 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120918566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Kinetic Analysis of Hip Motion During Piano Playing","authors":"Catherine Massie-Laberge, M. Wanderley, I. Cossette","doi":"10.1145/3212721.3212878","DOIUrl":"https://doi.org/10.1145/3212721.3212878","url":null,"abstract":"The swaying action coming from the pianists' hip region constitutes the pivotal point for upper body movements. In this paper, we evaluate the relation between the use of vertical force and structural characteristics of technically different excerpts when pianists play with varied levels of expression. We also analyze the common patterns within the time-series of force data. Ten pianists performed three excerpts from the Romantic period in four expressive conditions. The force was measured with a Bertec force plate placed under the piano stool. The rhythmic and dynamic form, and the technical complexity of the excerpt, induce variations in force when pianists play with different levels of expression. Recurrent patterns of hip motion are found in specific regions of the score, which suggests that these pianists share similar movement strategies to communicate structural features.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128624903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Playground Social Interaction Analysis using Bespoke Wearable Sensors for Tracking and Motion Capture","authors":"B. Heravi, J. Gibson, S. Hailes, D. Skuse","doi":"10.1145/3212721.3212818","DOIUrl":"https://doi.org/10.1145/3212721.3212818","url":null,"abstract":"Unstructured play is considered important for the social, physical and cognitive development of children. Traditional observational research examining play behaviour at playtime (recess) has been hampered by challenges in obtaining reliable data and in processing sufficient quantities of that data to permit credible inferences to be drawn. The emergence of wearable wireless sensor technology makes it possible to study individual differences in childhood social behaviour based on collective movement patterns during playtime. In this work, we introduce a new method to enable simultaneous collection of GNSS/IMU data from a group of children interacting on a playground. We present a detailed description of system development and implementation before going on to explore methods of characterising social groups based on collective movement recording and analysis. A case study was carried out for a class of 7- to 8-year-old children in their school playground during 10 episodes of unstructured play. A further 10 play episodes were monitored in the same space following the introduction of large, loose play materials. This study design allowed us to observe the effect of an environmental intervention on social movement patterns. Sociometric analysis was conducted for comparison and validation. This successful case study demonstrates that sensor-based movement data can be used to explore children's social behaviour during naturalistic play.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121761384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}