Proceedings of the 5th International Conference on Movement and Computing: Latest Publications

Registers as inventions: body, dance, memory and digital medias
Pub Date: 2018-06-28 | DOI: 10.1145/3212721.3212876
Thembi Rosa, Carlos Falci
{"title":"Registers as inventions: body, dance, memory and digital medias","authors":"Thembi Rosa, Carlos Falci","doi":"10.1145/3212721.3212876","DOIUrl":"https://doi.org/10.1145/3212721.3212876","url":null,"abstract":"This PHD research aims to map and study a selection of artistic propositions that use digital interfaces to deal with creations related to body, dance, memory and archives. The focus is on how new technologies are affecting the process of creation, opening up to new modes of interactions with the public, as well as instigating new approaches to cognition, memory and the means of registration. The research is inspired by and will make use of some methodologies and softwares (Piecemaker and MoSys) developed and used by the Motion Bank project. As such, some 'choreographic objects' will be proposed based on three selected performances made in Brazil; furthermore, different approaches to handling digital archiving in dance will be explored. The main question of this research refers to the modes of dance using new technologies. We also want to test how much embodied experiences related to choreographic's principles can be amplified and perceived throughout the creative process of proposing choreographic objects as well as sustaining the notions of bodies as dynamical archives.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130571033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
How Do Dancers Learn To Dance?: A first-person perspective of dance acquisition by expert contemporary dancers
Pub Date: 2018-06-28 | DOI: 10.1145/3212721.3212723
Jean-Philippe Rivière, S. Alaoui, Baptiste Caramiaux, W. Mackay
{"title":"How Do Dancers Learn To Dance?: A first-person perspective of dance acquisition by expert contemporary dancers","authors":"Jean-Philippe Rivière, S. Alaoui, Baptiste Caramiaux, W. Mackay","doi":"10.1145/3212721.3212723","DOIUrl":"https://doi.org/10.1145/3212721.3212723","url":null,"abstract":"We are interested in supporting motor skill acquisition in highly creative skilled practices such as dance. We conducted semi-structured interviews with 11 professional dancers to better understand how they learn new dance movements. We found that each dancer engages in a set of operations, including imitation, segmentation, marking, or applying movement variations. We also found a progression in learning dance movements composed of three steps consistently reported by dancers: analysis, integration and personalization. In this study, we aim to provide an empirical foundation to better understand the acquisition of dance movements from the perspective of the learner.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121796620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Exploiting multimodal integration in adaptive interactive systems and game-based learning interfaces
Pub Date: 2018-06-28 | DOI: 10.1145/3212721.3212849
Erica Volta, G. Volpe
{"title":"Exploiting multimodal integration in adaptive interactive systems and game-based learning interfaces","authors":"Erica Volta, G. Volpe","doi":"10.1145/3212721.3212849","DOIUrl":"https://doi.org/10.1145/3212721.3212849","url":null,"abstract":"The main purpose of my work is to investigate multisensory and multimodal integration in the design and development of adaptive systems and interfaces for game-based learning applications in the areas of education and rehabilitation. To this aim, I contributed to the creation of a multimodal dataset of violin performances, integrating motion capture, video, audio, and on-body sensors (accelerometers and EMG), and I worked closely with psychophysicists and educators on the design of paradigms and technologies for multisensory and embodied learning of mathematics in primary school children. Main theoretical foundations of my research are multisensory processing and integration, psychophysics analysis, embodied cognition theories, computational models of non-verbal and emotion communication in full-body movement, and human-computer interaction models for adaptive interfaces and serious-games.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127704209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
SloMo study #2
Pub Date: 2018-06-28 | DOI: 10.1145/3212721.3212890
F. Visi
{"title":"SloMo study #2","authors":"F. Visi","doi":"10.1145/3212721.3212890","DOIUrl":"https://doi.org/10.1145/3212721.3212890","url":null,"abstract":"This piece was composed to explore the use of slow and microscopic body movements in electronic music performance, and the role of rhythmic visual cues and breathing in the perception of movement and time. To do so, it employs wearable sensors (EMG and IMUs), variable-frequency stroboscopic lights, an electronic stethoscope, and a body-worn camera for face tracking. The performer’s left hand very slowly draws an arc that begins with the left arm across the chest and ends when the arm is fully stretched outwards. The whole movement is performed in about 10 minutes and marks the beginning and end of the piece. Every small movement across this arc shifts the selection of a small portion of audio from a buffer used for granular synthesis. The recording loaded in the buffer consists of 8 bars of the piece Kineslimina [1] written by the author and performed by Esther Coorevits. Through these slow movements, that instrumental passage is reinterpreted on a much longer timescale, making it possible to linger on the attack transients of each drum hit and, at the same time, experience it as a single, slowly unfolding event. This continuously evolving rhythm has its counterpoint in the breathing sounds of the performer, which are amplified through an electronic stethoscope placed on their neck. These sounds are processed by an array of resonators that are controlled through the mouth positions of the performer, tracked by the body-worn camera. 
Breathing acts as the performer’s inner timekeeper, while modulating breathing sounds through mouth positions","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"17 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131504932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Modosc
Pub Date: 2018-06-28 | DOI: 10.1145/3212721.3212842
Luke Dahl, F. Visi
{"title":"Modosc","authors":"Luke Dahl, F. Visi","doi":"10.1145/3212721.3212842","DOIUrl":"https://doi.org/10.1145/3212721.3212842","url":null,"abstract":"Marker-based motion capture systems that stream precise movement data in real-time afford interaction scenarios that can be subtle, detailed, and immediate. However, challenges to effectively utilizing this data include having to build bespoke processing systems which may not scale well, and a need for higher-level representations of movement and movement qualities. We present modosc, a set of Max abstractions for computing motion descriptors from raw motion capture data in real time. Modosc is designed to address the data handling and synchronization issues that arise when working with complex marker sets, and to structure data streams in a meaningful and easily accessible manner. This is achieved by adopting a multiparadigm programming approach using o.dot and Open Sound Control. We describe an initial set of motion descriptors, the addressing system employed, and design decisions and challenges.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114769594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
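modosc itself is a set of Max abstractions, but the core idea the abstract describes, computing a motion descriptor from raw marker data and exposing it under a structured OSC-style address, can be sketched in plain Python. The address scheme and marker name below are illustrative assumptions, not modosc's actual namespace:

```python
import math

def speed(positions, dt):
    """Frame-to-frame speed of one marker from successive 3D positions,
    given a fixed sampling interval dt (seconds per frame)."""
    out = []
    for p0, p1 in zip(positions, positions[1:]):
        out.append(math.dist(p0, p1) / dt)
    return out

def to_osc_messages(marker_name, speeds):
    """Pair each descriptor value with a structured OSC-style address
    (the /descriptors/... pattern here is a hypothetical scheme)."""
    return [(f"/descriptors/{marker_name}/speed", v) for v in speeds]

# Three mocap frames of a marker, captured at 120 Hz
frames = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.1, 0.2, 0.0)]
msgs = to_osc_messages("lefthand", speed(frames, dt=1 / 120))
```

In a real-time setup these messages would be sent over UDP to a synthesis environment; the point of the structured address is that consumers can subscribe to `/descriptors/lefthand/*` without knowing how the values were computed.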
Smartphone-Assessed Movement Predicts Music Properties: Towards Integrating Embodied Music Cognition into Music Recommender Services via Accelerometer Motion Data
Pub Date: 2018-06-28 | DOI: 10.1145/3212721.3212852
M. Irrgang, J. Steffens, Hauke Egermann
{"title":"Smartphone-Assessed Movement Predicts Music Properties: Towards Integrating Embodied Music Cognition into Music Recommender Services via Accelerometer Motion Data","authors":"M. Irrgang, J. Steffens, Hauke Egermann","doi":"10.1145/3212721.3212852","DOIUrl":"https://doi.org/10.1145/3212721.3212852","url":null,"abstract":"Numerous studies have shown a close relationship between movement and music [7], [17], [11], [14], [16], [3], [8]. That is why Leman calls for new mediation technologies to query music in a corporeal way [9]. Thus, the goal of the presented study was to explore how movement captured by smartphone accelerometer data can be related to musical properties. Participants (N = 23, mean age = 34.6 yrs, SD = 13.7 yrs, 13 females, 10 males) moved a smartphone to 15 musical stimuli of 20s length presented in random order. Motion features related to tempo, smoothness, size, regularity, and direction were extracted from accelerometer data to predict the musical qualities \"rhythmicity\", \"pitch level + range\" and \"complexity\" assessed by three music experts. Motion features selected by a 20-fold lasso predicted the musical properties to the following degrees \"rhythmicity\" (R2: .47), pitch level and range (R2: .03) and complexity (R2: .10). 
As a consequence, we conclude that music properties can be predicted from the movement it evoked, and that an embodied approach to Music Information Retrieval is feasible.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125516855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
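As a rough illustration of the feature-extraction step described above, here is a minimal Python sketch deriving size-, smoothness- and regularity-like features from 3-axis accelerometer samples. The feature definitions are simplified stand-ins, not the study's actual feature set:

```python
import math

def accel_features(samples, dt):
    """Toy motion features from a stream of (x, y, z) accelerometer samples.
    Definitions are illustrative proxies for the paper's feature families."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean_mag = sum(mags) / len(mags)                  # movement "size"
    jerks = [abs(b - a) / dt for a, b in zip(mags, mags[1:])]
    smoothness = -sum(jerks) / len(jerks)             # higher (less negative) = smoother
    var = sum((m - mean_mag) ** 2 for m in mags) / len(mags)
    regularity = 1.0 / (1.0 + var)                    # crude variability-based proxy
    return {"size": mean_mag, "smoothness": smoothness, "regularity": regularity}

# 2 s of synthetic 50 Hz data: gentle sinusoidal shaking along one axis
samples = [(math.sin(2 * math.pi * 2 * t / 50), 0.0, 0.0) for t in range(100)]
feats = accel_features(samples, dt=1 / 50)
```

Feature vectors like this one, computed per stimulus, would then feed the regression stage (the study used lasso with 20-fold selection) against expert ratings.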
MAVi
Pub Date: 2018-06-28 | DOI: 10.1145/3212721.3212838
Ewelina Bakala, Yaying Zhang, Philippe Pasquier
{"title":"MAVi","authors":"Ewelina Bakala, Yaying Zhang, Philippe Pasquier","doi":"10.1145/3212721.3212838","DOIUrl":"https://doi.org/10.1145/3212721.3212838","url":null,"abstract":"Since the appearance of the video technology, dance is mostly recorded, reproduced and reinterpreted through the cinematic medium. Current computer vision and motion capture systems encode movement information as joints positions, and angles of joints in a 3D skeleton. We present the Movement Aesthetic Visualization system (MAVi), a tool for aesthetic movement visualization that enables creative exploration of these data. We discuss how this software solution can be used for three-dimensional dance exploration, dance video prototyping, and generation of movement-driven visuals.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121978830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
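The joint-position/joint-angle encoding mentioned in the abstract is the standard skeleton representation; recovering the angle at a joint from three 3D points is a generic textbook computation (this sketch is not MAVi's code):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, formed by 3D points a-b-c,
    e.g. shoulder-elbow-wrist gives the elbow flexion angle."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(p * q for p, q in zip(v1, v2))
    n1 = math.sqrt(sum(p * p for p in v1))
    n2 = math.sqrt(sum(q * q for q in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Right-angle elbow: upper arm along -y, forearm along +x
shoulder, elbow, wrist = (0, 1, 0), (0, 0, 0), (1, 0, 0)
angle = joint_angle(shoulder, elbow, wrist)
```

Visualization tools can then drive visuals from either the raw joint positions or derived angle trajectories like this one.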
Creating Virtual Characters
Pub Date: 2018-06-28 | DOI: 10.1145/3212721.3212835
M. Gillies
{"title":"Creating Virtual Characters","authors":"M. Gillies","doi":"10.1145/3212721.3212835","DOIUrl":"https://doi.org/10.1145/3212721.3212835","url":null,"abstract":"An encounter with a virtual person can be one of the most compelling experiences in immersive virtual reality, as Mel Slater and his group have shown in many experiments on social interaction in VR. Much of this is due to virtual reality's ability to accurately represent body language, since participants can share a 3D space with a character. However, creating virtual characters capable of body language is a challenging task. It is a tacit, embodied skill that cannot be well represented in code. This paper surveys a series of experiments performed by Mel Slater and colleagues that show the power of Virtual Characters in VR and summarizes details of the technical infrastructure used, and Slater's theories of why virtual characters are effective. It they discusses the issues involved in creating virtual characters and the type of tool required. It concludes by proposing that Interactive Machine Learning can provide this type of tool.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"36 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120918566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Kinetic Analysis of Hip Motion During Piano Playing
Pub Date: 2018-06-28 | DOI: 10.1145/3212721.3212878
Catherine Massie-Laberge, M. Wanderley, I. Cossette
{"title":"Kinetic Analysis of Hip Motion During Piano Playing","authors":"Catherine Massie-Laberge, M. Wanderley, I. Cossette","doi":"10.1145/3212721.3212878","DOIUrl":"https://doi.org/10.1145/3212721.3212878","url":null,"abstract":"The swaying action coming from the pianists' hip region constitutes the pivotal point for upper body movements. In this paper, we evaluate the relation between the use of vertical force and structural characteristics of technically different excerpts when pianists play with varied levels of expression. We also analyze the common patterns within the time-series of force data. Ten pianists performed three excerpts from the Romantic period in four expressive conditions. The force was measured with a Bertec force plate placed under the piano stool. The rhythmic and dynamic form, and the technical complexity of the excerpt, induce variations in force when pianists play with different levels of expression. Recurrent patterns of hip motion are found in specific regions of the score, which suggests that these pianists share similar movement strategies to communicate structural features.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128624903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Playground Social Interaction Analysis using Bespoke Wearable Sensors for Tracking and Motion Capture
Pub Date: 2018-06-28 | DOI: 10.1145/3212721.3212818
B. Heravi, J. Gibson, S. Hailes, D. Skuse
{"title":"Playground Social Interaction Analysis using Bespoke Wearable Sensors for Tracking and Motion Capture","authors":"B. Heravi, J. Gibson, S. Hailes, D. Skuse","doi":"10.1145/3212721.3212818","DOIUrl":"https://doi.org/10.1145/3212721.3212818","url":null,"abstract":"Unstructured play1 is considered important for the social, physical and cognitive development of children. Traditional observational research examining play behaviour at playtime (recess) has been hampered by challenges in obtaining reliable data and in processing sufficient quantities of that data to permit credible inferences to be drawn. The emergence of wearable wireless sensor technology makes it possible to study individual differences in childhood social behaviour based on collective movement patterns during playtime. In this work, we introduce a new method to enable simultaneous collection of GNSS/IMU data from a group of children interacting on a playground. We present a detailed description of system development and implementation before going on to explore methods of characterising social groups based on collective movement recording and analysis. A case study was carried out for a class of 7-8 year old children in their school playground during 10 episodes of unstructured play. A further 10 play episodes were monitored in the same space following the introduction of large, loose play materials. This study design allowed us to observe the effect of an environmental intervention on social movement patterns. Sociometric analysis was conducted for comparison and validation. 
This successful case study demonstrates that sensor based movement data can be used to explore children's social behaviour during naturalistic play.","PeriodicalId":330867,"journal":{"name":"Proceedings of the 5th International Conference on Movement and Computing","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121761384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
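One simple way to characterise social groups from collective position data, offered here only as a hedged illustration of the kind of analysis the abstract describes (not the authors' actual method), is single-linkage grouping: children whose tracked positions fall within a fixed radius of anyone in a group belong to that group.

```python
def social_groups(positions, radius):
    """Cluster 2D positions (metres) into groups via union-find:
    any pair closer than `radius` is linked, and groups are the
    resulting connected components. A simplistic proximity proxy."""
    n = len(positions)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if dx * dx + dy * dy <= radius * radius:
                parent[find(i)] = find(j)  # union the two components

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Five children: two pairs playing together, one child apart
pts = [(0, 0), (1, 0), (10, 10), (11, 10), (50, 50)]
groups = social_groups(pts, radius=2.0)  # [[0, 1], [2, 3], [4]]
```

In practice such grouping would be computed per time window and combined with IMU activity measures before comparing against sociometric data.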