Proceedings of the 5th International Conference on Movement and Computing: Latest Publications

Coding Movement in Sign Languages: the Typannot Approach
Proceedings of the 5th International Conference on Movement and Computing. Pub Date: 2018-06-28. DOI: 10.1145/3212721.3212808
Authors: Claudia S. Bianchini, Léa Chèvrefils, Claire Danet, Patrick Doan, Morgane Rébulard, Adrien Contesse, D. Boutet
Abstract: Typannot is an innovative transcription system (TranSys) for Sign Languages (SLs), based on robust graphematic and coherent typographic formulas. It is characterized by readability, writability, searchability, genericity and modularity. Typannot can be used to record handshapes, mouth actions, facial expressions, initial locations (LOCini) and movements of the upper limbs (MOV). For LOCini and MOV, Typannot uses intrinsic frames of reference (iFoR) to describe the position of each segment (arm, forearm, hand) in terms of degrees of freedom (DoF). It assumes that motion is subdivided into a complex moment of initial preparation, leading to the stabilization of a LOCini, and a subsequent phase of MOV deployment based on simple motor patterns. The goal of Typannot is not only to create a new TranSys, but also to provide an instrument to advance knowledge about SLs. Observation of SLs makes it possible to formulate various hypotheses, among them: 1) MOV follows a simple motor scheme that aims at minimizing motor control during MOV; 2) proximal→distal flows of MOV are predominant in SLs. Only the use of a TranSys based on iFoR and the description of the DoF makes it possible to explore the data in order to test these hypotheses.
Citations: 4
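To make the iFoR/DoF idea concrete, here is a minimal sketch of a data model for the kind of information the abstract says Typannot records for LOCini and MOV: each upper-limb segment described by degree-of-freedom values in its intrinsic frame of reference. The field names, angle conventions and example values are assumptions for illustration only, not the actual Typannot graphematic formulas.

```python
# Illustrative sketch (assumed schema), not the Typannot transcription format itself.
from dataclasses import dataclass
from typing import List

@dataclass
class SegmentPose:
    segment: str      # "arm", "forearm" or "hand"
    flexion: float    # flexion/extension angle in degrees, in the segment's iFoR
    abduction: float  # abduction/adduction angle in degrees, in the segment's iFoR
    rotation: float   # internal/external rotation in degrees, in the segment's iFoR

@dataclass
class SignMovement:
    loc_ini: List[SegmentPose]  # initial location: one pose per segment
    mov: List[SegmentPose]      # MOV deployment phase: target poses per segment

# Example: a proximal→distal flow where the forearm carries most of the change.
loc = [SegmentPose("arm", 20.0, 10.0, 0.0), SegmentPose("forearm", 90.0, 0.0, 0.0)]
mov = [SegmentPose("arm", 25.0, 10.0, 0.0), SegmentPose("forearm", 30.0, 0.0, 0.0)]
print(SignMovement(loc_ini=loc, mov=mov))
```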
What Quality?: Performing Research on Movement and Computing
Proceedings of the 5th International Conference on Movement and Computing. Pub Date: 2018-06-28. DOI: 10.1145/3212721.3212834
Authors: Jan C. Schacher
Abstract: This article investigates fundamental questions and methodological issues concerning research on movement and computing. Through a process of mapping the various approaches and phases of research in this domain, it attempts to construct a coherent picture and overview of the research field. A series of questions arise that are discussed with the intent of anchoring and directing future research across different disciplines. In order to better apprehend the complexity of movement, gesture, action, and physical performance, and their role as a topic of scientific, scholarly as well as artistic research practices, an extension of the disciplinary and methodological framework is proposed. The juxtaposition of the diverse approaches and goals, and the extension of the research, can indicate novel axes for generating techniques, methods, and ultimately knowledge. Based on this insight, a reflection on the potential of a wider cross-mediating research practice concludes the article.
Citations: 6
Assessing with the head: a motor compatibility effect
Proceedings of the 5th International Conference on Movement and Computing. Pub Date: 2018-06-28. DOI: 10.1145/3212721.3212853
Authors: Stefania Moretti, A. Greco
Abstract: Research within the embodiment perspective has found that cognitive processing proceeds more easily when bodily actions (mostly arm motions) are compatible with the conceptual meaning of verbal expressions (concrete or abstract, or with positive and negative values). Facilitation effects involving head motion, however, have not yet been investigated. The present work aims to test the motor compatibility hypothesis between directional head movements, usually performed to communicate agreement and disagreement, and truth evaluation. Five experiments were designed: participants were asked to assess a series of sentences as true or false, according to their meaning (objectively) or on the basis of personal preferences (subjectively), in compatible and incompatible motion conditions and with different response modalities. Response times were shorter only when true sentences, or sentences about liked content, were moved vertically with the head, and when false sentences, or sentences about disliked content, were moved horizontally. The results confirm the hypothesis that higher cognitive processing is grounded in bodily motion, and shed light on the possibility of manipulating vertical and horizontal head movements in order to reveal attitudes.
Citations: 0
Kinematic predictors for the moving hand illusion
Proceedings of the 5th International Conference on Movement and Computing. Pub Date: 2018-06-28. DOI: 10.1145/3212721.3212841
Authors: O. Perepelkina, G. Arina
Abstract: The sense of body ownership is a result of convergent input from several sensory modalities. Experimental manipulation of different sensory inputs is possible during bodily illusions, which allow the multisensory mechanisms of body representation to be studied. The aim of this research was to investigate motion characteristics of the virtual hand illusion. A novel kinematic analysis for the moving hand illusion was applied. Several motion features (such as jerk, smoothness and velocity) predicted subjective and behavioral measures of the illusion. This result may reflect that subjects with higher motor abilities have better multisensory body representation mechanisms, which are responsible for the ownership illusion.
Citations: 0
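For readers unfamiliar with the motion features named in the abstract (velocity, jerk, smoothness), the following sketch shows one common way to estimate them from sampled hand positions using finite differences. The exact feature definitions used in the paper are not given here; the smoothness convention below (negative log of mean squared jerk) is an assumption chosen for illustration.

```python
# Finite-difference kinematic features from a sampled hand trajectory (assumed definitions).
import numpy as np

def kinematic_features(positions: np.ndarray, fs: float) -> dict:
    """positions: (T, 3) array of hand coordinates; fs: sampling rate in Hz."""
    dt = 1.0 / fs
    vel = np.gradient(positions, dt, axis=0)   # first derivative: velocity
    acc = np.gradient(vel, dt, axis=0)         # second derivative: acceleration
    jerk = np.gradient(acc, dt, axis=0)        # third derivative: jerk
    speed = np.linalg.norm(vel, axis=1)
    jerk_mag = np.linalg.norm(jerk, axis=1)
    # Smaller mean squared jerk = smoother movement; the negative log makes
    # larger values mean smoother motion (one of several possible conventions).
    smoothness = -np.log(np.mean(jerk_mag ** 2) + 1e-12)
    return {"mean_speed": float(speed.mean()),
            "mean_jerk": float(jerk_mag.mean()),
            "smoothness": float(smoothness)}

# Example with a synthetic trajectory sampled at 120 Hz.
t = np.linspace(0, 2, 240)
traj = np.stack([np.sin(t), np.cos(t), 0.1 * t], axis=1)
print(kinematic_features(traj, fs=120.0))
```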
OpenMoves
Proceedings of the 5th International Conference on Movement and Computing. Pub Date: 2018-06-28. DOI: 10.1145/3212721.3212846
Authors: Samir Amin, J. Burke
Abstract: While person-tracking systems can capture very fine-grained, accurate data, the creation of art pieces and interactive experiences making use of captured data often benefits from being able to work with higher-level features. We propose a computational framework for interpreting person-tracking data and publishing the resulting information over a network for use by client applications, with an emphasis on the recognition of patterns of movement, both over time and instantaneously. Our system consists of four modules: tracking instantaneous features, tracking short-time features, and using unsupervised and supervised machine learning techniques to extract features at higher levels of abstraction. Data used by the system is collected using OpenPTrack, an open-source library for person and object tracking geared towards accessibility for the arts and education communities.
Citations: 2
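As a rough illustration of the pipeline shape the abstract describes, the sketch below takes low-level person-tracking updates, derives a simple instantaneous feature (per-track speed), and publishes it over the network for client applications. The input message format (dicts with "id", "x", "y") and the UDP/JSON transport are assumptions for illustration, not the OpenMoves or OpenPTrack API.

```python
# Minimal sketch: tracking updates in, higher-level feature out over the network (assumed formats).
import json
import socket
import time

class InstantaneousFeatures:
    def __init__(self):
        self.last = {}  # track id -> (timestamp, x, y)

    def update(self, track_id, x, y, t):
        """Return instantaneous speed for one track, or None the first time it is seen."""
        speed = None
        if track_id in self.last:
            t0, x0, y0 = self.last[track_id]
            dt = max(t - t0, 1e-6)
            speed = ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 / dt
        self.last[track_id] = (t, x, y)
        return speed

features = InstantaneousFeatures()
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def on_track_update(msg):  # msg: {"id": ..., "x": ..., "y": ...} (assumed shape)
    speed = features.update(msg["id"], msg["x"], msg["y"], time.time())
    if speed is not None:
        payload = {"id": msg["id"], "speed": speed}
        sock.sendto(json.dumps(payload).encode(), ("127.0.0.1", 9000))  # hypothetical client port

on_track_update({"id": 1, "x": 0.0, "y": 0.0})
on_track_update({"id": 1, "x": 0.5, "y": 0.2})
```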
Practice Makes Perfect: Towards Learned Path Planning for Robotic Musicians using Deep Reinforcement Learning
Proceedings of the 5th International Conference on Movement and Computing. Pub Date: 2018-06-28. DOI: 10.1145/3212721.3212839
Authors: Lamtharn Hantrakul, Zachary Kondak, Gil Weinberg
Abstract: When a pianist effortlessly glides across the keyboard during an improvised solo, the musician is executing a series of movements informed by years of practice ingrained with musical knowledge. This paper proposes an analogous approach that enables robotic musicians to learn about their degrees of freedom and physical constraints through "practice" in the form of Deep Reinforcement Learning. We use a Deep Q Network (DQN) to train a virtual agent representing a real 4-armed robotic musician to motion-plan the optimal sequence of movements for a given musical sequence through a learned strategy instead of a search strategy. Early results from our proof-of-concept system demonstrate that DRL can achieve optimal control of a musical agent, learning a form of bi-manual coordination in the process.
Citations: 1
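The paper trains a Deep Q Network; the sketch below substitutes plain tabular Q-learning on a toy two-armed version of the task (decide which arm should strike the next note so that arm travel stays small) purely to illustrate the idea of learning motion planning from a reward signal rather than searching. The toy environment, reward shaping and constants are assumptions for illustration, not the authors' setup.

```python
# Toy stand-in for the learned path-planning idea: which arm should play the next note?
import random
from collections import defaultdict

N_BARS = 8                                  # toy instrument with 8 bar positions
ALPHA, EPS = 0.1, 0.2                       # learning rate, exploration rate
Q = defaultdict(float)                      # Q[(state, action)], default 0.0

for episode in range(50000):
    arms = [random.randrange(N_BARS), random.randrange(N_BARS)]  # left, right arm positions
    for _ in range(8):                                           # an 8-note phrase
        target = random.randrange(N_BARS)
        s = (arms[0], arms[1], target)
        if random.random() < EPS:
            a = random.randrange(2)                              # explore
        else:
            a = max((0, 1), key=lambda x: Q[(s, x)])             # exploit
        r = -abs(arms[a] - target) / (N_BARS - 1)                # penalize arm travel
        Q[(s, a)] += ALPHA * (r - Q[(s, a)])                     # one-step value update
        arms[a] = target                                         # chosen arm moves to the bar

# Greedy policy after training: with arms at bars 0 and 7 and target bar 6,
# the right arm (index 1) should be chosen because it is closer.
print(max((0, 1), key=lambda a: Q[((0, 7, 6), a)]))              # expected: 1
```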
A biofeedback music-sonification system for gait retraining
Proceedings of the 5th International Conference on Movement and Computing. Pub Date: 2018-06-28. DOI: 10.1145/3212721.3212843
Authors: V. Lorenzoni, Pieter-Jan Maes, P. Berghe, D. Clercq, T. D. Bie, M. Leman
Abstract: Auditory feedback is becoming increasingly popular in sports, providing opportunities for monitoring and gait (re)training in ecological environments. We present the design process of a sonification strategy for the modification of running parameters. The sonification provides real-time feedback on performance by introducing distortion into a baseline music track, whose BPM is continuously matched to the runner's cadence. The noise-based continuous feedback was able to significantly alter the mean running cadence in a non-instructed and non-disturbing way, and performed better than standard verbal instructions. Although some participants did not respond effectively to the feedback, a large majority rated the feedback system positively in terms of pleasantness and motivation.
Citations: 5
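A minimal sketch of the two mappings the abstract describes: keeping the music tempo matched to the runner's cadence, and driving a noise-based distortion from the deviation between actual and target cadence. The specific mapping functions, gains and example numbers are assumptions for illustration, not the published system's parameters.

```python
# Cadence-to-music mappings (assumed functions and constants).
def cadence_from_footstrikes(timestamps):
    """Steps per minute estimated from recent footstrike times (seconds)."""
    if len(timestamps) < 2:
        return None
    intervals = [t1 - t0 for t0, t1 in zip(timestamps, timestamps[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

def playback_rate(cadence_spm, track_bpm):
    """Time-stretch ratio so the track's beat follows the runner's steps."""
    return cadence_spm / track_bpm

def noise_level(cadence_spm, target_spm, gain=4.0):
    """Distortion amount in [0, 1], growing with relative cadence error."""
    error = abs(cadence_spm - target_spm) / target_spm
    return min(1.0, gain * error)

strikes = [0.00, 0.36, 0.71, 1.07, 1.42]      # roughly 169 steps per minute
cadence = cadence_from_footstrikes(strikes)
print(round(cadence),
      playback_rate(cadence, track_bpm=170),
      noise_level(cadence, target_spm=175))
```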
Time to Compile
Proceedings of the 5th International Conference on Movement and Computing. Pub Date: 2018-06-28. DOI: 10.1145/3212721.3212888
Authors: Catie Cuan, I. Pakrasi, A. LaViers
Abstract: "Time to Compile" is the result of an extended in-house residency of an artist in a robotics lab. The piece explores the temporal and spatial dislocations enabled by digital technology and the internet, and plays with human responses to articulated machines (robots) in that setting. The audience journeys through a suspended, disparate landscape that aims to reconcile these responses to technology and machines. This proposal offers to bring an excerpt of the piece, a live dance performance surrounded by videos of robots created in the lab, to MOCO. Additionally, an interactive installation could be produced if MOCO has the timing bandwidth to offer this more involved setup.
Citations: 8
A Conceptual Framework for Creating and Analyzing Dance Learning Digital Content
Proceedings of the 5th International Conference on Movement and Computing. Pub Date: 2018-06-28. DOI: 10.1145/3212721.3212837
Authors: K. E. Raheb, Sarah Whatley, A. Camurri
Abstract: As they are mainly based on bodily experiences and embodied knowledge, dance and movement practices present great diversity and complexity across genres and contexts. Thus, developing a conceptual framework for archiving, managing, curating and analysing movement data, in order to develop reusable datasets and algorithms for a variety of purposes, remains a challenge. In this work, based on the relevant literature on movement representation and existing systems such as Laban Movement Analysis, as well as on work with dance experts through workshops, focus groups, and interviews, we propose a conceptual framework for creating and analysing dance learning content. The conceptual framework has been developed within an interdisciplinary project that brings together technology and human-computer interaction researchers, computer science engineers, motion capture experts from industry and academia, and dance experts with backgrounds in four different dance genres: contemporary, ballet, Greek folk, and flamenco. The framework has been applied: a) as guidance to systematically create a movement library with multimodal recordings for dance education, covering the four dance genres; b) as the basis for developing controlled vocabularies of dance for manual and automated annotation; and c) as the conceptual framework to define the requirements for similarity search and feature extraction.
Citations: 18
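To give a concrete feel for what a controlled vocabulary for annotation could look like in practice, here is a minimal sketch of an annotation record tied to a small vocabulary and a time span on a multimodal recording. The vocabulary terms, field names and validation rules are assumptions for illustration only, not the project's actual schema.

```python
# Illustrative annotation record against a toy controlled vocabulary (assumed schema).
from dataclasses import dataclass

GENRES = {"contemporary", "ballet", "greek_folk", "flamenco"}
MOVEMENT_TERMS = {"plie", "releve", "turn", "jump", "step"}   # toy vocabulary

@dataclass
class Annotation:
    recording_id: str   # identifier of the multimodal recording
    genre: str          # one of GENRES
    term: str           # one of MOVEMENT_TERMS
    start_s: float      # annotation start time in seconds
    end_s: float        # annotation end time in seconds

    def __post_init__(self):
        if self.genre not in GENRES:
            raise ValueError(f"unknown genre: {self.genre}")
        if self.term not in MOVEMENT_TERMS:
            raise ValueError(f"term not in controlled vocabulary: {self.term}")

print(Annotation("rec_042", "ballet", "plie", start_s=12.4, end_s=14.1))
```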
The lexicon of the Conductor's gaze
Proceedings of the 5th International Conference on Movement and Computing. Pub Date: 2018-06-28. DOI: 10.1145/3212721.3212811
Authors: I. Poggi, Alessandro Ansani
Abstract: This work presents two studies investigating the existence of a lexicon of gaze in conducting, and its possibly different mastery in musicians and laypeople. An observational qualitative study singled out 17 items of gaze used by conductors in music rehearsal and concert, conveying interactional, affective and musical meanings to musicians in the ensemble and exploiting four semiotic devices: the conductor may use the same gaze types as laypeople with the same meaning (generic codified), or with a meaning more specific to musical performance (specific codified), as well as directly or indirectly iconic gaze items. In a subsequent perceptual study, 8 of the gaze items were submitted to 177 subjects, both musicians and laypeople, who were asked to interpret their meanings through open and closed questions. Results show that some gaze items, especially those conveying intensity (piano, forte) and other technical indications (high note, attack), are fairly well recognized; yet, no significant differences emerge between expert and naïve subjects. Gaze thus constitutes a lexicon in music performance as well, and exploits the same semiotic devices as gaze in everyday life.
Citations: 10