Proceedings of the 3rd International Symposium on Movement and Computing: Latest Publications

A serious games platform for validating sonification of human full-body movement qualities
Proceedings of the 3rd International Symposium on Movement and Computing · Pub Date: 2016-07-05 · DOI: 10.1145/2948910.2948962
Authors: Ksenia Kolykhalova, Paolo Alborno, A. Camurri, G. Volpe
Abstract: In this paper we describe a serious games platform for validating the sonification of human full-body movement qualities. The platform supports the design and development of serious games aimed at validating (i) our techniques for measuring expressive movement qualities, and (ii) the mapping strategies that translate such qualities into the auditory domain by means of interactive sonification and active music experience. The platform is part of a broader framework developed in the context of the EU ICT H2020 DANCE "Dancing in the dark" Project n.645553, which aims at enabling visually impaired people to perceive nonverbal artistic whole-body experiences.
Citations: 11
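The sonification the abstract describes maps measured movement qualities to auditory parameters. A minimal sketch of one such mapping, assuming a hypothetical "energy" quality (mean joint speed) and illustrative pitch/loudness ranges that are not taken from the paper:

```python
# Hypothetical sketch of an interactive-sonification mapping: a full-body
# "energy" quality (mean joint speed between two frames) is translated to
# a MIDI-style pitch and an amplitude. Ranges are illustrative only.

def movement_energy(frames, dt):
    """Mean joint speed between two consecutive frames.

    frames: a pair (prev, curr), each a list of (x, y, z) joint positions.
    dt: time between the frames, in seconds.
    """
    prev, curr = frames
    speeds = [
        (sum((c - p) ** 2 for c, p in zip(cj, pj)) ** 0.5) / dt
        for pj, cj in zip(prev, curr)
    ]
    return sum(speeds) / len(speeds)


def energy_to_sound(energy, max_energy=5.0):
    """Map energy, clamped to [0, max_energy], to pitch 48-84 and amplitude 0.2-1.0."""
    e = max(0.0, min(1.0, energy / max_energy))
    pitch = 48 + round(e * 36)   # higher energy -> higher pitch
    amplitude = 0.2 + 0.8 * e    # higher energy -> louder
    return pitch, amplitude
```

In a real pipeline the (pitch, amplitude) pair would drive a synthesizer in real time; here the mapping is kept as pure arithmetic so it can be tested in isolation.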
Fingers gestures early-recognition with a unified framework for RGB or depth camera
Proceedings of the 3rd International Symposium on Movement and Computing · Pub Date: 2016-07-05 · DOI: 10.1145/2948910.2948947
Authors: S. Manitsaris, A. Tsagaris, A. Glushkova, F. Moutarde, Frédéric Bevilacqua
Abstract: This paper presents a unified-framework computer vision approach for finger gesture early recognition and interaction that can be applied to sequences of either RGB or depth images without any supervised skeleton extraction. Either RGB or time-of-flight cameras can be used to capture finger motions. Hand detection is based on a skin color model for color images or on distance slicing for depth images. A single hand model is used for finger detection and identification. Static patterns (fingerings) and dynamic patterns (sequences and/or combinations of fingerings) can be early-recognized with a one-shot learning approach using modified Hidden Markov Models. Recognition accuracy is evaluated in two different applications: musical and robotic interaction. In the first case, standardized basic piano-like finger gestures (ascending/descending scales, ascending/descending arpeggios) are used to evaluate the performance of the system. In the second case, both standardized and user-defined gestures (driving, waypoints, etc.) are recognized and used to interactively control an automated guided vehicle.
Citations: 1
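The early-recognition step can be illustrated with standard discrete HMMs: score the partial observation sequence against each gesture model with the forward algorithm and commit as soon as one model dominates. This is a generic sketch, not the authors' modified HMM; the models, observation symbols, and decision margin below are invented for illustration.

```python
import math

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a (possibly partial) discrete observation sequence
    under an HMM with initial probs pi, transition matrix A, emission matrix B."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
            for t in range(n)
        ]
    return math.log(sum(alpha))


def early_recognize(partial_obs, models, margin=1.0):
    """Score the partial sequence under every gesture model; return the best
    label once it beats the runner-up by `margin` nats, else None (keep observing)."""
    scored = sorted(
        ((forward_loglik(partial_obs, *m), name) for name, m in models.items()),
        reverse=True,
    )
    (best, name), (second, _) = scored[0], scored[1]
    return name if best - second >= margin else None
```

With two toy gesture models whose emissions favor different symbols, a short prefix that is consistent with only one model is recognized before the gesture completes, while an ambiguous prefix returns None.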
MoComp: A Tool for Comparative Visualization between Takes of Motion Capture Data
Proceedings of the 3rd International Symposium on Movement and Computing · Pub Date: 2016-07-05 · DOI: 10.1145/2948910.2948932
Authors: Carl Malmstrom, Yaying Zhang, Philippe Pasquier, T. Schiphorst, L. Bartram
Abstract: We present MoComp, an interactive visualization tool that allows users to identify and understand differences in motion between two takes of motion capture data. In MoComp, body part position and motion are visualized with a focus on the angles of the joints making up each body part. This makes the tool useful for between-take and even between-subject comparison of particular movements, since the angle data is independent of the size of the captured subject.
Citations: 7
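The subject-size invariance noted in the abstract follows from comparing joint angles rather than raw positions: an angle depends only on segment directions, not limb lengths. A minimal sketch, assuming three 3D marker positions around each joint:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, between segments b->a and b->c.

    Because the angle depends only on the directions of the two segments,
    it is unchanged when a subject's limbs are longer or shorter -- the
    property that makes angle data comparable across captured subjects.
    """
    u = [ai - bi for ai, bi in zip(a, b)]
    v = [ci - bi for ci, bi in zip(c, b)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    # Clamp to guard against rounding drift just outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))
```

Scaling all marker positions of one subject by any factor leaves every joint angle unchanged, so two takes of different performers can be compared frame by frame on the same angle curves.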
The i-Treasures Intangible Cultural Heritage dataset
Proceedings of the 3rd International Symposium on Movement and Computing · Pub Date: 2016-07-05 · DOI: 10.1145/2948910.2948944
Authors: N. Grammalidis, K. Dimitropoulos, F. Tsalakanidou, A. Kitsikidis, P. Roussel-Ragot, B. Denby, P. Chawah, L. Buchman, S. Dupont, S. Laraba, B. Picart, M. Tits, J. Tilmanne, S. Hadjidimitriou, L. Hadjileontiadis, V. Charisis, C. Volioti, A. Stergiaki, A. Manitsaris, Odysseas Bouzos, S. Manitsaris
Abstract: In this paper, we introduce the i-Treasures Intangible Cultural Heritage (ICH) dataset, a freely available collection of multimodal data captured from different forms of rare ICH. More specifically, the dataset contains video, audio, depth, motion capture data and other modalities, such as EEG or ultrasound data. It also includes manual annotations of the data, while in some cases additional features and metadata are provided, extracted using algorithms and modules developed within the i-Treasures project. We describe the creation process (sensors, capture setups and modules used), the dataset content and the associated annotations. An attractive feature of this ICH database is that it is the first of its kind, providing annotated multimodal data for a wide range of rare ICH types. Finally, some conclusions are drawn and the future development of the dataset is discussed.
Citations: 17
Extending Methods of Composition and Performance for Live Media Art Through Markerless Voice and Movement Interfaces: An Artist Perspective
Proceedings of the 3rd International Symposium on Movement and Computing · Pub Date: 2016-07-05 · DOI: 10.1145/2948910.2948920
Authors: Vesna Petresin
Abstract: Transmediation of movement, body data and sound to morphogenetic processes links trigger and response off-screen, and moves away from wearable tracking devices towards gesture and AI. A workflow for composing and designing with movement and voice for media opera can be developed within a single workspace implementing principles of cross-modal perception and particle simulations in animation software, as demonstrated through case studies of experimental practice using 3D film, light, voice, soundscapes and movement to compose and modulate the artistic experience in real time.
Citations: 0
Presenting a Performative Presence: materializing movement data for the design of digital interactions
Proceedings of the 3rd International Symposium on Movement and Computing · Pub Date: 2016-07-05 · DOI: 10.1145/2948910.2948911
Authors: Lise Amy Hansen
Abstract: This paper makes a case for exploring embodied annotation in real time for the study of movement data in interaction design. The paper argues for the critical role played by agency in the performed and lived movement of an interaction; this agency stems from internal perceptions in relation to the external structural consequences of moving. In particular, the creative handling or materialization of movement data requires boundaries for which movements are made to matter and which are not. I discuss some concerns and considerations in modeling digital movement through enactments exploring kinesthesia.
Citations: 0
Perspectives on Real-time Computation of Movement Coarticulation
Proceedings of the 3rd International Symposium on Movement and Computing · Pub Date: 2016-07-05 · DOI: 10.1145/2948910.2948956
Authors: Frédéric Bevilacqua, Baptiste Caramiaux, Jules Françoise
Abstract: We discuss the notion of movement coarticulation, which has been studied in several fields such as motor control, music performance and animation. In gesture recognition, movement coarticulation is generally viewed as a transition between "gestures" that can be problematic. We propose here to account for movement coarticulation as an informative element of skilled practice and to explore computational modeling of coarticulation. We show that established probabilistic models need to be extended to accurately take movement coarticulation into account, and we propose research questions towards that goal.
Citations: 6
Towards the design of augmented feedforward and feedback for sensorimotor learning of motor skills
Proceedings of the 3rd International Symposium on Movement and Computing · Pub Date: 2016-07-05 · DOI: 10.1145/2948910.2948959
Authors: Paraskevi Kritopoulou, S. Manitsaris, F. Moutarde
Abstract: Creating a digital metaphor of the "in person" transmission of manual-crafting motor skills is an extremely complicated and challenging task. We aim to achieve this by creating a mixed reality environment, supported by an interactive system for sensorimotor learning that relies on pathing techniques. The gestural instruction of a person, the Learner, arises from the reference gesture of an Expert. The concept of the system is based on the simple idea of guiding with the projection of a path depicting a gesture, in 2D space and in real time. The path is projected either as feedforward, describing the gesture to be executed next, or as feedback, amending the gesture while taking into account the time needed to correct the mistake. This projection takes place in the exact area where the object lies and the Learner is being trained, to avoid any distraction from the crafting task.
Citations: 6
m+m: A novel Middleware for Distributed, Movement based Interactive Multimedia Systems
Proceedings of the 3rd International Symposium on Movement and Computing · Pub Date: 2016-07-05 · DOI: 10.1145/2948910.2948942
Authors: Ulysses Bernardet, Dhruv Adhia, Norman Jaffe, Johnty Wang, Michael Nixon, Omid Alemi, J. Phillips, S. DiPaola, Philippe Pasquier, T. Schiphorst
Abstract: Embodied interaction has the potential to provide users with uniquely engaging and meaningful experiences. m+m: Movement + Meaning middleware is an open source software framework that enables users to construct real-time, interactive systems that are based on movement data. The acquisition, processing, and rendering of movement data can be local or distributed, real-time or off-line. Key features of the m+m middleware are a small footprint in terms of computational resources, portability between different platforms, and high performance in terms of reduced latency and increased bandwidth. Examples of systems that can be built with m+m as the internal communication middleware include those for the semantic interpretation of human movement data, machine-learning models for movement recognition, and the mapping of movement data as a controller for online navigation, collaboration, and distributed performance.
Citations: 7
The Dancer in the Eye: Towards a Multi-Layered Computational Framework of Qualities in Movement
Proceedings of the 3rd International Symposium on Movement and Computing · Pub Date: 2016-07-05 · DOI: 10.1145/2948910.2948927
Authors: A. Camurri, G. Volpe, Stefano Piana, M. Mancini, Radoslaw Niewiadomski, Nicola Ferrari, C. Canepa
Abstract: This paper presents a conceptual framework for the analysis of expressive qualities of movement. Our perspective is to model an observer of a dance performance. The conceptual framework is made of four layers, ranging from the physical signals that sensors capture to the qualities that movement communicates (e.g., in terms of emotions). The framework aims to provide a conceptual background that the development of computational systems can build upon, with particular reference to systems that analyze a vocabulary of expressive movement qualities and translate them to other sensory channels, such as the auditory modality. Such systems enable their users to "listen to a choreography" or to "feel a ballet", in a new kind of cross-modal mediated experience.
Citations: 66