New Interfaces for Musical Expression: Latest Publications

All You Need Is LOD: Levels of Detail in Visual Augmentations for the Audience
New Interfaces for Musical Expression Pub Date: 2020-07-21 DOI: 10.5281/zenodo.4813236
Olivier Capra, Florent Berthaut, L. Grisoni
Abstract: Because they break the physical link between gestures and sound, Digital Musical Instruments offer countless opportunities for musical expression. For the same reason, however, they may hinder the audience experience, making the musician's contribution and expressiveness difficult to perceive. To cope with this issue without altering the instruments themselves, researchers and artists have designed techniques that augment performances with additional information through audio, haptic or visual modalities. These techniques have, however, only been designed to offer a fixed level of information, without taking into account the variety of spectators' expertise and preferences. In this paper, we investigate the design, implementation and effect on audience experience of visual augmentations with a controllable level of detail (LOD). We conduct a controlled experiment with 18 participants, including novices and experts. Our results show contrasts in the impact of LOD on experience and comprehension between experts and novices, and highlight the diversity in how spectators use visual augmentations.
Citations: 5
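The controllable-LOD idea lends itself to a compact illustration: a single parameter decides how many augmentation layers a spectator sees, from a coarse novice view to a detailed expert view. The sketch below is a hypothetical reading of that mechanism; the layer names and thresholds are invented for illustration, not taken from the authors' implementation.

```python
# Minimal sketch of LOD-controlled visual augmentations (hypothetical
# layer names and thresholds; not the paper's actual implementation).

from dataclasses import dataclass

@dataclass
class AugmentationLayer:
    name: str       # what the layer shows to the spectator
    min_lod: float  # smallest LOD value at which the layer is drawn

# Layers ordered from coarse (novice-friendly) to detailed (expert-oriented).
LAYERS = [
    AugmentationLayer("active-instrument highlight", 0.0),
    AugmentationLayer("gesture-to-sound links", 0.3),
    AugmentationLayer("parameter values", 0.6),
    AugmentationLayer("full mapping graph", 0.9),
]

def visible_layers(lod: float) -> list[str]:
    """Return the names of the layers a spectator sees at a given LOD.

    `lod` is clamped to [0, 1]; each spectator can raise or lower it
    to match their expertise and preferences.
    """
    lod = max(0.0, min(1.0, lod))
    return [layer.name for layer in LAYERS if lod >= layer.min_lod]

if __name__ == "__main__":
    for lod in (0.0, 0.5, 1.0):
        print(f"LOD {lod:.1f}: {visible_layers(lod)}")
```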
P(l)aying Attention: Multi-modal, multi-temporal music control
New Interfaces for Musical Expression Pub Date: 2020-07-21 DOI: 10.5281/zenodo.4813303
N. Gold, Chongyang Wang, Temitayo A. Olugbade, N. Berthouze, A. Williams
Abstract: The expressive control of sound and music through body movements is well studied. For some people, body movement is demanding, and although they would prefer to express themselves freely using gestural control, they are unable to use such interfaces without difficulty. In this paper, we present the P(l)aying Attention framework for manipulating recorded music to support these people and to help the therapists who work with them. The aim is to facilitate body awareness, exploration, and expressivity by allowing the manipulation of a pre-recorded ‘ensemble’ through an interpretation of body movement, provided by a machine-learning system trained on physiotherapist assessments and movement data from people with chronic pain. The system considers the nature of a person's movement (e.g. protective) and offers an interpretation in terms of the joint groups that play a major role in that determination at that point in the movement, and to which attention should perhaps be given (or the opposite, at the user's discretion). Using music to convey the interpretation offers informational (through movement sonification) and creative (through manipulating the ensemble by movement) possibilities. The approach offers the opportunity to explore movement and music at multiple timescales and under varying musical aesthetics.
Citations: 0
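One way to picture the framework's creative mode is a mapping from per-joint-group attention weights to the gains of pre-recorded ensemble stems. The following sketch assumes hypothetical joint groups, stem names and weights; the paper's actual interpretation comes from a model trained on physiotherapist assessments and chronic-pain movement data.

```python
# Sketch: turning per-joint-group "attention" weights from a movement
# model into gains for a pre-recorded ensemble. Joint groups, stem
# names, and weights are hypothetical illustrations.

# One audio stem per joint group, so attending to a body region
# foregrounds "its" instrument in the mix.
STEM_FOR_GROUP = {
    "trunk": "cello",
    "arms": "violin",
    "legs": "double bass",
}

def stem_gains(attention: dict[str, float], invert: bool = False) -> dict[str, float]:
    """Map attention weights (which joint groups drive the current
    interpretation) to per-stem gains in [0, 1].

    `invert=True` implements the user's-discretion option of
    emphasising the least implicated joint groups instead.
    """
    total = sum(attention.values()) or 1.0
    gains = {}
    for group, weight in attention.items():
        w = weight / total
        gains[STEM_FOR_GROUP[group]] = (1.0 - w) if invert else w
    return gains

if __name__ == "__main__":
    # e.g. a protective movement largely attributed to the trunk:
    frame_attention = {"trunk": 0.7, "arms": 0.2, "legs": 0.1}
    print(stem_gains(frame_attention))
    print(stem_gains(frame_attention, invert=True))
```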
A Taxonomy of Spectator Experience Augmentation Techniques
New Interfaces for Musical Expression Pub Date: 2020-07-21 DOI: 10.5281/zenodo.4813396
Olivier Capra, Florent Berthaut, L. Grisoni
Abstract: In the context of artistic performances, the complexity and diversity of digital interfaces may impair the spectator experience, in particular by hiding the engagement and virtuosity of the performers. Artists and researchers have attempted to solve this by augmenting performances with additional information provided through visual, haptic or sonic modalities. However, the proposed techniques have not yet been formalized, and we believe a clarification of their many aspects is necessary for future research. In this paper, we propose a taxonomy for what we define as Spectator Experience Augmentation Techniques (SEATs). We use it to analyse existing techniques and demonstrate how it can serve as a basis for the exploration of novel ones.
Citations: 2
A platform for low-latency continuous keyboard sensing and sound generation
New Interfaces for Musical Expression Pub Date: 2020-07-20 DOI: 10.5281/zenodo.4813253
G. Moro, Andrew Mcpherson
Abstract: On several acoustic and electromechanical keyboard instruments, the produced sound does not depend exclusively on a discrete key velocity parameter, and minute gesture details can affect the final sonic result. By contrast, subtle variations in articulation have a relatively limited effect on the sound generation when the keyboard controller uses the MIDI standard, as the vast majority of digital keyboards do. In this paper we present an embedded platform that can generate sound in response to a controller capable of sensing the continuous position of keys on a keyboard. This platform enables the creation of keyboard-based DMIs which allow for a richer set of interaction gestures than would be possible through a MIDI keyboard, which we demonstrate through two example instruments. First, in a Hammond organ emulator, the sensing device makes it possible to recreate the nuances of the interaction with the original instrument in a way a velocity-based MIDI controller could not. Second, a nonlinear waveguide flute synthesizer is shown as an example of the expressive capabilities that a continuous-keyboard controller opens up in the creation of new keyboard-based DMIs.
Citations: 1
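The gap between MIDI's discrete note-on and continuous key sensing can be made concrete with a small sketch: a velocity-based controller collapses the key press into one number, while a continuous sensor delivers a position stream from which speed, depth and pressure-like values can be derived on every frame. All parameter names below are hypothetical; the platform described in the paper streams key position into an embedded sound engine.

```python
# Sketch of the difference between velocity-based MIDI triggering and
# continuous key sensing. Parameter names are hypothetical.

def midi_note_on(velocity: int) -> dict:
    """MIDI reduces the whole key press to one discrete value."""
    return {"amplitude": velocity / 127.0}

def continuous_key_frame(position: float, prev_position: float,
                         dt: float) -> dict:
    """With continuous sensing the synth sees the full key trajectory:
    instantaneous depth and key speed are available on every frame, so
    articulation details between 'up' and 'down' can shape the sound.
    """
    speed = (position - prev_position) / dt  # signed key speed
    return {
        "excitation": max(speed, 0.0),  # how fast the key is moving down
        "damping": 1.0 - position,      # a partially pressed key half-damps
        "pressure": position,           # depth, usable like aftertouch
    }

if __name__ == "__main__":
    print(midi_note_on(100))
    # One 1 ms frame of a slow, partial key press (position in [0, 1]):
    print(continuous_key_frame(0.42, 0.40, dt=0.001))
```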
Examining Temporal Trends and Design Goals of Digital Music Instruments for Education in NIME: A Proposed Taxonomy
New Interfaces for Musical Expression Pub Date: 2020-07-01 DOI: 10.5281/zenodo.4813210
Margarida Pessoa, Cláudio Parauta, Pedro Luís, I. Almeida, Gilberto Bernardes
Abstract: This paper presents an overview of the design principles behind Digital Music Instruments (DMIs) for education across all editions of the International Conference on New Interfaces for Musical Expression (NIME). We compiled a comprehensive catalogue of over a hundred DMIs with varying degrees of applicability in educational practice. Each catalogue entry is annotated according to a proposed taxonomy for DMIs for education, rooted in the mechanics of control, mapping and feedback of an interactive music system, along with the required expertise of target user groups and the instrument's learning curve. Global statistics unpack underlying trends and design goals across the chronological span of the NIME conference. In recent years, we note a growing number of DMIs targeting non-experts, with reduced requirements in terms of expertise. Stemming from the identified trends, we discuss future challenges in the design of DMIs for education towards enhanced degrees of variation and unpredictability.
Citations: 6
ExSampling: a system for the real-time ensemble performance of field-recorded environmental sounds
New Interfaces for Musical Expression Pub Date: 2020-06-17 DOI: 10.5281/zenodo.4813371
Atsuya Kobayashi, Reo Anzai, N. Tokui
Abstract: We propose ExSampling: an integrated system combining a recording application and a Deep Learning environment for real-time musical performance with environmental sounds sampled by field recording. Automated mapping of sounds to Ableton Live tracks by Deep Learning enables field recordings to be used in real-time performance and creates interactions among sound recordists, composers and performers.
Citations: 1
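The described pipeline, classifying a field recording with a Deep Learning model and routing it to a matching Ableton Live track, might be sketched as follows. The class list, dummy classifier and track-sending callback are placeholders, not the actual system's API.

```python
# Sketch of ExSampling-style routing: a classifier labels an incoming
# field recording and the clip is sent to a matching track. Class
# names and callbacks are hypothetical illustrations.

from typing import Callable

# Each environmental-sound class is assigned an Ableton Live track index.
TRACK_FOR_CLASS = {"birds": 0, "water": 1, "wind": 2, "voices": 3}

def route_recording(samples: list[float],
                    classify: Callable[[list[float]], str],
                    send_to_track: Callable[[int, list[float]], None]) -> str:
    """Classify a freshly recorded clip and hand it to the matching track.

    `classify` stands in for the trained Deep Learning model;
    `send_to_track` stands in for the bridge into Ableton Live
    (e.g. an OSC or remote-script connection).
    """
    label = classify(samples)
    send_to_track(TRACK_FOR_CLASS[label], samples)
    return label

if __name__ == "__main__":
    fake_clip = [0.0] * 44100  # one second of silence at 44.1 kHz
    label = route_recording(
        fake_clip,
        classify=lambda s: "water",  # dummy model for the demo
        send_to_track=lambda track, s: print(f"-> track {track}"),
    )
    print(f"classified as: {label}")
```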
The Daïs: A Haptically Enabled New Interface for Musical Expression for Controlling Physical Models for Sound Synthesis
New Interfaces for Musical Expression Pub Date: 2020-06-01 DOI: 10.5281/ZENODO.4813220
P. J. Christensen, Dan Overholt, S. Serafin
Citations: 0
Towards an Interactive Model-Based Sonification of Hand Gesture for Dance Performance
New Interfaces for Musical Expression Pub Date: 2020-06-01 DOI: 10.5281/ZENODO.4813422
James Leonard, A. Giomi
Abstract: This paper presents ongoing research on the interactive sonification of hand gestures in dance performances. For this purpose, we propose a conceptual framework and a multilayered mapping model derived from an experimental case study. The goal of this research is twofold. On the one hand, we aim to determine action-based perceptual invariants that allow us to establish pertinent relations between gesture qualities and sound features. On the other hand, we are interested in analysing how an interactive model-based sonification can provide useful and effective feedback for dance practitioners. From this point of view, our research explicitly addresses the convergence between the scientific understanding provided by the field of movement sonification and the traditional know-how developed over the years within the digital instrument and interaction design communities. A key component of our study is the combination of physically based sound synthesis and motion feature analysis. This approach has proven effective in providing insights for devising novel sonification models for artistic and scientific purposes, and for developing a collaborative platform involving the designer, the musician and the performer.
Citations: 2
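The "motion feature analysis feeding physically based synthesis" component can be illustrated with a minimal sketch: simple kinematic features extracted from a hand-position stream drive the excitation parameters of a placeholder physical model. The feature set and parameter names are hypothetical, not the paper's mapping model.

```python
# Sketch: kinematic features from a 3-D hand trajectory drive a
# (placeholder) bowed-string-like physical model. Features and
# parameter names are hypothetical illustrations.

import math

def motion_features(trajectory: list[tuple[float, float, float]],
                    dt: float) -> dict[str, float]:
    """Compute mean speed and jerkiness over a short window of positions."""
    speeds = []
    for p0, p1 in zip(trajectory, trajectory[1:]):
        speeds.append(math.dist(p0, p1) / dt)
    mean_speed = sum(speeds) / len(speeds)
    # Jerkiness: average change in speed between consecutive frames.
    jerkiness = sum(abs(b - a) for a, b in zip(speeds, speeds[1:])) / max(len(speeds) - 1, 1)
    return {"speed": mean_speed, "jerkiness": jerkiness}

def to_model_inputs(features: dict[str, float]) -> dict[str, float]:
    """Map gesture qualities onto excitation parameters: faster movement
    bows harder, jerkier movement adds noise to the excitation."""
    return {
        "bow_force": min(features["speed"] * 0.5, 1.0),
        "noise_amount": min(features["jerkiness"] * 0.1, 1.0),
    }

if __name__ == "__main__":
    window = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0),
              (0.03, 0.01, 0.0), (0.06, 0.02, 0.0)]
    feats = motion_features(window, dt=0.01)
    print(feats, to_model_inputs(feats))
```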
User-Defined Mappings for Spatial Sound Synthesis
New Interfaces for Musical Expression Pub Date: 2020-06-01 DOI: 10.5281/ZENODO.4813477
Henrik von Coler, Steffen Lepa, S. Weinzierl
Abstract: The presented sound synthesis system allows the individual spatialization of spectral components in real time, using a sinusoidal modeling approach within 3-dimensional sound reproduction systems. A co-developed, dedicated haptic interface is used to jointly control spectral and spatial attributes of the sound. In a user study, participants were asked to create an individual mapping between control parameters of the interface and rendering parameters of sound synthesis and spatialization, using a visual programming environment. The resulting mappings of all participants are evaluated, indicating preferences for particular control parameters in specific tasks. In comparison with the mappings intended by the development team, the results validate certain design decisions and indicate new directions.
Citations: 0
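The core mechanism, giving each sinusoidal component its own spatial position, can be sketched in a stereo toy version using per-partial equal-power panning. This stands in for the 3-dimensional reproduction system used in the paper; all frequencies and azimuths below are made up.

```python
# Sketch of per-partial spatialization: every sinusoidal component
# carries its own position and is panned independently. A stereo toy
# stand-in for the paper's 3-D reproduction system.

import math

def render_partials(partials, duration=0.5, sr=44100):
    """Render sinusoidal partials, each given as (freq_hz, amp, azimuth),
    where azimuth -1.0 = hard left and +1.0 = hard right."""
    n = int(duration * sr)
    left = [0.0] * n
    right = [0.0] * n
    for freq, amp, azimuth in partials:
        pan = (azimuth + 1.0) / 2.0                 # map to [0, 1]
        gain_l = math.cos(pan * math.pi / 2) * amp  # equal-power law
        gain_r = math.sin(pan * math.pi / 2) * amp
        for i in range(n):
            s = math.sin(2 * math.pi * freq * i / sr)
            left[i] += gain_l * s
            right[i] += gain_r * s
    return left, right

if __name__ == "__main__":
    # A harmonic tone whose upper partials are spread across the stage:
    partials = [(220.0, 0.5, 0.0), (440.0, 0.25, -0.7), (660.0, 0.15, 0.7)]
    l, r = render_partials(partials)
    print(f"rendered {len(l)} samples per channel")
```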
SOIL CHOIR v.1.3 - soil moisture sonification installation
New Interfaces for Musical Expression Pub Date: 2020-06-01 DOI: 10.5281/ZENODO.4813226
J. Suchánek
Abstract: Artistic sonification offers a creative method for attaching direct semantic layers to abstract sounds. This paper is dedicated to the sound installation “Soil choir v.1.3”, which sonifies soil moisture at different depths and transforms this non-musical phenomenon into organized sound structures. The sonification of natural soil moisture processes tests the limits of our attention, patience and willingness to perceive ultra-slow reactions, and examines the mechanisms of sensory adaptation. Although the musical time of the installation is set to an almost non-human, environmental time scale (changes happen within hours, days, weeks or even months), the system can also be explored and even played as an instrument by placing sensors in different soil areas or pouring liquid into the soil and waiting for changes. The crucial aspect of the work was designing a sonification architecture that deals with extremely slow changes in the input data, i.e. the measured values from the moisture sensors. The result is a sound installation consisting of three objects, each with a different type of soil. Every object is a compact, independent unit consisting of three low-cost capacitive soil moisture sensors, a 1 m long perspex tube filled with soil, a full-range loudspeaker and a Bela platform running custom SuperCollider code. I developed this installation during 2019, and this paper gives insight into the aspects and issues connected with creating it.
Citations: 2
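The central design issue the abstract names, input data that change extremely slowly, is commonly handled by normalising and heavily smoothing raw readings before mapping them to sound. The sketch below illustrates that approach under assumed sensor ranges; it is not the installation's actual SuperCollider code.

```python
# Sketch of the slow-data problem the installation had to solve: raw
# capacitive moisture readings change over hours, so they are smoothed
# and normalised before driving a synthesis parameter. Sensor ranges
# and the pitch mapping are assumptions, not the installation's code.

class SlowSensorMapper:
    def __init__(self, raw_min, raw_max, smoothing=0.999):
        # `smoothing` close to 1.0 means a very slow response, matching
        # the installation's environmental time scale.
        self.raw_min, self.raw_max = raw_min, raw_max
        self.smoothing = smoothing
        self.state = None

    def update(self, raw):
        """Feed one raw reading; return a smoothed value in [0, 1]."""
        norm = (raw - self.raw_min) / (self.raw_max - self.raw_min)
        norm = max(0.0, min(1.0, norm))
        self.state = norm if self.state is None else (
            self.smoothing * self.state + (1.0 - self.smoothing) * norm)
        return self.state

def moisture_to_freq(moisture):
    """Map wetness to pitch; drier soil sounds lower (a hypothetical choice)."""
    return 80.0 + moisture * 400.0

if __name__ == "__main__":
    mapper = SlowSensorMapper(raw_min=200, raw_max=520)
    for raw in (510, 505, 480, 400):  # readings drifting over hours
        m = mapper.update(raw)
        print(f"raw={raw}  moisture={m:.3f}  freq={moisture_to_freq(m):.1f} Hz")
```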