New Interfaces for Musical Expression: Latest Publications

Inexpensive Colour Tracking to Overcome Performer ID Loss
New Interfaces for Musical Expression · Pub Date: 2020-06-01 · DOI: 10.5281/ZENODO.4813245
Bob Pritchard, I. Lavery
{"title":"Inexpensive Colour Tracking to Overcome Performer ID Loss","authors":"Bob Pritchard, I. Lavery","doi":"10.5281/ZENODO.4813245","DOIUrl":"https://doi.org/10.5281/ZENODO.4813245","url":null,"abstract":"The NuiTrack IDE supports writing code for an active infrared camera to track up to six bodies, with up to 25 target points on each person. The system automatically assigns IDs to performers/users as they enter the tracking area, but when occlusion of a performer occurs, or when a user exits and then re-enters the tracking area, upon rediscovery of the user the system generates a new tracking ID. Because of this any assigned and registered target tracking points for specific users are lost, as are the linked abilities of that performer to control media based on their movements. We describe a single camera system for overcoming this problem by assigning IDs based on the colours worn by the performers, and then using the colour tracking for updating and confirming identification when the performer reappears after occlusion or upon re-entry. A video link is supplied showing the system used for an interactive dance work with four dancers controlling individual audio tracks.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"575 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114845320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
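
The paper itself includes no code, but the colour-matching idea can be sketched briefly. The Python fragment below is an illustration using OpenCV, with hypothetical function names and a synthetic frame rather than the authors' implementation: it registers each performer's mean costume colour in HSV and matches a re-discovered body against the registered colours.

```python
# Illustrative sketch of colour-based performer re-identification
# (not the authors' code): register each performer's costume colour,
# then match a re-discovered body to the closest registered colour.
import cv2
import numpy as np

registered = {}  # performer name -> mean HSV of their costume

def register_performer(name, frame_bgr, roi):
    """Store the mean HSV colour inside roi = (x, y, w, h)."""
    x, y, w, h = roi
    patch = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    registered[name] = patch.reshape(-1, 3).mean(axis=0)

def identify(frame_bgr, roi, max_dist=40.0):
    """Match a newly discovered body's costume colour to a registered ID."""
    x, y, w, h = roi
    patch = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    mean_hsv = patch.reshape(-1, 3).mean(axis=0)
    best, best_d = None, max_dist
    for name, ref in registered.items():
        d = np.linalg.norm(mean_hsv - ref)
        if d < best_d:
            best, best_d = name, d
    return best  # None when no registered colour is close enough

# Synthetic demo: one performer wearing blue.
frame = np.zeros((120, 160, 3), np.uint8)
frame[:, :80] = (200, 30, 30)                 # OpenCV uses BGR order
register_performer("dancer1", frame, (10, 10, 40, 40))
print(identify(frame, (20, 20, 40, 40)))      # -> dancer1
```

The distance threshold guards against assigning an ID when no registered colour is close, mirroring the paper's goal of confirming identity only when the colour evidence supports it.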
Brainwaves-driven Effects Automation in Musical Performance
New Interfaces for Musical Expression · Pub Date: 2020-06-01 · DOI: 10.5281/ZENODO.4813180
Giorgos Filandrianos, Natalia Kotsani, Edmund Dervakos, G. Stamou, Vaios Amprazis, Panagiotis Kiourtzoglou
{"title":"Brainwaves-driven Effects Automation in Musical Performance","authors":"Giorgos Filandrianos, Natalia Kotsani, Edmund Dervakos, G. Stamou, Vaios Amprazis, Panagiotis Kiourtzoglou","doi":"10.5281/ZENODO.4813180","DOIUrl":"https://doi.org/10.5281/ZENODO.4813180","url":null,"abstract":"A variety of controllers with multifarious sensors and functions have maximized the real time performers control capa-bilities. The idea behind this project was to create an inter-face which enables the interaction between the performers and the effect processor measuring their brain waves am-plitudes, e.g., alpha, beta, theta, delta and gamma, not necessarily with the user’s awareness. We achieved this by using an electroencephalography (EEG) sensor for detecting performer’s different emotional states and, based on these, sending midi messages for digital processing units automation. The aim is to create a new generation of digital processor units that could be automatically configured in real-time given the emotions or thoughts of the performer or the audience. By introducing emotional state information in the real time control of several aspects of artistic expression, we highlight the impact of surprise and uniqueness in the artistic performance.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127201146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
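
As a rough illustration of the signal path described above (EEG band amplitudes driving MIDI automation), the sketch below computes band powers with an FFT and scales relative alpha power into a 0-127 controller value. The sampling rate, band edges, and scaling are assumptions, and the stand-in data is random noise rather than real EEG.

```python
# Illustrative sketch, not the authors' implementation: estimate EEG
# band amplitudes via FFT and map relative alpha power to a MIDI CC value.
import numpy as np

FS = 256  # assumed EEG sensor sampling rate, in Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(samples):
    """Mean spectral power of each EEG band for a 1-D sample window."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples)))) ** 2
    freqs = np.fft.rfftfreq(len(samples), 1.0 / FS)
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def alpha_to_cc(samples):
    """Scale relative alpha power into a 0-127 MIDI controller value."""
    p = band_powers(samples)
    rel = p["alpha"] / sum(p.values())
    return int(np.clip(rel * 4 * 127, 0, 127))  # crude scaling assumption

window = np.random.randn(FS * 2)  # stand-in for two seconds of EEG
print("CC value:", alpha_to_cc(window))
```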
Support System for Improvisational Ensemble Based on Long Short-Term Memory Using Smartphone Sensor
New Interfaces for Musical Expression · Pub Date: 2020-06-01 · DOI: 10.5281/ZENODO.4813434
Haruya Takase, Shun Shiramatsu
{"title":"Support System for Improvisational Ensemble Based on Long Short-Term Memory Using Smartphone Sensor","authors":"Haruya Takase, Shun Shiramatsu","doi":"10.5281/ZENODO.4813434","DOIUrl":"https://doi.org/10.5281/ZENODO.4813434","url":null,"abstract":"Our goal is to develop an improvisational ensemble support system for music beginners who do not have knowledge of chord progressions and do not have enough experience of playing an instrument. We hypothesized that a music beginner cannot determine tonal pitches of melody over a particular chord but can use body movements to specify the pitch contour (i.e., melodic outline) and the attack timings (i.e., rhythm). We aim to realize a performance interface for supporting expressing intuitive pitch contour and attack timings using body motion and outputting harmonious pitches over the chord progression of the background music. Since the intended users of this system are not limited to people with music experience, we plan to develop a system that uses Android smartphones, which many people have. Our system consists of three modules: a module for specifying attack timing using smartphone sensors, module for estimating the vertical movement of the smartphone using smartphone sensors, and module for estimating the sound height using smartphone vertical movement and background chord progression. Each estimation module is developed using long short-term memory (LSTM), which is often used to estimate time series data. We conduct evaluation experiments for each module. As a result, the attack timing estimation had zero misjudgments, and the mean error time of the estimated attack timing was smaller than the sensor-acquisition interval. The accuracy of the vertical motion estimation was 64%, and that of the pitch estimation was 7.6%. The results indicate that the attack timing is accurate enough, but the vertical motion estimation and the pitch estimation need to be improved for actual use.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115420829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
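
A minimal sketch of what one of the three LSTM modules might look like, assuming a PyTorch implementation; the architecture, window length, and feature set here are illustrative guesses, not the authors' model.

```python
# Assumed architecture sketch: an LSTM that reads smartphone
# accelerometer windows and predicts a pitch class over the current chord.
import torch
import torch.nn as nn

class PitchLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=32, n_pitches=12):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_pitches)

    def forward(self, x):              # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # classify from the last time step

model = PitchLSTM()
window = torch.randn(1, 50, 3)         # 50 sensor frames of (ax, ay, az)
logits = model(window)
print("predicted pitch class:", logits.argmax(dim=1).item())
```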
Semi-Automated Mappings for Object-Manipulating Gestural Control of Electronic Music
New Interfaces for Musical Expression · Pub Date: 2020-06-01 · DOI: 10.5281/ZENODO.4813232
Virginia de las Pozas
{"title":"Semi-Automated Mappings for Object-Manipulating Gestural Control of Electronic Music","authors":"Virginia de las Pozas","doi":"10.5281/ZENODO.4813232","DOIUrl":"https://doi.org/10.5281/ZENODO.4813232","url":null,"abstract":"This paper describes a system for automating the generation of mapping schemes between human interaction with extramusical objects and electronic dance music. These mappings are determined through the comparison of sensor input to a synthesized matrix of sequenced audio. The goal of the system is to facilitate live performances that feature quotidian objects in the place of traditional musical instruments. The practical and artistic applications of musical control with quotidian objects is discussed. The associated object-manipulating gesture vocabularies are mapped to musical output so that the objects themselves may be perceived as DMIs. This strategy is used in a performance to explore the liveness qualities of the system.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122448789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
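
The mapping-generation step can be pictured with a small sketch: correlate a gesture's sensor feature stream against each row of a matrix of sequenced audio features, then bind the gesture to the best-matching sequence. The similarity measure and data shapes below are assumptions for illustration only.

```python
# Hedged sketch of the mapping idea as described in the abstract:
# pick the sequenced-audio row most correlated with the sensor stream.
import numpy as np

def best_mapping(sensor_stream, audio_matrix):
    """Index of the audio sequence most correlated with the gesture."""
    scores = [abs(np.corrcoef(sensor_stream, row)[0, 1])
              for row in audio_matrix]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
gesture = rng.standard_normal(64)            # e.g. accelerometer magnitude
sequences = rng.standard_normal((8, 64))     # 8 candidate audio feature rows
print("map gesture to sequence", best_mapping(gesture, sequences))
```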
Creating an Online Ensemble for Home Based Disabled Musicians: Disabled Access and Universal Design - why disabled people must be at the heart of developing technology
New Interfaces for Musical Expression · Pub Date: 2020-06-01 · DOI: 10.5281/ZENODO.4813266
Amble Skuse, Shelly Knotts
{"title":"Creating an Online Ensemble for Home Based Disabled Musicians: Disabled Access and Universal Design - why disabled people must be at the heart of developing technology","authors":"Amble Skuse, Shelly Knotts","doi":"10.5281/ZENODO.4813266","DOIUrl":"https://doi.org/10.5281/ZENODO.4813266","url":null,"abstract":"","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114196185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Animation, Sonification, and Fluid-Time: A Visual-Audioizer Prototype
New Interfaces for Musical Expression · Pub Date: 2020-06-01 · DOI: 10.5281/ZENODO.4813230
Taylor Olsen
{"title":"Animation, Sonification, and Fluid-Time: A Visual-Audioizer Prototype","authors":"Taylor Olsen","doi":"10.5281/ZENODO.4813230","DOIUrl":"https://doi.org/10.5281/ZENODO.4813230","url":null,"abstract":"The visual-audioizer is a patch created in Max in which the concept of fluid-time animation techniques, in tandem with basic computer vision tracking methods, can be used as a tool to allow the visual time-based media artist to create music. Visual aspects relating to the animator’s knowledge of motion, animated loops, and auditory synchronization derived from computer vision tracking methods, allow an immediate connection between the generated audio derived from visuals—becoming a new way to experience and create audio-visual media. A conceptual overview, comparisons of past/current audio-visual contributors, and a summary of the Max patch will be discussed. The novelty of practice-based animation methods in the field of musical expression, considerations of utilizing the visual-audioizer, and the future of fluid-time animation techniques as a tool of musical creativity will also be addressed.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126188564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
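
The original visual-audioizer is a Max patch, so the following Python sketch only approximates the concept under stated assumptions: the position of the brightest region in each animation frame drives an oscillator frequency, so motion in a loop traces a melody.

```python
# Approximation of the visual-audioizer idea (the original is a Max
# patch): map the brightest row of each frame to a pitch, top = high.
import numpy as np

def frame_to_freq(frame, f_lo=110.0, f_hi=880.0):
    """Map the brightest row of a grayscale frame to a frequency in Hz."""
    row = int(np.argmax(frame.mean(axis=1)))   # brightest row index
    pos = 1.0 - row / (frame.shape[0] - 1)     # top of frame = high pitch
    return f_lo * (f_hi / f_lo) ** pos         # exponential pitch scale

frames = []
for k in range(8):                             # a bright bar moving downward
    f = np.zeros((64, 64))
    f[k * 8] = 255.0
    frames.append(f)

print([round(frame_to_freq(f), 1) for f in frames])  # descending melody
```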
A Dimension Space for the Evaluation of Accessible Digital Musical Instruments
New Interfaces for Musical Expression · Pub Date: 2020-06-01 · DOI: 10.5281/ZENODO.4813326
Nicola Davanzo, F. Avanzini
{"title":"A Dimension Space for the Evaluation of Accessible Digital Musical Instruments","authors":"Nicola Davanzo, F. Avanzini","doi":"10.5281/ZENODO.4813326","DOIUrl":"https://doi.org/10.5281/ZENODO.4813326","url":null,"abstract":"Research on Accessible Digital Musical Instruments (ADMIs) has received growing attention over the past decades, carving out an increasingly large space in the literature. Despite the recent publication of state-of-the-art review works, there are still few systematic studies on ADMIs design analysis. In this paper we propose a formal tool to explore the main design aspects of ADMIs based on Dimension Space Analysis, a well established methodology in the NIME literature which allows to generate an effective visual representation of the design space. We therefore propose a set of relevant dimensions, which are based both on categories proposed in recent works in the research context, and on original contributions. We then proceed to demonstrate its applicability by selecting a set of relevant case studies, and analyzing a sample set of ADMIs found in the literature.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130055732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
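
A dimension-space plot of this kind is commonly rendered as a radar chart. The sketch below uses matplotlib with example axes in the spirit of the paper; the dimension names and scores are placeholders, not the paper's actual dimension set.

```python
# Placeholder radar-chart rendering of a dimension space; the axes and
# ratings here are invented for illustration, not taken from the paper.
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
import numpy as np

dims = ["Physical effort", "Degrees of freedom", "Adaptability",
        "Feedback modalities", "Required accuracy"]
scores = [2, 4, 5, 3, 1]                     # one ADMI rated 0-5 per axis

angles = np.linspace(0, 2 * np.pi, len(dims), endpoint=False).tolist()
angles += angles[:1]                         # close the polygon
values = scores + scores[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dims, fontsize=8)
plt.savefig("admi_dimension_space.png", dpi=150, bbox_inches="tight")
```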
Biophysiologically synchronous computer generated music improves performance and reduces perceived effort in trail runners
New Interfaces for Musical Expression · Pub Date: 2020-06-01 · DOI: 10.5281/ZENODO.4813174
Duncan A. H. Williams, B. Fazenda, V. Williamson, György Fazekas
{"title":"Biophysiologically synchronous computer generated music improves performance and reduces perceived effort in trail runners","authors":"Duncan A. H. Williams, B. Fazenda, V. Williamson, György Fazekas","doi":"10.5281/ZENODO.4813174","DOIUrl":"https://doi.org/10.5281/ZENODO.4813174","url":null,"abstract":"Music has previously been shown to be beneficial in improving runners performance in treadmill based experiments. This paper evaluates a generative music system, HEARTBEATS, designed to create biosignal synchronous music in real-time according to an individual athlete’s heart-rate or cadence (steps per minute). The tempo, melody, and timbral features of the generated music are modulated according to biosensor input from each runner using a wearable Bluetooth sensor. We compare the relative performance of athletes listening to heart-rate and cadence synchronous music, across a randomized trial (N=57) on a trail course with 76ft of elevation. Participants were instructed to continue until perceived effort went beyond an 18 using the Borg rating of perceived exertion scale. We found that cadence-synchronous music improved performance and decreased perceived effort in male runners, and improved performance but not perceived effort in female runners, in comparison to heart-rate synchronous music. This work has implications for the future design and implementation of novel portable music systems and in music-assisted coaching.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130920990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
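
The synchronisation principle can be illustrated with a one-step update rule: smooth the incoming cadence or heart-rate reading and move the music tempo toward it. HEARTBEATS' actual mapping of tempo, melody, and timbre is richer; the smoothing factor and tempo clamp below are assumptions.

```python
# Sketch of biosignal-synchronous tempo tracking (assumed parameters,
# not the HEARTBEATS implementation).
def next_tempo(current_bpm, sensor_bpm, alpha=0.2, lo=60.0, hi=180.0):
    """One update step: exponentially approach the runner's cadence or
    heart rate, clamped to a musically usable tempo range."""
    target = min(max(sensor_bpm, lo), hi)
    return current_bpm + alpha * (target - current_bpm)

tempo = 120.0
for cadence in [150, 152, 158, 160, 161]:    # steps per minute from sensor
    tempo = next_tempo(tempo, cadence)
    print(round(tempo, 1))                   # tempo eases toward cadence
```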
SnoeSky and SonicDive - Design and Evaluation of Two Accessible Digital Musical Instruments for a SEN School
New Interfaces for Musical Expression · Pub Date: 2020-06-01 · DOI: 10.5281/ZENODO.4813243
Andreas Förster, Christina Komesker, Norbert Schnell
{"title":"SnoeSky and SonicDive - Design and Evaluation of Two Accessible Digital Musical Instruments for a SEN School","authors":"Andreas Förster, Christina Komesker, Norbert Schnell","doi":"10.5281/ZENODO.4813243","DOIUrl":"https://doi.org/10.5281/ZENODO.4813243","url":null,"abstract":"Music technology can provide persons who experience physical and/or intellectual barriers using traditional musical instruments with a unique access to active music making. This applies particularly but not exclusively to the so-called group of people with physical and/or mental disabilities. This paper presents two Accessible Digital Musical Instruments (ADMIs) that were specifically designed for the students of a Special Educational Needs (SEN) school with a focus on intellectual disabilities. With SnoeSky, we present an interactive installation in form of a starry sky that integrates into the ceiling of a Snoezel-Room. Here, users can ’play’ with ’melodic constellations’ using a flashlight. SonicDive is an interactive installation that enables users to explore a complex water soundscape through their movement inside a ball pool. The underlying goal of both ADMIs is the promotion of self-efficacy experiences while stimulating the users’ relaxation and activation. This paper reports on the design process involving the users and their environment. In addition, it describes some details of the technical implementation of the ADMIs as well as first indices for their effectiveness.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"53 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120883545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Sonification of High Energy Physics Data Using Live Coding and Web Based Interfaces
New Interfaces for Musical Expression · Pub Date: 2020-06-01 · DOI: 10.5281/ZENODO.4813430
K. Vasilakos, Scott Wilson, T. McCauley, Tsun Winston Yeung, Emma Margetson, Milad Khosravi Mardakheh
{"title":"Sonification of High Energy Physics Data Using Live Coding and Web Based Interfaces","authors":"K. Vasilakos, Scott Wilson, T. McCauley, Tsun Winston Yeung, Emma Margetson, Milad Khosravi Mardakheh","doi":"10.5281/ZENODO.4813430","DOIUrl":"https://doi.org/10.5281/ZENODO.4813430","url":null,"abstract":"This paper presents a discussion of Dark Matter, a sonification project by the Birmingham Ensemble for Electroacoustic Research (BEER), a laptop group using live coding and just-in-time programming techniques, based at the University of Birmingham (UK). The project uses prerecorded data from proton-proton collisions produced by the Large Hadron Collider (LHC) at CERN, Switzerland, and then detected and reconstructed by the Compact Muon Solenoid (CMS) experiment, and was developed with the support of the art@CMS project. Work for the Dark Matter project included the development of a custom-made environment in the SuperCollider (SC) programming language that lets the performers of the group engage in collective improvisations using dynamic interventions and networked music systems. This paper will also provide information about a spin-off project entitled the Interactive Physics Sonification System (IPSOS), an interactive and standalone online application developed in the JavaScript programming language. It provides a web-based interface that allows users to map particle data to sound on commonly used web browsers, and mobile devices, such as smartphones, tablets etc. The project was developed as an educational outreach tool to engage young students and the general public with prerecorded data derived from LHC collisions.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125657456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
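
IPSOS itself is written in JavaScript; the Python sketch below only translates the mapping idea, with assumed field names and scalings: a muon's transverse momentum selects pitch and its pseudorapidity sets stereo pan.

```python
# Assumed particle-to-sound mapping in the spirit of IPSOS (field names
# and scalings are illustrative guesses, not the project's actual code).
import math

def muon_to_note(pt_gev, eta, pt_max=100.0):
    """Map one reconstructed muon to (MIDI note, pan in [-1, 1])."""
    norm = min(pt_gev, pt_max) / pt_max          # 0..1
    note = int(36 + norm * 48)                   # C2..C6
    pan = math.tanh(eta / 2.4)                   # |eta| ~ 2.4 maps to edges
    return note, round(pan, 2)

event = [(45.2, 0.31), (27.8, -1.9)]             # (pt [GeV], eta) per muon
print([muon_to_note(pt, eta) for pt, eta in event])
```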