New Interfaces for Musical Expression: Latest Publications

A Bassline Generation System Based on Sequence-to-Sequence Learning
New Interfaces for Musical Expression · Pub Date: 2019-06-01 · DOI: 10.5281/zenodo.3672928
B. Haki, S. Jordà
{"title":"A Bassline Generation System Based on Sequence-to-Sequence Learning","authors":"B. Haki, S. Jordà","doi":"10.5281/zenodo.3672928","DOIUrl":"https://doi.org/10.5281/zenodo.3672928","url":null,"abstract":"This thesis presents a detailed explanation of a system generating basslines that are stylistically and rhythmically interlocked with a provided audio drum loop. The proposed system is based on a natural language processing technique: wordbased sequence-to-sequence learning. The word-based sequence-to-sequence learning method proposed in this thesis is comprised of recurrent neural networks composed of LSTM units. The novelty of the proposed method lies in the fact that the system is not reliant on a voice-by-voice transcription of drums; instead, in this method, a drum representation is used as an input sequence from which a translated bassline is obtained at the output. The drum representation consists of fixed size sequences of onsets detected from a 2-bar audio drum loop in eight different frequency bands. The basslines generated by this method consist of pitched notes with different duration. The proposed system was trained on two distinct datasets compiled for this project by the authors. Each dataset contains a variety of 2-bar drum loops with annotated basslines from two different styles of dance music: House and Soca. A listening experiment designed based on the system revealed that the proposed system is capable of generating basslines that are interesting and are well rhythmically interlocked with the drum loops from which they were generated.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125692823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
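The architecture the abstract describes, an encoder-decoder translating drum-onset "words" into bassline "words", can be pictured in a few lines. The following PyTorch sketch is not the authors' code: the vocabulary sizes, layer widths, and token scheme are all assumptions made for illustration.

```python
# Sketch of a word-based seq2seq model: a drum loop is encoded as a
# sequence of onset-pattern "words" (one per 16th-note step, 8 frequency
# bands -> 256 possible words) and decoded into bassline "words"
# (pitch/duration tokens). All sizes are illustrative assumptions.
import torch
import torch.nn as nn

DRUM_VOCAB = 256   # 2^8 onset combinations across 8 bands (assumed)
BASS_VOCAB = 512   # pitch x duration tokens (assumed)

class DrumToBass(nn.Module):
    def __init__(self, emb=64, hidden=128):
        super().__init__()
        self.drum_emb = nn.Embedding(DRUM_VOCAB, emb)
        self.bass_emb = nn.Embedding(BASS_VOCAB, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, BASS_VOCAB)

    def forward(self, drum_seq, bass_seq):
        # Encode the whole drum loop; its final state conditions the decoder.
        _, state = self.encoder(self.drum_emb(drum_seq))
        dec_out, _ = self.decoder(self.bass_emb(bass_seq), state)
        return self.out(dec_out)   # logits over bassline tokens per step

model = DrumToBass()
drums = torch.randint(0, DRUM_VOCAB, (1, 32))   # 2 bars of 16th notes
bass_in = torch.randint(0, BASS_VOCAB, (1, 32))
logits = model(drums, bass_in)                  # (1, 32, BASS_VOCAB)
```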
CD-Synth: a Rotating, Untethered, Digital Synthesizer
New Interfaces for Musical Expression · Pub Date: 2019-06-01 · DOI: 10.5281/zenodo.3672998
Patrick C. Chwalek, J. Paradiso
{"title":"CD-Synth: a Rotating, Untethered, Digital Synthesizer","authors":"Patrick C. Chwalek, J. Paradiso","doi":"10.5281/zenodo.3672998","DOIUrl":"https://doi.org/10.5281/zenodo.3672998","url":null,"abstract":"We describe the design of an untethered digital synthesizer that can be held and manipulated while broadcasting audio data to a receiving off-the-shelf Bluetooth receiver. The synthesizer allows the user to freely rotate and reorient the instrument while exploiting non-contact light sensing for a truly expressive performance. The system consists of a suite of sensors that convert rotation, orientation, touch, and user proximity into various audio filters and effects operated on preset wave tables, while offering a persistence of vision display for input visualization. This paper discusses the design of the system, including the circuit, mechanics, and software layout, as well as how this device may be incorporated into a performance.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128933304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Latin American NIMEs: Electronic Musical Instruments and Experimental Sound Devices in the Twentieth Century
New Interfaces for Musical Expression · Pub Date: 2019-06-01 · DOI: 10.5281/zenodo.3672936
Martin Matus Lerner
{"title":"Latin American NIMEs: Electronic Musical Instruments and Experimental Sound Devices in the Twentieth Century","authors":"Martin Matus Lerner","doi":"10.5281/zenodo.3672936","DOIUrl":"https://doi.org/10.5281/zenodo.3672936","url":null,"abstract":"During the twentieth century several Latin American nations (such as Argentina, Brazil, Chile, Cuba and Mexico) have originated relevant antecedents in the NIME field. Their innovative authors have interrelated musical composition, lutherie, electronics and computing. This paper provides a panoramic view of their original electronic instruments and experimental sound practices, as well as a perspective of them regarding other inventions around the World.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131915239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Enhancing the Expressivity of the Sensel Morph via Audio-rate Sensing
New Interfaces for Musical Expression · Pub Date: 2019-06-01 · DOI: 10.5281/zenodo.3672968
Razvan Paisa, Dan Overholt
{"title":"Enhancing the Expressivity of the Sensel Morph via Audio-rate Sensing","authors":"Razvan Paisa, Dan Overholt","doi":"10.5281/zenodo.3672968","DOIUrl":"https://doi.org/10.5281/zenodo.3672968","url":null,"abstract":"This project describes a novel approach to hybrid electroacoustical instruments by augmenting the Sensel Morph, with real-time audio sensing capabilities. The actual actionsounds are captured with a piezoelectric transducer and processed in Max 8 to extend the sonic range existing in the acoustical domain alone. The control parameters are captured by the Morph and mapped to audio algorithm proprieties like filter cutoff frequency, frequency shift or overdrive. The instrument opens up the possibility for a large selection of different interaction techniques that have a direct impact on the output sound. The instrument is evaluated from a sound designer’s perspective, encouraging exploration in the materials used as well as techniques. The contribution are two-fold. First, the use of a piezo transducer to augment the Sensel Morph affords an extra dimension of control on top of the offerings. Second, the use of acoustic sounds from physical interactions as a source for excitation and manipulation of an audio processing system offers a large variety of new sounds to be discovered. The methodology involved an exploratory process of iterative instrument making, interspersed with observations gathered via improvisatory trials, focusing on the new interactions made possible through the fusion of audio-rate inputs with the Morph’s default interaction methods.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"151 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115555532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
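The mapping described above, where audio captured from a piezo transducer drives parameters such as filter cutoff, typically reduces to an envelope follower on the audio-rate signal. A minimal sketch follows; the block size, smoothing, and cutoff range are assumptions, and the actual system runs in Max 8 rather than Python.

```python
# Envelope follower mapping an audio-rate piezo signal to a filter
# cutoff, analogous to the Max 8 processing described above.
import numpy as np

def rms_envelope(block: np.ndarray) -> float:
    """Root-mean-square level of one audio block."""
    return float(np.sqrt(np.mean(block ** 2)))

def envelope_to_cutoff(env: float, lo=200.0, hi=8000.0) -> float:
    """Map a 0..1 envelope onto a logarithmic cutoff range in Hz."""
    env = min(max(env, 0.0), 1.0)
    return lo * (hi / lo) ** env

# Simulated piezo block: a decaying tap transient.
sr, n = 48000, 512
t = np.arange(n) / sr
block = np.exp(-40 * t) * np.sin(2 * np.pi * 900 * t)
print(envelope_to_cutoff(rms_envelope(block)))
```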
Bespoke Design for Inclusive Music: The Challenges of Evaluation
New Interfaces for Musical Expression · Pub Date: 2019-06-01 · DOI: 10.5281/zenodo.3672884
A. Lucas, Miguel Ortiz, F. Schroeder
{"title":"Bespoke Design for Inclusive Music: The Challenges of Evaluation","authors":"A. Lucas, Miguel Ortiz, F. Schroeder","doi":"10.5281/zenodo.3672884","DOIUrl":"https://doi.org/10.5281/zenodo.3672884","url":null,"abstract":"In this paper, the authors describe the evaluation of a collection of bespoke knob cap designs intended to improve the ease in which a specific musician with dyskinetic cerebral palsy can operate rotary controls in a musical context. The authors highlight the importance of the performer’s perspective when using design as a means for overcoming access barriers to music. Also, while the authors were not able to find an ideal solution for the musician within the confines of this study, several useful observations on the process of evaluating bespoke assistive music technology are described; observations which may prove useful to digital musical instrument designers working within the field of inclusive music.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114904817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Taming and Tickling the Beast - Multi-Touch Keyboard as Interface for a Physically Modelled Interconnected Resonating Super-Harp
New Interfaces for Musical Expression · Pub Date: 2019-06-01 · DOI: 10.5281/zenodo.3672862
Palle Dahlstedt
{"title":"Taming and Tickling the Beast - Multi-Touch Keyboard as Interface for a Physically Modelled Interconnected Resonating Super-Harp","authors":"Palle Dahlstedt","doi":"10.5281/zenodo.3672862","DOIUrl":"https://doi.org/10.5281/zenodo.3672862","url":null,"abstract":"Libration Perturbed is a performance and an improvisation instrument, originally composed and designed for a multispeaker dome. The performer controls a bank of 64 virtual inter-connected resonating strings, with individual and direct control of tuning and resonance characteristics through a multitouch-enhanced klavier interface (TouchKeys). It is a hybrid acoustic-electronic instrument, as all string vibrations originate from physical vibrations in the klavier and its casing, captured through contact microphones. In addition, there are gestural strings, called ropes, excited by performed musical gestures. All strings and ropes are connected, and inter-resonate together as a ”super-harp”, internally and through the performance space. With strong resonance, strings may go into chaotic motion or emergent quasi-periodic patterns, but custom adaptive leveling mechanisms keep loudness under the musician’s control at all times. The hybrid digital/acoustic approach and the enhanced keyboard provide for an expressive and very physical interaction, and a strong multi-channel immersive experience. The paper describes the aesthetic choices behind the design of the system, as well as the technical implementation, and – primarily – the interaction design, as it emerges from mapping, sound design, physical modeling and integration of the acoustic, the gestural, and the virtual. The work is evaluated based on the experiences from a series of performances.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117344879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
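A bank of interconnected resonating strings can be pictured with a simple waveguide model. The sketch below uses Karplus-Strong strings with a small cross-coupling term; this is a generic stand-in for the paper's physical model, and every constant (string count, damping, coupling) is an assumption.

```python
# Toy bank of coupled Karplus-Strong strings: each string is a delay
# line with damped feedback, and a fraction of every string's output
# feeds into the others ("inter-resonance"). All constants assumed.
import numpy as np

SR = 44100
FREQS = [110.0, 146.8, 196.0, 220.0]   # 4 strings instead of 64
COUPLING = 0.002                        # cross-feed between strings

delays = [np.random.uniform(-1, 1, int(SR / f)) for f in FREQS]  # plucked
idx = [0] * len(FREQS)
out = np.zeros(SR)  # one second of audio

for n in range(SR):
    samples = [d[i] for d, i in zip(delays, idx)]
    mix = sum(samples)
    out[n] = mix
    for k, d in enumerate(delays):
        i = idx[k]
        nxt = d[(i + 1) % len(d)]
        # Damped two-point average plus a little energy from the others.
        d[i] = 0.996 * 0.5 * (d[i] + nxt) + COUPLING * (mix - samples[k])
        idx[k] = (i + 1) % len(d)
```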
Eolos: a wireless MIDI wind controller
New Interfaces for Musical Expression · Pub Date: 2019-06-01 · DOI: 10.5281/zenodo.3672972
J. M. Ramos
{"title":"Eolos: a wireless MIDI wind controller","authors":"J. M. Ramos","doi":"10.5281/zenodo.3672972","DOIUrl":"https://doi.org/10.5281/zenodo.3672972","url":null,"abstract":"This paper presents a description of the design and usage of Eolos, a wireless MIDI wind controller. The main goal of Eolos is to provide an interface that facilitates the production of music for any individual, regardless of their playing skills or previous musical knowledge. Its features are: open design, lower cost than commercial alternatives, wireless MIDI operation, rechargeable battery power, graphical user interface, tactile keys, sensitivity to air pressure, left-right reversible design and two FSR sensors. There is also a mention about its participation in the 1st Collaborative Concert over the Internet between Argentina and Cuba \"Tradición y Nuevas Sonoridades\".","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127396367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
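At its core, a wind controller of this kind reduces a breath-pressure reading to MIDI messages. A minimal sketch of that mapping is below; the sensor range, response curve, and use of CC#2 (the standard MIDI breath controller) are assumptions for illustration, not details taken from the paper.

```python
# Map a breath-pressure sensor reading to a MIDI Breath Controller
# (CC#2) message as a generic wind-controller mapping. The sensor
# range and response curve are illustrative assumptions.

def breath_to_cc(pressure: float, p_min=0.02, p_max=1.0,
                 channel=0) -> bytes:
    """Return a 3-byte MIDI control-change message for CC#2."""
    norm = (pressure - p_min) / (p_max - p_min)
    norm = min(max(norm, 0.0), 1.0)
    value = int(round(127 * norm ** 0.6))   # gentle curve near silence
    return bytes([0xB0 | channel, 2, value])

print(breath_to_cc(0.5).hex())  # -> 'b00253'
```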
HMusic: A domain specific language for music programming and live coding
New Interfaces for Musical Expression · Pub Date: 2019-06-01 · DOI: 10.5281/zenodo.3673003
A. R. D. Bois, R. Ribeiro
{"title":"HMusic: A domain specific language for music programming and live coding","authors":"A. R. D. Bois, R. Ribeiro","doi":"10.5281/zenodo.3673003","DOIUrl":"https://doi.org/10.5281/zenodo.3673003","url":null,"abstract":"This paper presents HMusic, a domain specific language based on music patterns that can be used to write music and live coding. The main abstractions provided by the language are patterns and tracks. Code written in HMusic looks like patterns and multi-tracks available in music sequencers, drum machines and DAWs. HMusic provides primitives to design and compose patterns generating new patterns. The basic abstractions provided by the language have an inductive definition and HMusic is embedded in the Haskell functional programming language, hence programmers can design functions to manipulate music on the fly. The current implementation of the language is compiled into Sonic Pi [10] and can be downloaded from [9].","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131954771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
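The pattern/track abstraction with an inductive definition is easy to picture. The sketch below imitates the flavor of such a DSL in Python; the real HMusic is an embedded Haskell language compiled to Sonic Pi, so every name here is invented for illustration.

```python
# A toy step-pattern combinator in the spirit of HMusic's patterns and
# tracks. A pattern is inductively either a rest ('.'), a hit ('x'),
# or a composition of sub-patterns, mirroring an algebraic data type.
Rest, Hit = '.', 'x'

def seq(*patterns):
    """Compose patterns horizontally into one longer pattern."""
    return [step for p in patterns for step in p]

def repeat(pattern, times):
    return seq(*([pattern] * times))

kick  = repeat([Hit, Rest, Rest, Rest], 4)   # four-on-the-floor
snare = repeat([Rest, Rest, Hit, Rest], 4)

# A track pairs an instrument with a pattern; a multi-track is a list.
track = [("kick", kick), ("snare", snare)]
for name, pat in track:
    print(f"{name:>5}: {''.join(pat)}")
```

Because patterns are ordinary values, new combinators (reverse, rotate, merge) are plain functions over them, which is the property that makes the Haskell embedding convenient for live coding.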
Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces
New Interfaces for Musical Expression · Pub Date: 2019-06-01 · DOI: 10.5281/zenodo.3672976
Gerard Roma, Owen Green, P. Tremblay
{"title":"Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces","authors":"Gerard Roma, Owen Green, P. Tremblay","doi":"10.5281/zenodo.3672976","DOIUrl":"https://doi.org/10.5281/zenodo.3672976","url":null,"abstract":"Descriptor spaces have become an ubiquitous interaction paradigm for music based on collections of audio samples. However, most systems rely on a small predefined set of descriptors, which the user is often required to understand and choose from. There is no guarantee that the chosen descriptors are relevant for a given collection. In addition, this method does not scale to longer samples that require higher-dimensional descriptions, which biases systems towards the use of short samples. In this paper we propose novel framework for automatic creation of interactive sound spaces from sound collections using feature learning and dimensionality reduction. The framework is implemented as a software library using the SuperCollider language. We compare several algorithms and describe some example interfaces for interacting with the resulting spaces. Our experiments signal the potential of unsupervised algorithms for creating data-driven musical interfaces.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128986108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
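The pipeline the abstract describes, feature extraction followed by dimensionality reduction to lay samples out in a navigable 2-D space, can be approximated in a few lines. The sketch below uses hand-rolled spectral features and PCA; the paper's framework is implemented in SuperCollider and compares several algorithms, so these particular choices are assumptions.

```python
# Project a small sound collection into a 2-D interaction space:
# per-sample spectral features -> standardisation -> PCA. The feature
# and reduction choices stand in for those compared in the paper.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def spectral_features(y: np.ndarray) -> np.ndarray:
    """Cheap descriptor: log-magnitude spectrum pooled into 20 bands."""
    mag = np.abs(np.fft.rfft(y))
    bands = np.array_split(mag, 20)
    return np.log1p(np.array([b.mean() for b in bands]))

# Synthesize a toy "collection": tones with varying pitch and noisiness.
rng = np.random.default_rng(0)
sr, dur = 22050, 0.5
t = np.arange(int(sr * dur)) / sr
sounds = [np.sin(2 * np.pi * f * t) + a * rng.standard_normal(t.size)
          for f in (110, 220, 440, 880) for a in (0.0, 0.3, 0.8)]

X = np.stack([spectral_features(s) for s in sounds])
xy = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(xy.round(2))   # 2-D coordinates for laying samples on a surface
```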
An Interactive Musical Prediction System with Mixture Density Recurrent Neural Networks
New Interfaces for Musical Expression · Pub Date: 2019-04-01 · DOI: 10.5281/zenodo.3672952
Charles Patrick Martin, J. Tørresen
{"title":"An Interactive Musical Prediction System with Mixture Density Recurrent Neural Networks","authors":"Charles Patrick Martin, J. Tørresen","doi":"10.5281/zenodo.3672952","DOIUrl":"https://doi.org/10.5281/zenodo.3672952","url":null,"abstract":"This paper is about creating digital musical instruments where a predictive neural network model is integrated into the interactive system. Rather than predicting symbolic music (e.g., MIDI notes), we suggest that predicting future control data from the user and precise temporal information can lead to new and interesting interactive possibilities. We propose that a mixture density recurrent neural network (MDRNN) is an appropriate model for this task. The predictions can be used to fill-in control data when the user stops performing, or as a kind of filter on the user's own input. We present an interactive MDRNN prediction server that allows rapid prototyping of new NIMEs featuring predictive musical interaction by recording datasets, training MDRNN models, and experimenting with interaction modes. We illustrate our system with several example NIMEs applying this idea. Our evaluation shows that real-time predictive interaction is viable even on single-board computers and that small models are appropriate for small datasets.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128143377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
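The defining trait of a mixture density network is that the model outputs the parameters of a Gaussian mixture over the next control values rather than the values themselves; generation then samples from that mixture. A numpy sketch of the sampling step is below; the component count and the (x, y, dt) event dimensionality are assumptions in the spirit of the paper's control-data setting.

```python
# Sampling the next (x, y, dt) control event from the parameters a
# mixture density RNN would emit: mixture weights, means, and scales.
import numpy as np

def sample_mdn(pi_logits, mu, sigma, rng):
    """pi_logits: (K,), mu/sigma: (K, D) -> one D-dim sample."""
    pi = np.exp(pi_logits - pi_logits.max())
    pi /= pi.sum()                       # softmax over mixture weights
    k = rng.choice(len(pi), p=pi)        # pick a component
    return rng.normal(mu[k], sigma[k])   # draw from that Gaussian

rng = np.random.default_rng(1)
K, D = 5, 3                              # 5 components over (x, y, dt)
pi_logits = rng.standard_normal(K)
mu = rng.standard_normal((K, D))
sigma = np.exp(rng.standard_normal((K, D)) * 0.1)
print(sample_mdn(pi_logits, mu, sigma, rng))
```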