Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences: Latest Publications

FEATUR.UX.AV: A Live Sound Visualization System Using Multitrack Audio
I. Olowe, M. Barthet, M. Grierson
DOI: 10.1145/3123514.3123561 (published 2017-08-23)
Abstract: In this paper, we describe the conceptual design and technical implementation of an audiovisual system whereby multitrack audio is used to generate visualizations in real time. We discuss our motivation within the context of audiovisual practice and present the outcomes of studies conducted to outline design requirements. We then describe the audio and visual components of our multitrack visualization model, and specific parts of the graphical user interface (GUI) which focus on mapping as the primary mechanism to facilitate live multitrack audiovisual performance.
Citations: 1
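The mapping idea at the heart of this system, per-track audio features driving visual parameters in real time, can be sketched in a few lines. The specific features (RMS level, zero-crossing rate) and visual targets (size, hue) below are illustrative assumptions, not the mappings the paper actually uses:

```python
def rms(frame):
    """Root-mean-square level of one audio frame (a list of samples)."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

def map_features_to_visuals(tracks):
    """Map per-track audio features to visual parameters, one shape per track.

    Illustrative mapping only: RMS level -> size, zero-crossing rate -> hue.
    """
    shapes = []
    for frame in tracks:
        # Fraction of adjacent sample pairs that change sign.
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / len(frame)
        shapes.append({"size": rms(frame), "hue": zcr})
    return shapes
```

In a live setting, a loop of this kind would run once per audio buffer, with the performer editing the feature-to-parameter assignments through the GUI.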
Composing The Good Ship Hibernia and the Hole in the Bottom of the World
S. Roddy
DOI: 10.1145/3123514.3123520 (published 2017-08-23)
Abstract: This paper explores topics in embodied cognition, soundscape composition and sonification. It explains the compositional decisions and technical considerations that went into the composition of the piece The Good Ship Hibernia, an example of embodied soundscape sonification. This explanation is undertaken within the context of an approach to both sonification design and music composition that accounts for and exploits the embodied aspects of meaning-making in auditory cognition, as described in the embodied cognitive science literature.
Citations: 1
Designing Computationally Creative Musical Performance Systems
C. Goddard, M. Barthet, Geraint A. Wiggins
DOI: 10.1145/3123514.3123541 (published 2017-08-23)
Abstract: This is work in progress in which we outline a design process for a computationally creative musical performance system using the Creative Systems Framework (CSF). The proposed system is intended to produce virtuosic interpretations, and subsequent synthesized renderings of those interpretations with a physical model of a bass guitar, using case-based reasoning and reflection. We introduce our interpretations of virtuosity and musical performance, outline the suitability of case-based reasoning for computationally creative systems, and introduce notions of computational creativity and the CSF. We design our system by formalising the components of the CSF and briefly outline a potential implementation. In doing so, we demonstrate how the CSF can be used as a tool to aid in designing computationally creative musical performance systems.
Citations: 1
Walking Phrases: Modeling the Walker's Context for Sonic Interaction Design
Nassrin Hajinejad, Licinio Gomes Roque, Barbara Grüter
DOI: 10.1145/3123514.3123544 (published 2017-08-23)
Abstract: Designing meaningful sonic interaction for the mobile context requires accommodating the user's unfolding context. We explore the design of sonic interaction for the walking activity. In this paper, we discuss how the walker's movements can provide insight into individual contextual conditions. Our contribution is a set of semantic walking sequences (Walking Phrases) that allow the walking process to be segmented into units that are informative for adapting interactive sound to the walker's context.
Citations: 2
Perception of Paralinguistic Traits in Synthesized Voices
Alice Baird, Stina Marie Hasse Jørgensen, Emilia Parada-Cabaleiro, Simone Hantke, N. Cummins, Björn Schuller
DOI: 10.1145/3123514.3123528 (published 2017-08-23)
Abstract: Along with the rise of artificial intelligence and the Internet of Things, synthesized voices are now common in daily life, providing us with guidance, assistance, and even companionship. From formant to concatenative synthesis, the synthesized voice continues to be defined by the same traits we prescribe to ourselves. When the recorded voice is synthesized, does our perception of its new machine embodiment change, and can we consider an alternative, more inclusive form? To begin evaluating the impact of aesthetic design, this study presents a first-step perception test exploring the paralinguistic traits of the synthesized voice. Using a corpus of 13 synthesized voices, constructed from acoustic concatenative speech synthesis, we assessed the responses of 23 listeners from differing cultural backgrounds. To evaluate whether perception shifts from the defined traits, we asked listeners to assign traits of age, gender, accent origin, and human-likeness. Results show a difference in perception for age and human-likeness across voices, and general agreement across listeners for both gender and accent origin. Connections found between age, gender and human-likeness call for further exploration into a more participatory and inclusive synthesized vocal identity.
Citations: 15
Media Device Orchestration for Immersive Spatial Audio Reproduction
J. Francombe, R. Mason, P. Jackson, Tim S. Brookes, R. Hughes, James Woodcock, Andreas Franck, F. Melchior, C. Pike
DOI: 10.1145/3123514.3123563 (published 2017-08-23)
Abstract: Whilst it is possible to create exciting, immersive listening experiences with current spatial audio technology, the required systems are generally difficult to install in a standard living room. However, any living room is likely to already contain a range of loudspeakers (in mobile phones, tablets, laptops, and so on). "Media device orchestration" (MDO) is the concept of utilising all available devices to augment the reproduction of a media experience. In this demonstration, MDO is used to augment low-channel-count renderings of various programme material, delivering immersive three-dimensional audio experiences.
Citations: 4
Handwaving: Gesture Recognition for Participatory Mobile Music
Gerard Roma, Anna Xambó, Jason Freeman
DOI: 10.1145/3123514.3123538 (published 2017-08-23)
Abstract: This paper describes handwaving, a system for participatory mobile music based on accelerometer gesture recognition. The core of the system is a library that can be used to recognize arbitrary gestures and map them to sound synthesizers. Such gestures can be learnt quickly by mobile phone users in order to produce sounds in a musical context. The system is implemented using web standards, so it can be used on most current smartphones without installing specific software.
Citations: 7
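Accelerometer gesture recognition of the kind this entry describes can be illustrated with a minimal template matcher; the paper does not specify its recognition algorithm, so the feature set (per-axis mean and standard deviation) and nearest-neighbour matching below are assumptions for the sketch, and the "still"/"shake" training windows are made up:

```python
import math

def features(samples):
    """Summarize a window of (x, y, z) accelerometer samples.

    A crude feature vector: per-axis mean and standard deviation.
    """
    feats = []
    for axis in range(3):
        vals = [s[axis] for s in samples]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        feats += [mean, math.sqrt(var)]
    return feats

def classify(window, templates):
    """Nearest-neighbour match of a gesture window against labelled templates."""
    f = features(window)
    def dist(template):
        return sum((a - b) ** 2 for a, b in zip(f, features(template[1])))
    return min(templates, key=dist)[0]

# Hypothetical training examples: a phone held still vs. a shaken phone.
still = [(0.0, 0.0, 9.8)] * 20
shake = [(6.0 * math.sin(i), 0.0, 9.8) for i in range(20)]
templates = [("still", still), ("shake", shake)]
```

On a phone, the windows would come from the browser's motion events, and each recognized label would trigger a synthesizer; here the labels are just returned.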
The Hyper-Mandolin
L. Turchet
DOI: 10.1145/3123514.3123539 (published 2017-08-23)
Abstract: This paper presents the Hyper-Mandolin, a conventional acoustic mandolin augmented with different types of sensors, a microphone, and real-time control of digital effects and sound generators during the performer's act of playing. The added technology is conveniently located and does not hinder the acoustic use of the instrument. A modular architecture connects the various sensor interfaces to a central computing unit dedicated to the analog-to-digital conversion of the sensor data; this architecture allows sensor interface layouts to be interchanged easily. The processing of audio and sensor data is accomplished by applications coded in Max/MSP and running on an external computer. The instrument can also be used as a controller for digital audio workstations. The interactive control of the sonic output is based on the extraction of features from both the data captured by the sensors and the acoustic waveforms captured by the microphone. The development of this instrument was mainly motivated by the author's need to extend the sonic and interaction possibilities of the acoustic mandolin when used in conjunction with conventional electronics for sound processing.
Citations: 7
An Archival Echo: Recalling the public domain through real-time query by vocalisation
Ben White, Adib Mehrabi, M. Sandler
DOI: 10.1145/3123514.3123546 (published 2017-08-23)
Abstract: In this paper we present a novel system for performative interaction with an archive of public domain music recordings. The system uses real-time query by vocalisation to retrieve sounds extracted from chart hit singles of the 1960s. This enables the user, or performer, to generate a cascade of archival echoes from vocalisations. The system was developed for a series of music workshops held as part of two art projects centered around the reuse and repurposing of archive recordings. As such, the design decisions were shaped by the conceptual framework of the artists and the intended audience. Here we outline the context and background of the art projects, describe the query by vocalisation system, and discuss the workshops, in which the artists invited amateur musicians to use the system to develop a public performance.
Citations: 1
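Query by vocalisation is commonly implemented by comparing a feature sequence extracted from the voice (e.g. a pitch contour) against feature sequences for each archive sound, using an alignment measure such as dynamic time warping (DTW). This is a generic sketch of that idea, not necessarily the matching method this system uses, and the pitch contours below are invented:

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D feature sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three possible alignment paths.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def query(vocal, archive):
    """Return the name of the archive item whose contour best matches the query."""
    return min(archive, key=lambda item: dtw(vocal, item[1]))[0]

# Hypothetical pitch contours (MIDI note numbers) for two archive snippets.
archive = [("snippet_a", [60, 62, 64, 62, 60]), ("snippet_b", [60, 60, 60, 60])]
```

DTW tolerates the timing differences that are inevitable when a performer hums a query, which is why it is a standard baseline for this retrieval task.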
The Geometric Oscillator: Sound Synthesis with Cyclic Shapes
Joshua Peschke, Axel Berndt
DOI: 10.1145/3123514.3123522 (published 2017-08-23)
Abstract: From perfect circular motion derives the sine wave. Deforming the circle, or replacing it with a different cyclic shape, produces a different waveform. This is the conceptual basis of the geometric oscillator. Interaction with the shapes, as in a graphics editor, becomes interaction with the timbres that derive from them. In this paper, we elaborate on this synthesis method, introduce a further derivation step that comes with some handy advantages, and detail a corresponding user interface approach. A prototype implementation, called Cyclone, is described. Based on feedback gained from demos and our own experiences from experiments, we outline the next iteration of Cyclone's development.
Citations: 5
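The circle-to-sine relationship the abstract starts from can be made concrete in a few lines: traverse a closed curve at the desired frequency and read off one coordinate as the output sample. The `squashed_circle` deformation is an invented example of a non-circular cyclic shape, not one of the paper's shapes or its derivation step:

```python
import math

def geometric_oscillator(shape, freq, sr=44100, dur=0.01):
    """Sample the y-coordinate of a cyclic shape traversed at `freq` Hz.

    `shape` maps a phase angle in [0, 2*pi) to an (x, y) point on a
    closed curve; the y-coordinate becomes the output waveform.
    """
    n = int(sr * dur)
    return [shape(2 * math.pi * freq * i / sr % (2 * math.pi))[1]
            for i in range(n)]

def circle(theta):
    # A unit circle: its y-coordinate traces a pure sine wave.
    return (math.cos(theta), math.sin(theta))

def squashed_circle(theta):
    # Deforming the circle (here, modulating its radius) changes the timbre.
    r = min(1.0, 1.3 * abs(math.cos(theta)) + 0.4)
    return (r * math.cos(theta), r * math.sin(theta))

sine = geometric_oscillator(circle, freq=440)
deformed = geometric_oscillator(squashed_circle, freq=440)
```

Editing the shape while the oscillator runs, as in a graphics editor, is then a direct handle on the timbre.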