Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences — Latest Publications

Potential of Application of Psychoacoustics to User/Product Interaction Design
R. Sanz-Segura, Eduardo Manchado-Pérez
DOI: https://doi.org/10.1145/3123514.3123554
Published: 2017-08-23
Abstract: Product design helps people access technological devices through the development of a product interface. It can also influence people's interaction with others and with the environment, and affect the emotional response of product users, as the activity evolves into a new approach that can be defined as product semantics design. The product itself is thus a communicative system, and understanding it depends on the congruence between the sensorial properties perceived by the user and the concept of the product. Given that sound is one of the essential stimuli perceived by users, it is also one of the vital parameters to be configured by the designer, and knowledge of the communicative capacities of sounds and their properties must be incorporated into the product design process. This paper reviews some of the more relevant studies in the field of sound from the perspective of their potential application to product interaction design, and critically analyses their main contributions to date and their potential application, in order to contribute to proposals for future lines of work.
Citations: 0
2K-Reality: An Acoustic Sports Entertainment Augmentation for Pickup Basketball Play Spaces
T. Ryan, J. Duckworth
DOI: https://doi.org/10.1145/3123514.3123529
Published: 2017-08-23
Abstract: In this paper we describe 2K-Reality, an acoustic sports entertainment augmentation designed to enhance the enjoyment of playing and watching the cultural practice of pickup basketball. 2K-Reality is an interactive digital artefact for pickup basketball play spaces that recontextualises sounds appropriated from a National Basketball Association (NBA) videogame to create interactive sonic experiences for players and spectators. We discuss how the design blends NBA videogames and real basketball play spaces using broadcast-style commentary, stadium-style crowd sound effects and contemporary music break beats, activated by spectators interacting with a touchscreen interface connected to a public address (PA) system. Using an ethnographic approach, we analyse the ways spectators orchestrate the different sounds, and the effects 2K-Reality soundscapes had on social interactions and on the experiences of playing and watching pickup basketball. We conclude from our study that 2K-Reality demonstrates a compliant sports augmentation, a term we use to describe a digital enhancement of playing and watching grassroots sports without modifying existing spatial, temporal and cultural practices or the standards codified by a sport's governing body.
Citations: 4
Open Band: A Platform for Collective Sound Dialogues
Ariane Stolfi, M. Barthet, Fábio Goródscy, Antonio Deusany de Carvalho Junior
DOI: https://doi.org/10.1145/3123514.3123526
Published: 2017-08-23
Abstract: Open Band is a web-based platform for collective "sound dialogues" designed to provide audiences with empowering experiences through music. The system draws on interactive participatory art and networked music performance by engaging participants in a sonic web "agora" in collocated and/or remote gatherings, regardless of musical level. In this paper, we present our artistic intent grounded in Eco's concept of Open Works and the initial design of a web-based open environment that supports social musical interactions. Interaction operates by means of a multi-user live chat system that renders textual messages into sounds. Feedback gathered across several public participatory performances was overall positive, and we identified further design challenges around personalization, crowd dynamics and rhythmic aspects.
Citations: 4
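The abstract describes a chat system that renders textual messages into sounds. A minimal sketch of one possible text-to-pitch mapping follows; the scale choice and character hashing are hypothetical illustrations, not Open Band's published scheme:

```python
PENTATONic = None  # placeholder removed below
PENTATONIC = [60, 62, 64, 67, 69]  # C major pentatonic, as MIDI note numbers

def message_to_notes(message: str) -> list[int]:
    """Map each alphanumeric character of a chat message to a MIDI pitch.

    Characters land on a pentatonic scale so that any message sounds
    consonant; non-alphanumeric characters are silently skipped. This
    mapping is hypothetical: the paper does not publish Open Band's scheme.
    """
    notes = []
    for ch in message.lower():
        if ch.isalnum():
            code = ord(ch)
            pitch = PENTATONIC[code % len(PENTATONIC)]
            octave_shift = 12 * ((code // len(PENTATONIC)) % 2)
            notes.append(pitch + octave_shift)
    return notes

print(message_to_notes("hi"))  # two pitches, one per letter
```

Restricting output to a single consonant scale is a common design choice in participatory systems: it guarantees that simultaneous contributions from musically untrained participants still blend.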
EVERTims: Open source framework for real-time auralization in VR
David Poirier-Quinot, M. Noisternig, B. Katz
DOI: https://doi.org/10.1145/3123514.3123559
Published: 2017-08-23
Abstract: Our demonstration presents recent developments of the EVERTims project, an auralization framework for virtual acoustics and real-time room acoustic simulation. The developments presented here concern the complete re-design of the scene graph editor unit, and the C++ implementation of a new spatial renderer based on the JUCE framework. EVERTims now functions as a Blender add-on to support real-time auralization of any 3D room model, both for its creation in Blender and its exploration in the Blender Game Engine. The EVERTims framework is published as open source software.
Citations: 9
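A core step in room acoustic simulation of this kind is computing early reflections from image sources. The toy sketch below handles only an axis-aligned shoebox room as an illustrative assumption; EVERTims itself operates on arbitrary 3D room models:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, air at roughly 20 degrees C

def first_order_reflections(room, src, lis):
    """Delays and path lengths of the six first-order reflections in an
    axis-aligned shoebox room with one corner at the origin.

    Each wall reflection is modelled by mirroring the source across that
    wall (the image-source method) and measuring the straight-line path
    from the mirrored image to the listener.
    """
    reflections = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            image = list(src)
            image[axis] = 2.0 * wall - src[axis]  # mirror source across wall
            dist = math.dist(image, lis)
            reflections.append((dist / SPEED_OF_SOUND, dist))
    return reflections
```

Feeding each delay and attenuation into a tapped delay line, followed by a late-reverberation tail, gives the basic auralization pipeline such frameworks render in real time.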
IMMA-Emo: A Multimodal Interface for Visualising Score- and Audio-synchronised Emotion Annotations
Dorien Herremans, Simin Yang, C. Chuan, M. Barthet, E. Chew
DOI: https://doi.org/10.1145/3123514.3123545
Published: 2017-08-23
Abstract: Emotional response to music is often represented on a two-dimensional arousal-valence space without reference to score information that may provide critical cues to explain the observed data. To bridge this gap, we present IMMA-Emo, an integrated software system for visualising emotion data aligned with music audio and score, so as to provide an intuitive way to interactively visualise and analyse music emotion data. The visual interface also allows for the comparison of multiple emotion time series. The IMMA-Emo system builds on the online interactive Multi-modal Music Analysis (IMMA) system. Two examples demonstrating the capabilities of the IMMA-Emo system are drawn from an experiment set up to collect arousal-valence ratings based on participants' perceived emotions during a live performance. Direct observation of corresponding score parts and aural input from the recording allow explanatory factors to be identified for the ratings and changes in the ratings.
Citations: 6
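Comparing multiple emotion time series, as the interface supports, requires resampling each annotator's ratings onto a common clock. A minimal sketch of that alignment step, assuming piecewise-linear ratings (the abstract does not describe IMMA-Emo's internals at this level):

```python
def rating_at(times, values, t):
    """Linearly interpolate a time-stamped arousal or valence series at time t.

    `times` is a sorted list of annotation timestamps (seconds into the
    audio); `values` holds the corresponding ratings. Before the first or
    after the last annotation, the nearest rating is held constant. This is
    a hypothetical helper, not IMMA-Emo's actual code.
    """
    if t <= times[0]:
        return values[0]
    if t >= times[-1]:
        return values[-1]
    for (t0, v0), (t1, v1) in zip(zip(times, values),
                                  zip(times[1:], values[1:])):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
```

Evaluating every annotator's series at the same grid of timestamps (for example, once per beat of the aligned score) then gives directly comparable curves.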
Creating Space for Facilitated Music Performance: Gesture Controlled Sound for Users with Complex Disabilities
A. Dickens, C. Greenhalgh, B. Koleva
DOI: https://doi.org/10.1145/3123514.3123518
Published: 2017-08-23
Abstract: Musical interactions have the potential to increase emotional well-being, self-confidence and self-motivation. However, the ability to actively participate in creative activities involving music performance has so far been difficult for users with complex disabilities. This paper discusses placing a technology probe, using gesture based musical controls, in an existing music technology project for users with complex disabilities (conditions which affect both the cognitive and motor abilities of an individual). The focus is on understanding the needs of this user group in a participatory design approach for creative music technologies that allow for tailored accessibility. Outcomes from this research show that many multi-level social interactions surrounding the technology, users, audience, and any third-party facilitators exist in the context of 'facilitated performance'. Results suggest that including facilitators in the design of Digital Musical Instruments (DMIs) could allow for improved accessibility for users with complex disabilities.
Citations: 2
Embedded Multichannel Linux Audiosystem for Musical Applications
Henrik Langer, R. Manzke
DOI: https://doi.org/10.1145/3123514.3123523
Published: 2017-08-23
Abstract: Due to quickly growing performance and decreasing cost, embedded systems have become suitable for new application areas in recent years. Digital signal processing in the audio domain in particular requires high computing performance to complete complex calculations in a fixed amount of time (i.e. real-time processing). In this paper, a novel multichannel, low-latency Linux-based audio system is introduced. The driver architecture is described and an evaluation of the system is presented. The development of the driver architecture includes ALSA device drivers that use the ASoC layer, sound server settings, device tree overlays and capes, register maps, and real-time patches to the kernel. The overall system has been evaluated, regarding technical sound quality and latency, to gauge its usefulness as a powerful new platform for audio development projects, such as embedded digital effect processors for musicians.
Citations: 4
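The latency such a system is evaluated on is dominated by the driver's period/buffer arithmetic. The sketch below shows the nominal calculation only; the parameter values are illustrative, not the paper's measurements, and real round-trip latency adds converter and driver overhead:

```python
def io_latency_ms(period_frames, periods, sample_rate):
    """Nominal one-way buffer latency of a period-based audio driver:
    the hardware ring buffer holds `periods * period_frames` frames, and
    a sample written now emerges only after the buffer drains."""
    return 1000.0 * period_frames * periods / sample_rate

# Illustrative low-latency configuration: 64-frame periods, 2 periods
# in the ring buffer, 48 kHz sample rate.
one_way = io_latency_ms(64, 2, 48000)  # about 2.67 ms
round_trip = 2 * one_way               # capture plus playback, about 5.33 ms
```

This is why low-latency audio work pushes for small periods and few of them: halving the period size halves the buffering delay, at the cost of twice as many interrupts and a higher risk of underruns, which is where the kernel real-time patches mentioned in the abstract come in.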
Datascaping: Data Sonification as a Narrative Device in Soundscape Composition
Jon Pigrem, M. Barthet
DOI: https://doi.org/10.1145/3123514.3123537
Published: 2017-08-23
Abstract: Soundscape composition is an art form that has grown from acoustic ecology and soundscape studies. Current practices foster a wide range of approaches, from the educational and documentary function of the World Soundscape Project (WSP) to the creation of imaginary sonic worlds supported by theories of acousmatic and electroacoustic music. Sonification is the process of rendering audio in response to data, and is often used in scenarios where visual representations of data are impractical. The field of auditory display has grown in isolation from soundscape composition, yet fosters conceptual similarities in its representation of information in sonic form. This paper investigates the use of data sonification as a narrative tool in soundscape composition. A soundscape has been created using traditional concrete sounds (fixed-media recorded sound objects), augmented with sonified real-time elements. An online survey and listening experiment was conducted, which asked participants to rate the soundscape on its ability to communicate specific detail with regard to the environmental and social elements contained within. The data collected show that participants were well able to decode and comprehend the additional layers of narrative information communicated through the soundscape.
Citations: 4
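The sonified real-time elements described above rest on parameter mapping: scaling an incoming data stream onto a perceptual audio parameter. A minimal sketch of that mapping, with the frequency range and linear scaling chosen here for illustration rather than taken from the paper:

```python
def map_range(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale x from [in_lo, in_hi] to [out_lo, out_hi],
    clamping out-of-range inputs to the output bounds."""
    if in_hi == in_lo:
        return out_lo
    t = (x - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

def sonify(samples, lo, hi):
    """Map a real-time data stream onto oscillator frequencies in Hz.

    The 220-880 Hz target range (two octaves above A3) is a hypothetical
    choice; the paper does not publish its exact mapping."""
    return [map_range(s, lo, hi, 220.0, 880.0) for s in samples]
```

In practice such mapped frequencies drive oscillators or resample recorded sound objects, which is how a data stream can be woven into an otherwise concrete soundscape as a narrative layer.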
Biosignal Augmented Embodied Performance
Rikard Lindell, Tomas Kumlin
DOI: https://doi.org/10.1145/3123514.3123547
Published: 2017-08-23
Abstract: We explore the phenomenology of embodiment based on research through design and reflection on the design of artefacts for augmenting embodied performance. We present three designs for musicians and a dancer; the designs rely on the artists' mastery acquired from years of practice. Through the knowledge of the living body, their instruments (cello, flute and dance) are extensions of themselves; thus, we can explore technology with rich nuances and precision in corporeal schemas. With the help of Merleau-Ponty's phenomenology of embodiment, we present two perspectives for augmented embodied performance: the interactively enacted teacher, and the humanisation of technology.
Citations: 2
HaptEQ: A Collaborative Tool For Visually Impaired Audio Producers
A. Karp, Bryan Pardo
DOI: https://doi.org/10.1145/3123514.3123531
Published: 2017-08-23
Abstract: Audio production includes processing audio tracks to adjust sound levels with tools like compressors and modifying the sound with reverberation and equalization. In this paper, we focus on audio equalizers. We seek to make a tactile interface that lets blind or visually impaired users create an equalization curve in an intuitive manner. This interface should also promote collaboration between blind and sighted users. Our primary goals were to make something easy to install and intuitively understandable for both sighted and blind users. The result of this research is the HaptEQ system.
Citations: 13
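Turning a tactile gesture into an equalization curve means interpolating a full gain curve from a handful of control points. The sketch below uses linear interpolation in log-frequency, which matches how EQ curves are usually drawn; HaptEQ's actual curve model is an assumption here:

```python
import math

def eq_gain_db(points, freq):
    """Gain in dB at `freq`, interpolated linearly in log-frequency from
    sparse control points [(hz, gain_db), ...], with the end values held
    flat outside the covered range.

    A generic way to expand a few tactile control points into a complete
    equalization curve; not taken from the HaptEQ paper itself.
    """
    points = sorted(points)
    if freq <= points[0][0]:
        return points[0][1]
    if freq >= points[-1][0]:
        return points[-1][1]
    for (f0, g0), (f1, g1) in zip(points, points[1:]):
        if f0 <= freq <= f1:
            t = math.log(freq / f0) / math.log(f1 / f0)
            return g0 + t * (g1 - g0)
```

Interpolating in log-frequency rather than in raw Hz keeps the curve perceptually even: the octave from 100 to 200 Hz gets the same visual and tactile width as the octave from 1 to 2 kHz.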