Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion: Latest Publications

My Sound Space: An attentional shield for immersive redirection
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion. Pub Date: 2018-09-12. DOI: 10.1145/3243274.3243309
Martin Ljungdahl Eriksson, Lena Pareto, Ricardo Atienza, K. Hansen
Abstract: In the context of extended reality, the term immersion is commonly used as a property denoting the extent to which a technology can deliver an illusion of reality while occluding the user's sensory access to the physical environment. In this paper we discuss an alternative interpretation of immersion, used in the My Sound Space project. The project is a research endeavor aiming to develop a sound environment system that enables a personalized sound space suitable for individual workplaces. The medium, which in our case is sound, is transparent and thus becomes an entangled part of the surrounding environment. This type of immersion only partly occludes the user's sensory access to physical reality. The purpose of using the sound space is not to become immersed in the sounds, but rather to use the sounds to direct cognitive attention so as to become immersed in another cognitive activity.
Citations: 1
Some reflections on the relation between augmented and smart musical instruments
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion. Pub Date: 2018-09-12. DOI: 10.1145/3243274.3243281
L. Turchet
Abstract: Augmented musical instruments (AMIs) are conventional instruments augmented by means of sensor or actuator technologies. Smart musical instruments (SMIs) are instruments embedding not only sensor and actuator technology, but also wireless connectivity, onboard processing, and possibly systems delivering electronically produced sounds, haptic stimuli, and visuals. This paper attempts to disambiguate the concept of SMIs from that of AMIs on the basis of existing instances of the two families. We counterpose the features of these two families of musical instruments, the processes used to build them (i.e., augmentation and smartification), and the practices each supports. From the analysis it emerges that SMIs are not a subcategory of AMIs; rather, the two families share some features. It is suggested that smartification is a process that encompasses augmentation, and that the artistic and pedagogical practices supported by SMIs may extend those offered by AMIs. These comparisons suggest that SMIs have the potential to bring more benefits to musicians and composers than AMIs, but also that they may be much more difficult to create in terms of the resources and competences involved.

Shedding light on these differences is useful for avoiding confusion between the two families and their respective terms, as well as for organological classification.
Citations: 8
Real-time Pattern-Based Melodic Query for a Music Continuation System
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion. Pub Date: 2018-09-01. DOI: 10.1145/3243274.3243283
Sanjay Majumder, Benjamin D. Smith
Abstract: This paper presents a music continuation system that uses pattern matching to find patterns within a library of MIDI files, employing a real-time algorithm to build a system usable as an interactive DJ system. The paper also examines the influence of different kinds of pattern matching on MIDI file analysis. Many pattern-matching algorithms have been developed for text analysis, voice recognition, and bioinformatics, but because the domain knowledge and the nature of the problems differ, these algorithms are not ideally suited for real-time MIDI processing in an interactive music continuation system. By capturing patterns in real time via a MIDI keyboard, the system searches for those patterns within a corpus of MIDI files and continues playing from the user's musical input. Four types of pattern matching are combined in a single system: exact pattern matching, reverse pattern matching, pattern matching with mismatches, and combinatorial pattern matching.

After computing the results of the four types of pattern matching for each MIDI file, the system compares the results and locates the MIDI file in the library with the highest matching probability.
Citations: 0
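The abstract above names four matching strategies scored across a MIDI library. As a rough illustration only (the paper's actual algorithm and data structures are not given here; function names, the mismatch budget, and the pitch-list representation are all assumptions), a minimal Python sketch of exact, reverse, and mismatch-tolerant matching over note sequences might look like this:

```python
# Hypothetical sketch: score MIDI files (represented as lists of MIDI pitch
# numbers) against a query pattern using three of the matching strategies
# the abstract mentions, then pick the best-scoring file.

def count_matches(pattern, sequence, max_mismatch=0):
    """Count windows of `sequence` that match `pattern`,
    allowing up to `max_mismatch` differing notes per window."""
    hits = 0
    for i in range(len(sequence) - len(pattern) + 1):
        window = sequence[i:i + len(pattern)]
        mismatches = sum(1 for a, b in zip(pattern, window) if a != b)
        if mismatches <= max_mismatch:
            hits += 1
    return hits

def score_file(pattern, sequence):
    """Combine exact, reverse, and mismatch-tolerant match counts."""
    exact = count_matches(pattern, sequence)
    reverse = count_matches(list(reversed(pattern)), sequence)
    fuzzy = count_matches(pattern, sequence, max_mismatch=1)
    return exact + reverse + fuzzy

def best_continuation(pattern, library):
    """Return the name of the library entry with the highest score."""
    return max(library, key=lambda name: score_file(pattern, library[name]))
```

A real system would additionally need real-time MIDI capture and the combinatorial matching variant, neither of which is sketched here.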
Surfing with Sound: An Ethnography of the Art of No-Input Mixing: Starting to Understand Risk, Control and Feedback in Musical Performance
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion. Pub Date: 2018-06-18. DOI: 10.1145/3243274.3243289
A. Chamberlain
Abstract: The idea of no-input mixing may at first appear difficult to understand (after all, there is no input), yet artists, performers, and sound designers have used a variety of approaches to such feedback systems to create music. This paper uses ethnographic approaches to begin to understand the methods that people employ when using no-input systems, and in so doing tries to make the invisible visible. In unpacking some of these techniques we are able to render what at first appears to be a random and autonomous set of sounds as a set of audio features that are controlled, created, and manipulated by a given performer. This is particularly interesting for researchers involved in the design of new feedback-based instruments, human-computer interaction, and aleatoric compositional software.
Citations: 2