Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion: Latest Publications

Subjective Evaluation of a Speech Emotion Recognition Interaction Framework
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion. Pub Date: 2018-09-12. DOI: 10.1145/3243274.3243294
N. Vryzas, María Matsiola, Rigas Kotsakis, Charalampos A. Dimoulas, George M. Kalliris
{"title":"Subjective Evaluation of a Speech Emotion Recognition Interaction Framework","authors":"N. Vryzas, María Matsiola, Rigas Kotsakis, Charalampos A. Dimoulas, George M. Kalliris","doi":"10.1145/3243274.3243294","DOIUrl":"https://doi.org/10.1145/3243274.3243294","url":null,"abstract":"In the current work, a conducted subjective evaluation of three basic components of a framework for applied Speech Emotion Recognition (SER) for theatrical performance and social media communication and interaction is presented. The multidisciplinary survey group used for the evaluation is consisted of participants with Theatrical and Performance Arts background, as well as Journalism and Mass Communications Studies. Initially, a publically available database of emotional speech utterances, Acted Emotional Speech Dynamic Database (AESDD) is evaluated. We examine the degree of agreement between the perceived emotion by the participants and the intended expressed emotion in the AESDD recordings. Furthermore, the participants are asked to choose between different coloured lighting of certain scenes captured on video. Correlations between the emotional content of the scenes and selected colors are observed and discussed. Finally, a prototype application for SER and multimodal speech emotion data gathering is evaluated in terms of Usefulness, Ease of Use, Ease of Learning and Satisfaction.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"344 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123355070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion
Pub Date: 2018-09-12. DOI: 10.1145/3243274
Citations: 3
On Transformations between Paradigms in Audio Programming
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion. Pub Date: 2018-09-12. DOI: 10.1145/3243274.3243298
R. Kraemer, Cornelius Pöpel
{"title":"On Transformations between Paradigms in Audio Programming","authors":"R. Kraemer, Cornelius Pöpel","doi":"10.1145/3243274.3243298","DOIUrl":"https://doi.org/10.1145/3243274.3243298","url":null,"abstract":"The research on paradigms in audio and music programming is an ongoing endeavor. However, although new audio programming paradigms have been created, already established paradigms did prevail and dominate major music production systems. Our research aims at the question, how programming paradigms and music production interacts. We describe the implementation process of an imperative algorithm calculating the greatest common divisor (gcd) in Pure Data and exemplify common problems of transformational processes between an imperative paradigm and a patch-paradigm. Having a closer look at related problems in research on programming paradigms in general, we raise the question of how constraints and boundaries of paradigms play a role in the design process of a program. With the deliberation on selected papers within the context of computer science, we give insight into different views of how the process of programming can be thought and how certain domains of application demand a specific paradigm.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"188 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114855520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
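As a point of reference for the imperative formulation that the abstract mentions, the following is a minimal Python sketch of the iterative Euclidean gcd algorithm. It illustrates only the textbook imperative algorithm the authors describe porting to Pure Data's patch paradigm; it is not the authors' Pd patch or any code from the paper.

```python
# Minimal imperative sketch of the greatest common divisor (Euclidean algorithm).
# Illustrative only; not the authors' Pure Data implementation.

def gcd(a: int, b: int) -> int:
    """Iterative Euclid: repeatedly replace (a, b) with (b, a mod b) until b is 0."""
    a, b = abs(a), abs(b)
    while b != 0:
        a, b = b, a % b
    return a

if __name__ == "__main__":
    print(gcd(48, 18))  # -> 6
```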
Re-Thinking Immersive Technologies for Audiences of the Future
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion. Pub Date: 2018-09-12. DOI: 10.1145/3243274.3275379
A. Chamberlain, S. Benford, A. Dix
{"title":"Re-Thinking Immersive Technologies for Audiences of the Future","authors":"A. Chamberlain, S. Benford, A. Dix","doi":"10.1145/3243274.3275379","DOIUrl":"https://doi.org/10.1145/3243274.3275379","url":null,"abstract":"This note introduces the notion of immersive technologies, accompanies a presentation and by starting to think about the nature of such systems we develop a position that questions existing preconceptions of immersive technologies. In order to accomplish this, we take a series of technologies that we have developed at the Mixed Reality Lab and present a vignette based on each of these technologies in order to stimulate debate and discussion at the workshop. Each of these technologies has its own particular qualities and are ideal for 'speculative' approaches to designing interactive possibilities. This short paper also starts to examine how qualitative approaches such as autoethnography can be used to understand and unpack our interaction and feelings about these technologies.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130143578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Evolving in-game mood-expressive music with MetaCompose
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion. Pub Date: 2018-09-12. DOI: 10.1145/3243274.3243292
Marco Scirea, Peter W. Eklund, J. Togelius, S. Risi
{"title":"Evolving in-game mood-expressive music with MetaCompose","authors":"Marco Scirea, Peter W. Eklund, J. Togelius, S. Risi","doi":"10.1145/3243274.3243292","DOIUrl":"https://doi.org/10.1145/3243274.3243292","url":null,"abstract":"MetaCompose is a music generator based on a hybrid evolutionary technique that combines FI-2POP and multi-objective optimization. In this paper we employ the MetaCompose music generator to create music in real-time that expresses different mood-states in a game-playing environment (Checkers). In particular, this paper focuses on determining if differences in player experience can be observed when: (i) using affective-dynamic music compared to static music, and (ii) the music supports the game's internal narrative/state. Participants were tasked to play two games of Checkers while listening to two (out of three) different set-ups of game-related generated music. The possible set-ups were: static expression, consistent affective expression, and random affective expression. During game-play players wore a E4 Wristband, allowing various physiological measures to be recorded such as blood volume pulse (BVP) and electromyographic activity (EDA). The data collected confirms a hypothesis based on three out of four criteria (engagement, music quality, coherency with game excitement, and coherency with performance) that players prefer dynamic affective music when asked to reflect on the current game-state. In the future this system could allow designers/composers to easily create affective and dynamic soundtracks for interactive applications.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"165 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126746122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 11
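For readers unfamiliar with the constraint-handling technique named in the abstract, here is a minimal, simplified Python sketch of a Feasible-Infeasible Two-Population (FI-2POP) evolutionary loop (mutation only, no crossover). The genome, fitness function, and feasibility constraint are placeholders invented for illustration; they do not reflect MetaCompose's actual music representation or objectives.

```python
import random

# Simplified FI-2POP sketch: each generation the population is split into
# feasible individuals (selected on the objective) and infeasible individuals
# (selected on minimizing constraint violation). Placeholders only.

GENOME_LEN, POP_SIZE, GENERATIONS = 8, 20, 50

def random_genome():
    return [random.randint(0, 11) for _ in range(GENOME_LEN)]  # e.g. pitch classes

def constraint_violation(g):
    # Placeholder feasibility rule: penalize repeated adjacent values.
    return sum(1 for a, b in zip(g, g[1:]) if a == b)

def fitness(g):
    # Placeholder objective: reward variety (number of distinct values).
    return len(set(g))

def mutate(g):
    g = g[:]
    g[random.randrange(len(g))] = random.randint(0, 11)
    return g

def evolve():
    population = [random_genome() for _ in range(2 * POP_SIZE)]
    for _ in range(GENERATIONS):
        feasible = [g for g in population if constraint_violation(g) == 0]
        infeasible = [g for g in population if constraint_violation(g) > 0]
        feasible.sort(key=fitness, reverse=True)   # best objective first
        infeasible.sort(key=constraint_violation)  # least infeasible first
        parents = feasible[:POP_SIZE] + infeasible[:POP_SIZE]
        # Offspring are re-classified as feasible/infeasible at the next iteration.
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    feasible = [g for g in population if constraint_violation(g) == 0]
    return max(feasible, key=fitness) if feasible else min(population, key=constraint_violation)

if __name__ == "__main__":
    print(evolve())
```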
Auditory Masking and the Precedence Effect in Studies of Musical Timekeeping
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion. Pub Date: 2018-09-12. DOI: 10.1145/3243274.3243312
Steffan Owens, Stuart Cunningham
{"title":"Auditory Masking and the Precedence Effect in Studies of Musical Timekeeping","authors":"Steffan Owens, Stuart Cunningham","doi":"10.1145/3243274.3243312","DOIUrl":"https://doi.org/10.1145/3243274.3243312","url":null,"abstract":"Musical timekeeping is an important and evolving area of research with applications in a variety of music education and performance situations. Studies in this Iield are of ten concerned with being able to measure the accuracy or consistency of human participants, for whatever purpose is being investigated. Our initial explorations suggest that little has been done to consider the role that auditory masking, speciIically the precedence effect, plays in the study of human timekeeping tasks. In this paper, we highlight the importance of integrating masking into studies of timekeeping and suggest areas for discussion and future research, to address shortfalls in the literature.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129043422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
The Design of Future Music Technologies: 'Sounding Out' AI, Immersive Experiences & Brain Controlled Interfaces
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion. Pub Date: 2018-09-12. DOI: 10.1145/3243274.3243314
A. Chamberlain, Mads Bødker, Maria Kallionpää, Richard Ramchurn, D. D. Roure, S. Benford, A. Dix
{"title":"The Design of Future Music Technologies: 'Sounding Out' AI, Immersive Experiences & Brain Controlled Interfaces","authors":"A. Chamberlain, Mads Bødker, Maria Kallionpää, Richard Ramchurn, D. D. Roure, S. Benford, A. Dix","doi":"10.1145/3243274.3243314","DOIUrl":"https://doi.org/10.1145/3243274.3243314","url":null,"abstract":"This workshop examines the interplay between people, musical instruments, performance and technology. Now, more than ever technology is enabling us to augment the body, develop new ways to play and perform, and augment existing instruments that can span the physical and digital realms. By bringing together performers, artists, designers and researchers we aim to develop new understandings how we might design new performance technologies. Participants will be actively encouraged to participant, engaging with other workshop attendees to explore concepts such as; immersion, augmentation, emotion, physicality, data, improvisation, provenance, curation, context and temporality, and the ways that these might be employed and unpacked in respect to both performing and understanding interaction with new performance-based technologies that relate to the core themes of immersion and emotion.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131156717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Smart Mandolin: autobiographical design, implementation, use cases, and lessons learned
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion. Pub Date: 2018-09-12. DOI: 10.1145/3243274.3243280
L. Turchet
{"title":"Smart Mandolin: autobiographical design, implementation, use cases, and lessons learned","authors":"L. Turchet","doi":"10.1145/3243274.3243280","DOIUrl":"https://doi.org/10.1145/3243274.3243280","url":null,"abstract":"This paper presents the Smart Mandolin, an exemplar of the family of the so-called smart instruments. Developed according to the paradigms of autobiographical design, it consists of a conventional acoustic mandolin enhanced with different types of sensors, a microphone, a loudspeaker, wireless connectivity to both local networks and the Internet, and a low-latency audio processing board. Various implemented use cases are presented, which leverage the smart qualities of the instrument. These include the programming of the instrument via applications for smartphones and desktop computer, as well as the wireless control of devices enabling multimodal performances such as screen projecting visuals, smartphones, and tactile devices used by the audience. The paper concludes with an evaluation conducted by the author himself after extensive use, which pinpointed pros and cons of the instrument and provided a comparison with the Hyper-Mandolin, an instance of augmented instruments previously developed by the author.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114419582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 28
Designing Musical Soundtracks for Brain Controlled Interface (BCI) Systems
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion. Pub Date: 2018-09-12. DOI: 10.1145/3243274.3243288
Richard Ramchurn, A. Chamberlain, S. Benford
{"title":"Designing Musical Soundtracks for Brain Controlled Interface (BCI) Systems","authors":"Richard Ramchurn, A. Chamberlain, S. Benford","doi":"10.1145/3243274.3243288","DOIUrl":"https://doi.org/10.1145/3243274.3243288","url":null,"abstract":"This paper presents research based on the creation and development of two Brain Controlled Interface (BCI) based film experiences. The focus of this research is primarily on the audio in the films; the way that the overall experiences were designed, the ways in which the soundtracks were specifically developed for the experiences and the ways in which the audience perceived the use of the soundtrack in the film. Unlike traditional soundtracks the adaptive nature of the audio means that there are multiple parts that can be interacted with and combined at specific moments. The design of such adaptive audio systems is something that is yet to be fully understood and this paper goes someway to presenting our initial findings. We think that this research will be of interest and excite the Audio-HCI community.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126525666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
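To make the idea of soundtrack parts that can be combined at specific moments concrete, here is a hypothetical Python sketch that crossfades pre-composed stems from a single normalized control signal (for instance, one derived from a consumer BCI headset). The stem names, gains, and mapping are invented for illustration and are not the authors' system.

```python
from dataclasses import dataclass

# Hypothetical adaptive-soundtrack sketch: each stem's gain is interpolated
# from a normalized control signal in [0, 1]. Names and values are invented.

@dataclass
class Stem:
    name: str
    low_gain: float   # gain when the control signal is 0.0
    high_gain: float  # gain when the control signal is 1.0

STEMS = [
    Stem("calm_pad", 1.0, 0.2),
    Stem("tense_strings", 0.0, 0.9),
    Stem("percussion", 0.1, 1.0),
]

def mix_gains(control: float) -> dict:
    """Linearly interpolate every stem's gain from the control signal."""
    control = max(0.0, min(1.0, control))
    return {s.name: s.low_gain + control * (s.high_gain - s.low_gain) for s in STEMS}

if __name__ == "__main__":
    for c in (0.0, 0.5, 1.0):
        print(c, mix_gains(c))
```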
A Prototype Mixer to Improve Cross-Modal Attention During Audio Mixing
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion. Pub Date: 2018-09-12. DOI: 10.1145/3243274.3243290
Josh Mycroft, T. Stockman, J. Reiss
{"title":"A Prototype Mixer to Improve Cross-Modal Attention During Audio Mixing","authors":"Josh Mycroft, T. Stockman, J. Reiss","doi":"10.1145/3243274.3243290","DOIUrl":"https://doi.org/10.1145/3243274.3243290","url":null,"abstract":"The Channel Strip mixer found on physical mixing desks is the primary Graphical User Interface design for most Digital Audio Workstations. While this metaphor provides transferable knowledge from hardware, there may be a risk that it does not always translate well into screen-based mixers. For example, the need to search through several windows of mix information may inhibit the engagement and 'flow' of the mixing process, and the subsequent screen management required to access the mixer across multiple windows can place high cognitive load on working memory and overload the limited capacity of the visual mechanism. This paper trials an eight-channel proto-type mixer which uses a novel approach to the mixer design to address these issues. The mixer uses an overview of the visual interface and employs multivariate data objects for channel parameters which can be filtered by the user. Our results suggest that this design, by reducing both the complexity of visual search and the amount of visual feedback on the screen at any one time, leads to improved results in terms of visual search, critical listening and mixing workflow.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116625006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3