Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion: Latest Publications

Tamaglitchi
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion | Pub Date: 2018-09-12 | DOI: 10.1145/3243274.3243275
Karen M. Collins, R. Dockwray
Abstract: In this paper, we present an overview of the current state of research in anthropomorphism as it relates specifically to product design, and then present a short pilot study of non-verbal sound's influence on anthropomorphism, through two short experiments, one qualitative and one quantitative. These experiments use an online variation of a virtual pet similar to the Tamagotchi, which we have called "Tamaglitchi". Results show that non-verbal sound increased the tendency to anthropomorphize a virtual pet.
Citations: 4
Investigating Concurrent Speech-based Designs for Information Communication
Pub Date: 2018-09-12 | DOI: 10.1145/3243274.3243284
M. A. U. Fazal, Sam Ferguson, Andrew Johnston
Abstract: Speech-based information is usually communicated to users in a sequential manner, but users are capable of obtaining information from multiple voices concurrently. This fact implies that the sequential approach may under-utilize human perception capabilities and prevent users from performing optimally in an immersive environment. This paper reports on an experiment that aimed to test different speech-based designs for concurrent information communication. Two audio streams from two types of content were played concurrently to 34 users, in either continuous or intermittent form, with the manipulation of a variety of spatial configurations (i.e. Diotic, Diotic-Monotic, and Dichotic). In total, 12 concurrent speech-based design configurations were tested with each user. The results showed that concurrent speech-based information designs involving intermittent form and spatial differences between information streams produce comprehensibility equal to the level achieved in sequential information communication.
Citations: 2
The Perceived Emotion of Isolated Synthetic Audio: The EmoSynth Dataset and Results
Pub Date: 2018-09-12 | DOI: 10.1145/3243274.3243277
Alice Baird, Emilia Parada-Cabaleiro, C. Fraser, Simone Hantke, Björn Schuller
Abstract: The ability of sound to enhance human wellbeing has been known since ancient civilisations, and methods can be found today across domains of health and within a variety of cultures. There is an abundance of sound-based methods which show benefits for both physical and mental states of wellbeing. Current methods vary from low-frequency vibrations to high-frequency distractions, and from drone-like sustain to rhythmical pulsing, with limited knowledge of a listener's psycho-physical perception of this. In this regard, for the presented study 40 listeners were asked to evaluate the perceived emotional dimensions of Valence and Arousal from a dataset of 144 isolated synthetic periodic waveforms. Results show that Arousal correlates moderately with fundamental frequency, and that the sine waveform is perceived as significantly different from square and sawtooth waveforms when evaluating perceived Arousal. The general results suggest that isolated synthetic audio can be modelled as a means of evoking affective states of emotion.
Citations: 7
Investigating metaphors of musical involvement: Immersion, flow, interaction and incorporation
Pub Date: 2018-09-12 | DOI: 10.1145/3243274.3243293
Oskari Koskela, Kai Tuuri
Abstract: The concept of immersion, despite being relatively unknown within music research, presents a potentially productive way of understanding the well-acknowledged phenomenon of "being drawn into music". This paper 1) discusses immersion as a metaphor for conceptualizing musical involvement by drawing on research into video games and virtual reality, and 2) aims to clarify the metaphor of immersion by utilizing the concept of image schema to analyze it in relation to the alternative metaphors of flow, interaction and incorporation. The theoretical stance of the paper is based on the paradigm of enactive cognitive science, which stresses the bodily, constructive and interactive nature of experience. In conclusion, the paper suggests several ways to consider the differences between the chosen metaphors based on their image-schematic structures. In line with the enactive approach, it is suggested that the experience of immersion should be considered a constructive activity of using music, thereby highlighting the view of experience as a skillful activity. All in all, the paper aims to offer one kind of approach for considering different experiences with media and to stress the role of metaphors in how we understand experiences.
Citations: 2
Jam with Jamendo: Querying a Large Music Collection by Chords from a Learner's Perspective
Pub Date: 2018-09-12 | DOI: 10.1145/3243274.3243291
Anna Xambó, J. Pauwels, Gerard Roma, M. Barthet, György Fazekas
Abstract: Nowadays, a number of online music databases are available under Creative Commons licenses (e.g. Jamendo, ccMixter). Typically, it is possible to navigate and play their content through search interfaces based on metadata and file-wide tags. However, because this music is largely unknown, additional methods of discovery need to be explored. In this paper, we focus on a use case for music learners. We present a web app prototype that allows novice and expert musicians to discover songs in Jamendo's music collection by specifying a set of chords. Its purpose is to provide a more pleasurable practice experience by suggesting novel songs to play along with, instead of practising isolated chords or the same song over and over again. To handle less chord-oriented songs and transcription errors that inevitably arise from the automatic chord estimation used to populate the database, query results are ranked according to a computational confidence measure. To assess the validity of the confidence-ranked system, we conducted a small pilot user study. Drawing on those preliminary findings, we identify some design recommendations for future applications of music learning and music search engines focusing on the user experience when interacting with sound.
Citations: 8
Activating Archives: Combining Elements of Japanese Culture to Create a New and Playful Musical Experience
Pub Date: 2018-09-12 | DOI: 10.1145/3243274.3243295
Oliver Halstead, Jack Davenport, Ruben Dejaegere
Abstract: This paper discusses the 'Activating Archives' project, a community-based project that focuses on the design and production of novel methods of musical playback intended to expose members of the general public to rare and traditional instruments from around the world in an interactive, tactile and playful manner. Additionally, this paper discusses the design of the first Activating Archives interface, the Shogi Board: a novel method of composing music with sampled Japanese instruments, primarily aimed at members of the general public with no prior musical education.
Citations: 0
High-Level Analysis of Audio Features for Identifying Emotional Valence in Human Singing
Pub Date: 2018-09-12 | DOI: 10.1145/3243274.3243313
Stuart Cunningham, Jonathan Weinel, R. Picking
Abstract: Emotional analysis continues to be a topic that receives much attention in the audio and music community. The potential to link together human affective state and the emotional content or intention of musical audio has a variety of application areas, in fields such as improving the user experience of digital music libraries and music therapy. Less work has been directed into the emotional analysis of human a cappella singing. Recently, the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) was released, which includes emotionally validated human singing samples. In this work, we apply established audio analysis features to determine whether these can be used to detect underlying emotional valence in human singing. Results indicate that the short-term audio features of energy, spectral centroid (mean), spectral centroid (spread), spectral entropy, spectral flux, spectral rolloff, and fundamental frequency can be useful predictors of emotion, although their efficacy is not consistent across positive and negative emotions.
Citations: 7
Under Construction: Contemporary Opera in the Crossroads Between New Aesthetics, Techniques, and Technologies
Pub Date: 2018-09-12 | DOI: 10.1145/3243274.3243306
Maria Kallionpää, A. Chamberlain, Hans-Peter Gasselseder
Abstract: Despite its long history, opera as an art form is constantly evolving. Composers have never lost their fascination with it and keep exploring innovative aesthetics, techniques, and modes of expression. New technologies, such as Virtual Reality (VR), Robotics and Artificial Intelligence (AI), are steadily having an impact upon the world of opera. The evolving use of performance-based software such as Ableton Live and Max/MSP has created new and exciting compositional techniques that intertwine theatrical and musical performance. This paper presents some initial work on the development of an opera using such technologies that is being composed by Kallionpää and Chamberlain. Furthermore, it presents two composition case studies by Kallionpää: "She" (2017) and the puppet opera "Croak" (2018), as well as their documentation within the world's first 360° 3D VR recordings with full spatial audio in third-order Ambisonics and the application of an unmixing paradigm for focusing and isolating individual voices.
Citations: 4
Muscle activity response of the audience during an experimental music performance
Pub Date: 2018-09-12 | DOI: 10.1145/3243274.3243278
V. E. G. Sánchez, Agata Zelechowska, A. Jensenius
Abstract: This exploratory study investigates the muscular activity characteristics of a group of audience members during an experimental music performance. The study was designed to be as ecologically valid as possible, collecting data in a concert venue and making use of low-invasive measurement techniques. Muscle activity (EMG) from the forearms of 8 participants suggested that sitting in a group could indicate a level of group engagement, while comparatively greater muscular activity from a participant sitting at a close distance to the stage suggests performance-induced bodily responses. The self-reported measures rendered little evidence supporting the links between muscular activity and live music exposure, although a larger sample size and a wider range of music styles need to be included in future studies to provide conclusive results.
Citations: 1
The London Soundmap: Integrating sonic interaction design in the urban realm
Pub Date: 2018-09-12 | DOI: 10.1145/3243274.3243302
S. Adhitya, D. Scott
Abstract: This paper describes the development, implementation and impact of the London Soundmap, an interactive sound installation featuring London's soundscape, which was exhibited in Regent Street, central London, in 2016. We use this interactive urban installation as a case study to explore the opportunities and constraints associated with integrating sonic feedback into the existing urban realm, including the various administrative, design, technical and social challenges. First, we introduce the various stakeholders involved in the conception and implementation of the Soundmap, particularly Transport for London, who utilised this intervention as a way of better understanding how sonic interaction design could improve the design of their urban infrastructure. Then, we discuss the range of disciplines required to create a more immersive, multisensorial experience, from urban design and the visual arts to electronic engineering and sound design, and the resulting design and technical outputs. Finally, we evaluate the range of interactions and reactions of the various users from data collected from interviews and video recordings. The overall response suggests that the integration of sonic interaction design in the public urban realm has a number of benefits, including increased awareness of the urban soundscape, increased social interaction, and a greater sense of community and place.
Citations: 9