{"title":"Schrödinger’s box: an artifact to study the limits of plausibility in auditory augmentations","authors":"Marian Weger, Iason Svoronos-Kanavas, Robert Höldrich","doi":"10.1145/3561212.3561222","DOIUrl":"https://doi.org/10.1145/3561212.3561222","url":null,"abstract":"For every physical interaction with our environment, we have some expectations concerning the resulting sound. As these expectations are quite rough, the auditory feedback can be modulated to convey additional information, without restricting the object’s original purpose. Such auditory augmentation is calm and unobtrusive as long it stays plausible with respect to the performed action. The plausibility range defines a hard limit for the information capacity of the auditory display. In order to maximize the information capacity of auditory augmentations, an estimate of the plausibility range of augmented auditory feedback is required. Here we present Schrödinger’s box, a mobile hardware- and software-platform that is designed for exploring the limits of plausibility of auditory feedback for unknown sounding objects. It renders augmented auditory feedback for its one and only affordance: striking it with a mallet. While hiding all electronics from the users, it meets the extreme requirements of latency that are necessary so that the original auditory feedback is effectively masked by the synthetic auditory feedback. With Schrödinger’s box, we now have a valuable research tool, not only for optimizing auditory augmentations, but also for investigating the plausibility of auditory feedback in general.","PeriodicalId":379319,"journal":{"name":"Proceedings of the 17th International Audio Mostly Conference","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114711559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparing Musicians and Non-musicians’ Expectations in Music and Vision","authors":"Kat R. Agres, Ting Yuan Tay, M. Pearce","doi":"10.1145/3561212.3561251","DOIUrl":"https://doi.org/10.1145/3561212.3561251","url":null,"abstract":"The role of expectation in music has been of research interest for decades. Expectation mechanisms have also received considerable attention in vision, due in part to the widespread interest in predictive coding. Past research has uncovered different types of expectations that may be formed when exposed to a sequential stimulus, such as schematic expectation (based on general knowledge) and dynamic expectation (based on properties within the current stimulus). Yet to our knowledge, a direct comparison of the relative contribution of these types of expectation has not been performed within the same subjects through careful manipulation of stimuli, nor has this comparison been made across the musical and visual domains. This listener study aims to uncover the relative influence of dynamic and schematic expectations in musical and visual stimuli, and investigate the role of expertise in forming expectations by testing both musicians and non-musicians. Our findings suggest that musicians are indeed more sensitive than non-musicians to the dynamic and schematic properties of musical stimuli, and they generally produce a wider range of expectedness ratings than non-musicians. Interestingly, musicians also interpret schematic information in the visual condition differently than non-musicians, suggesting that musical training may have influenced their expectation mechanisms more generally.","PeriodicalId":379319,"journal":{"name":"Proceedings of the 17th International Audio Mostly Conference","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125623553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"In Pursuit of Measuring Pre-reflective Music Listening Experiences","authors":"Kai Tuuri, Oskari Koskela, Jukka Vahlo","doi":"10.1145/3561212.3561220","DOIUrl":"https://doi.org/10.1145/3561212.3561220","url":null,"abstract":"While the diverse effects and uses of music and sound have been extensively documented within music psychology, relatively little attention has been paid to the process and experience of listening itself. Previous literature have, however, considered different ways of attending to sounds via the concept of listening modes, which highlights the different ways and strategies through which listeners intentionally orientate themselves to the activity of listening and creating the experiential meaning of the sound. In this paper, we continue on these lines by focusing on the very basic attentional dispositions for listening that often remain unconscious. As opposed to more deliberate and intentional listening strategies, this pre-reflective domain of listening is characterised by its receptive quality, that is, being attuned to sound in a pre-conceptual and pre-cognitive manner without cognitive appraisal of its meaning. Based on previous theoretisations and following ideas from embodied and enactive cognition, we re-conceptualise pre-reflective listening through five modes of listening. Moreover, in order to bring these theoretical considerations into dialogue with empirical research we also operationalise the suggested listening modes into prototyped survey items and discuss methodological issues with the aim of building a groundwork for developing psychometric measures of pre-reflective music listening experiences.","PeriodicalId":379319,"journal":{"name":"Proceedings of the 17th International Audio Mostly Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130799560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ImproScales: a self-tutoring web system for using scales in improvisations","authors":"Thomas Borgogno, L. Turchet","doi":"10.1145/3561212.3561229","DOIUrl":"https://doi.org/10.1145/3561212.3561229","url":null,"abstract":"This paper describes ImproScales, a Web Audio application devised to support musicians in the process of learning to use scales during improvisation. The web application detects in real-time the notes played by an individual instrument and assesses whether they belong to the scale, and as a result provides statistics about the number of errors made. Two use cases were implemented following a design process conducted with interviews with musicians: scale practicing with the sole instrument and scale practicing with accompanying music retrieved from YouTube. The first use case is primarily intended for those musicians who do not know well the musical scales and want to learn them properly, or for those who want to practice without a backing track, while the second is meant for people who already know the scales and want to improvise over a song or instrumental music. We report the results of a user study conducted with twelve intermediate musicians. Overall, results show that the application was deemed effectively capable of enhancing musicians’ improvisation skills. A critical reflection on the results achieved is reported along with the analysis of weaknesses and limits of the web application, as well as some proposals for future developments are provided.","PeriodicalId":379319,"journal":{"name":"Proceedings of the 17th International Audio Mostly Conference","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130031073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A cross-modal UX design pedagogy for industrial design","authors":"Lucas Temor, Zainab Husain, P. Coppin","doi":"10.1145/3561212.3561241","DOIUrl":"https://doi.org/10.1145/3561212.3561241","url":null,"abstract":"Everyday experience is multi-sensory, and user experience (UX) design aims to extend this to interactions with products, services, and designed worlds. However, tools and pedagogies for UX are overwhelmingly visual, whereas human-rights-based accessibility legislation mandates the inclusion of diverse peoples, including blind and partially sighted individuals. Coupling auditory and haptic UX techniques from human-computer interaction with industrial design’s (ID) cross-modal tradition of prototyping physical products fostered our novel cross-modal UX course for second-year ID undergraduates. Affordance-based theories of perception-action and Gestalt principles of perceptual organization were used to inform design in auditory, tactile, and visual sensory modalities situated in a novel pedagogical framework. Each week theoretical models were presented alongside hands-on workshops using the BBC micro:bit, developing computational literacy through cross-modal physical prototyping. Student projects demonstrate an understanding of theory and practice and include auditory and tactile interfaces.","PeriodicalId":379319,"journal":{"name":"Proceedings of the 17th International Audio Mostly Conference","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127840656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contextual and Informational Aspects of Sound Zone Visualisations","authors":"Stine S. Johansen, R. M. Jacobsen, M. Skov, J. Kjeldskov","doi":"10.1145/3561212.3561240","DOIUrl":"https://doi.org/10.1145/3561212.3561240","url":null,"abstract":"Sound zone systems introduce new properties that contradict users’ prior experiences with sound. This includes unique spatial properties that can be difficult to comprehend and thereby control. Visualisations are a potentially useful tool to support users in controlling sound zones. However, a number of different approaches can be taken for designing visualisations. In this paper, we unfold a framework of six design dimensions that are essential for visualisation of sound zones. These six dimensions are derived from a substantial and diverse range of research activities, including two design sessions and user studies of four visualisation prototypes. We exemplify the design framework through the four visualisation systems in the six dimensions and propose that it can be utilised as a tool for future research and design of sound zone visualisations.","PeriodicalId":379319,"journal":{"name":"Proceedings of the 17th International Audio Mostly Conference","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126304609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Illustrative Design Case of Systemic Sonification","authors":"Mariana Seiça, Licínio Roque, Pedro Martins, F. A. Cardoso","doi":"10.1145/3561212.3561224","DOIUrl":"https://doi.org/10.1145/3561212.3561224","url":null,"abstract":"The experience of sonification as a living system is a recent proposal for designing audio-centred communication. Drawing concepts from embodied perception and phenomenology of interaction, the systemic sonification approach has been characterised as a dynamic, evolving auditory community of sound beings that act and are acted upon by humans through interactive exchanges. In this study, we take on this theoretical proposal to explore a tentative design and develop a proof-of-concept of how this approach can be realised in practice. Departing from a previous sonification exercise of retail consumption data and adopting the proposed foundations of systemism, we develop an illustrative design case for the system’s composition, its environment, its structure, and the mechanisms which translate the behaviour of the evolving system to human interaction. While discussing the particular results, we debate the generativity of such a perspective towards envisioning alternative design spaces and novel interactive experiences.","PeriodicalId":379319,"journal":{"name":"Proceedings of the 17th International Audio Mostly Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126010464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Manipulating Foley Footsteps and Character Realism to Influence Audience Perceptions of a 3D Animated Walk Cycle","authors":"Stuart Cunningham, I. Mcgregor","doi":"10.1145/3561212.3561221","DOIUrl":"https://doi.org/10.1145/3561212.3561221","url":null,"abstract":"Foley artistry is an essential part of the audio post-production process for film, television, games, and animation. By extension, it is as crucial in emergent media such as virtual, mixed, and augmented reality. Footsteps are a core activity that a Foley artist must undertake and convey information about the characters and environment presented on-screen. This study sought to identify if characteristics of age, gender, weight, health, and confidence could be conveyed, using sounds created by a professional Foley artist, in three different 3D humanoid models, following a single walk cycle. An experiment was conducted with human participants (n=100) and found that Foley manipulations could convey all the intended characteristics with varying degrees of contextual success. It was shown that the abstract models were capable of communicating characteristics of age, gender, and weight. The findings are relevant to researchers and practitioners in linear and interactive media and demonstrate mechanisms by which Foley can contribute useful information and concepts about on-screen characters.","PeriodicalId":379319,"journal":{"name":"Proceedings of the 17th International Audio Mostly Conference","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116077469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Graphic-to-Sound Sonification for Visual and Auditory Communication Design","authors":"Woohun Joo","doi":"10.1145/3561212.3561214","DOIUrl":"https://doi.org/10.1145/3561212.3561214","url":null,"abstract":"I designed two sonification platforms designed for visual/auditory communication design studies and audiovisual art. The purpose of this study was to examine whether test participants can associate visuals and sound without any prior training and sonification approaches in this paper can be utilized as an interactive musical expression. The platform for the communication design study was developed first and the artistic audiovisual platform with the same sonification methodology followed next. In this paper, I introduce the (former) sonification platform designed for the image-to-sound association studies, their sonification methodologies, and present the study results. The object-oriented sonification method that I newly developed describes each shape sonically. The five image-sound association studies were conducted to see whether people can successfully associate sounds and fundamental shapes (i.e., a circle, a triangle, a square, lines, curves, and other custom shapes). Regardless of age and educational background, the correct answer rate was high.","PeriodicalId":379319,"journal":{"name":"Proceedings of the 17th International Audio Mostly Conference","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124890741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sounding Obstacles for Social Distance Sonification","authors":"T. Senan, B. Hengeveld, Berry Eggen","doi":"10.1145/3561212.3561239","DOIUrl":"https://doi.org/10.1145/3561212.3561239","url":null,"abstract":"This article reports the results of an experiment (N = 10) that employs continuous auditory feedback to influence participants’ routing choices while walking between two points by sonifying their interactions with invisible obstacles. A relative distance parameter, proximity, is defined and mapped simultaneously to perceived loudness and amplitude modulation frequencies of sine tones. The proximity parameter is divided into three sections: slow modulation, border zone, and fast modulation. The slow and fast modulation sections generate a monotonic relationship between proximity values and the resulting psychoacoustic parameters: fluctuation strength and roughness. A social distance sonification case study in a laboratory experiment evaluated the effectiveness of the generated hearing sensations and explored participants’ experiences through a semi-structured interview. The quantitative results show that the non-spatial, psychoacoustically-inspired sonification mappings successfully influenced participants’ routing choices during the experiental task of walking. On the other hand, the semi-structured interview revealed that participants ascribed a pleasantness/annoyance attribute to presented sounds, which was not intended.","PeriodicalId":379319,"journal":{"name":"Proceedings of the 17th International Audio Mostly Conference","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115680562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}