{"title":"Acoustic Vehicle Alerting Systems: Will they affect the acceptance of electric vehicles?","authors":"Johan Fagerlönn, Anna Sirkka, Stefan Lindberg, R. Johnsson","doi":"10.1145/3243274.3243305","DOIUrl":"https://doi.org/10.1145/3243274.3243305","url":null,"abstract":"Vehicles powered by electric motors can be very quiet at low speeds, which can lead to new road safety issues. The European Parliament has decided that quiet vehicles should be equipped with an Acoustic Vehicle Alerting System (AVAS). The main purpose of the studies presented in this paper was to investigate whether future requirements could affect people's acceptance of electric vehicles (EVs). The strategy in the first study was to create an immersive, simulated auditory environment where people could experience the sounds of future traffic situations. The second study was conducted with a car on a test track. The results suggest that the requirements are not likely to have a major negative effect on people's experience of EVs or willingness to buy an EV. However, the sounds can have a certain negative effect on emotional response and acceptance, which should be considered by manufacturers. The results of the test track study indicate that unprotected road users may appreciate the function of an AVAS sound. The work did not reveal any large differences between AVAS sounds. But in the simulated environment, sounds designed to resemble an internal combustion engine tended to receive more positive scores.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133899514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Immersive Approach to 3D-Spatialized Music Composition: Tools and Pilot Survey","authors":"D. Ledoux, R. Normandeau","doi":"10.1145/3243274.3243300","DOIUrl":"https://doi.org/10.1145/3243274.3243300","url":null,"abstract":"Open-sourced 3D sound spatialisation software tools, developed by the Groupe de Recherche en Immersion Spatiale (GRIS) at Université de Montréal, were used as an integrated part of two music compositions, in an immersive, object-based audio approach. A preliminary listening experience has been conducted on two separate groups of students, in a 32.2 loudspeakers dome, as a pilot for a case study that aims to get a better sense of the immersive affect of complex spatialized compositions through the listener's reception behaviors. Data collected from their comments on these two different 3D-spatialized musics have been analysed to extract converging expressions of immersive qualities.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134539733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Music retiler: Using NMF2D source separation for audio mosaicing","authors":"H. F. Aarabi, G. Peeters","doi":"10.1145/3243274.3243299","DOIUrl":"https://doi.org/10.1145/3243274.3243299","url":null,"abstract":"Musaicing (music mosaicing) aims at reconstructing a target music track by superimposing audio samples selected from a collection. This selection is based on their acoustic similarity to the target. The baseline technique to perform this is concatenative synthesis in which the superposition only occurs in time. Non-Negative Matrix Factorization has also been proposed for this task. In this, a target spectrogram is factorized into an activation matrix and a predefined basis matrix which represents the sample collection. The superposition therefore occurs in time and frequency. However, in both methods the samples used for the reconstruction represent isolated sources (such as bees) and remain unchanged during the musaicing (samples need to be pre-pitch-shifted). This reduces the applicability of these methods. We propose here a variation of the musaicing in which the samples used for the reconstruction are obtained by applying a NMF2D separation algorithm to a music collection (such as a collection of Reggae tracks). Using these separated samples, a second NMF2D algorithm is then used to automatically find the best transposition factors to represent the target. We performed an online perceptual experiment of our method which shows that it outperforms the NMF algorithm when the sources are polyphonic and multi-source.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123821971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Web-based Real-Time Kinect Application for Gestural Interaction with Virtual Musical Instruments","authors":"Athanasia Zlatintsi, P. Filntisis, Christos Garoufis, A. Tsiami, Kosmas Kritsis, Maximos A. Kaliakatsos-Papakostas, Aggelos Gkiokas, V. Katsouros, P. Maragos","doi":"10.1145/3243274.3243297","DOIUrl":"https://doi.org/10.1145/3243274.3243297","url":null,"abstract":"We present a web-based real-time application that enables gestural interaction with virtual instruments for musical expression. Skeletons of the users are tracked by a Kinect sensor, while the performance of the virtual instruments is accomplished using gestures inspired from their corresponding physical counterparts. The application supports the virtual performance of an air guitar and an upright bass, as well as a more abstract conductor-like performance with two instruments, while collaborative playing of two or more players is also allowed. The multimodal virtual interface of our application, which includes 3D avatars, allows users, even if not musically educated, to engage in innovative interactive musical activities, while its web-based architecture improves its accessibility and performance. The application was qualitatively evaluated by 13 users, in terms of its usability and enjoyability, among others, accomplishing high ratings and positive feedback.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124918040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Playing the Body: Making Music through Various Body Movements","authors":"Junko Ichino, Hayato Nao","doi":"10.1145/3243274.3243287","DOIUrl":"https://doi.org/10.1145/3243274.3243287","url":null,"abstract":"We explore a bodily interaction as a creative experience to support musical expression. This paper discusses an interactive system---Playing the Body---that supports the creative activity of composing music by incorporating large body movements in space. In order to encourage the user to form an overall image of the melody in the early stages of composition, the proposed system supports interaction using the whole body to generate a melody. Then, after going through a trial-and-error stage, it provides a refinement stage that encourages introspection, refining the melody to make the sound more consistent with the ideal image. This is done by supporting interaction using the hands and arms, which have a greater degree of freedom. In a pilot study, positive responses were obtained regarding the creation of a melody using the whole body. Future work includes improving the use of the hands and arms to refine the melody.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121272444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Staging sonic atmospheres as the new aesthetic work","authors":"E. Toppano, Alessandro Toppano","doi":"10.1145/3243274.3243286","DOIUrl":"https://doi.org/10.1145/3243274.3243286","url":null,"abstract":"Our primary concern in this paper is to bring attention to the promising, yet largely unexplored concept of atmosphere in sound design. Although this notion is not new, we approach it from a novel perspective i.e., New Phenomenology and New Aesthetics. Accordingly, we review some basic theoretical results in these fields and try to explore their possible application in the sonic context. In particular, the paper: i) compares the concept of sonic atmosphere with the notions of acoustic environment and soundscape by articulating salient elements that constitute each concept, ii) discusses some consequences of the above distinction with respect to the understanding of emotion and immersion, and, finally, iii) provides some initial suggestions about how to design for emotions.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"256 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115870097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Procedurally-Generated Audio for Soft-Body Animations","authors":"Feng Su, C. Joslin","doi":"10.1145/3243274.3243285","DOIUrl":"https://doi.org/10.1145/3243274.3243285","url":null,"abstract":"Procedurally-generated audio is an important method for the automatic synthesis of realistic sounds for computer animations and virtual environments. While synthesis techniques for rigid bodies have been well studied, few publications have tackled the challenges of synthesizing sounds for soft bodies. In this paper, we propose a data-driven synthesis approach to simultaneously generate audio based on certain given soft-body animations. Our method uses granular synthesis to extract a database of sound from real-world recordings and then retarget these grains of sounds based on the motion of any input animations. We demonstrate the effectiveness of this method on a variety of soft-body animations including a basketball bouncing, apple slicing, hand clapping and a jelly simulation.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130762375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Emotional Musification","authors":"Andrew Godbout, Iulius A. T. Popa, J. Boyd","doi":"10.1145/3243274.3243303","DOIUrl":"https://doi.org/10.1145/3243274.3243303","url":null,"abstract":"We present a method for emotional musification that utilizes the musical game MUSE. We take advantage of the strong links between music and emotion to represent emotions as music. While we provide a prototype for measuring emotion using facial expression and physiological signals our sonification is not dependent on this. Rather we identify states within MUSE that elicit certain emotions and map those onto the arousal and valence spatial representation of emotion. In this way our efforts are compatible with emotion detection methods which can be mapped to arousal and valence. Because MUSE is based on states and state transitions we gain the ability to transition seamlessly from one state to another as new emotions are detected thus avoiding abrupt changes between music types.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114065553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the Creation of Useful Interfaces for Music Therapists","authors":"Leya Breanna Baltaxe-Admony, Tom Hope, Kentaro Watanabe, M. Teodorescu, S. Kurniawan, Takuichi Nishimura","doi":"10.1145/3243274.3243307","DOIUrl":"https://doi.org/10.1145/3243274.3243307","url":null,"abstract":"Music therapy is utilized worldwide to connect communities, strengthen mental and physiological wellbeing, and provide new means of communication for individuals with phonological, social, language, and other communication disorders. The incorporation of technology into music therapy has many potential benefits. Existing research has been done in creating user-friendly devices for music therapy clients, but these technologies have not been utilized due to complications in use by the music therapists themselves. This paper reports the iterative prototype design of a compact and intuitive device designed in close collaboration with music therapists across the globe to promote the usefulness and usability of prototypes. The device features interchangeable interfaces for work with diverse populations. It is portable and hand-held. A device which incorporates these features does not yet exist. The outlined design specifications for this device were found using human centered design techniques and may be of significant use in designing other technologies in this field. Specifications were created throughout two design iterations and evaluations of the device. In an evaluation of the second iteration of this device it was found that 5/8 therapists wanted to incorporate it into their practices.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124818378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lovelace's Legacy: Creative Algorithmic Interventions for Live Performance","authors":"D. D. Roure, P. Willcox, A. Chamberlain","doi":"10.1145/3243274.3275380","DOIUrl":"https://doi.org/10.1145/3243274.3275380","url":null,"abstract":"We describe a series of informal exercises in which we have put algorithms in the hands of human performers in order to encourage a human creative response to mathematical and algorithmic input. These 'interventions' include a web-based app, experiments in physical space using Arduinos, and algorithmic augmentation of a keyboard.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122184618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}