{"title":"Sonification with Musical Characteristics: A Path Guided by User-Engagement","authors":"Jonathan N. Middleton, Jaakko Hakulinen, Katariina Tiitinen, J. Hella, Tuuli Keskinen, P. Huuskonen, Juhani Linna, M. Turunen, Mounia Ziat, R. Raisamo","doi":"10.21785/ICAD2018.006","DOIUrl":"https://doi.org/10.21785/ICAD2018.006","url":null,"abstract":"Sonification with musical characteristics can engage users, and this dynamic carries value as a mediator between data and human perception, analysis, and interpretation. A user engagement study has been designed to measure engagement levels from conditions within primarily melodic, rhythmic, and chordal contexts. This paper reports findings from the melodic portion of the study, and states the challenges of using musical characteristics in sonifications via the perspective of form and function – a long standing debate in Human-Computer Interaction. These results can guide the design of more complex sonifications of multivariable data suitable for real life use.","PeriodicalId":402143,"journal":{"name":"Proceedings of the 24th International Conference on Auditory Display - ICAD 2018","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127302185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Photone: Exploring Modal Synergy in Photographic Images and Music","authors":"N. Rönnberg, J. Löwgren","doi":"10.21785/ICAD2018.022","DOIUrl":"https://doi.org/10.21785/ICAD2018.022","url":null,"abstract":"We present Photone, an interactive installation combining photographic images and musical sonification. An image is displayed, and a dynamic musical score is generated based on the overall color properties of the image and the color value of the pixel under the cursor. Hence, the music changes as the user moves the cursor. This simple approach turns out to have interesting experiential qualities in use. The composition of images and music invites the user to explore the combination of hues and textures, and musical sounds. We characterize the resulting experience in Photone as one of modal synergy where visual and auditory output combine holistically with the chosen interaction technique. This tentative finding is potentially relevant to further research in auditory displays and multimodal interaction.","PeriodicalId":402143,"journal":{"name":"Proceedings of the 24th International Conference on Auditory Display - ICAD 2018","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130499805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SoundTrAD, A Method and Tool for Prototyping Auditory Displays: Can We Apply It to an Autonomous Driving Scenario?","authors":"D. MacDonald, T. Stockman","doi":"10.21785/ICAD2018.009","DOIUrl":"https://doi.org/10.21785/ICAD2018.009","url":null,"abstract":"This paper presents SoundTrAD, a method and tool for designing auditory displays for the user interface. SoundTrAD brings together ideas from user interface design and soundtrack composition and supports novice auditory display designers in building an auditory user interface. The paper argues for the need for such a method before going on to describe the fundamental structure of the method and construction of the supporting tools. The second half of the paper applies SoundTrAD to an autonomous driving scenario and demonstrates its use in prototyping ADs for a wide range of scenarios.","PeriodicalId":402143,"journal":{"name":"Proceedings of the 24th International Conference on Auditory Display - ICAD 2018","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124985464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reconsidering Human Capacity for Location-Aware Audio Pattern Recognition: A Case for Immersive Exocentric Sonification","authors":"I. Bukvic, G. Earle","doi":"10.21785/ICAD2018.021","DOIUrl":"https://doi.org/10.21785/ICAD2018.021","url":null,"abstract":"The following paper presents a cross-disciplinary snapshot of 21st century research in sonification and leverages the review to identify a new immersive exocentric approach to studying human capacity to perceive spatial aural cues. The paper further defines immersive exocentric sonification, highlights its unique affordances, and presents an argument for its potential to fundamentally change the way we understand and study the human capacity for location-aware audio pattern recognition. Finally, the paper describes an example of an externally funded research project that aims to tackle this newfound research whitespace.","PeriodicalId":402143,"journal":{"name":"Proceedings of the 24th International Conference on Auditory Display - ICAD 2018","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126832834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Soundbased Image and Position Recognition System SIPReS","authors":"S. Uno, Yasuo Suzuki, Takashi Watanabe, Miku Matsumoto, Yan Wang","doi":"10.21785/ICAD2018.005","DOIUrl":"https://doi.org/10.21785/ICAD2018.005","url":null,"abstract":"We developed software called SIPReS, which describes two-dimensional images with sound. With this system, visually-impaired people can tell the location of a certain point in an image just by hearing notes of frequency each assigned according to the brightness of the point a user touches on. It can run on Android smartphones and tablets. We conducted a small-scale experiment to see if a visually-impaired person can recognize images with SIPReS. In the experiment, the subject successfully recognized if there is an object or not. He also recognized the location information. The experiment suggests this application’s potential as image recognition software.","PeriodicalId":402143,"journal":{"name":"Proceedings of the 24th International Conference on Auditory Display - ICAD 2018","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114358129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ToxSampler: Locative Sound Art Exploration of the Toxic Release Inventory","authors":"Michael Blandino","doi":"10.21785/ICAD2018.018","DOIUrl":"https://doi.org/10.21785/ICAD2018.018","url":null,"abstract":"Regulatory geographic datasets that inform citizen’s lives are, in general, responsive to engaged search and visual, attentive browsing, but are not designed for directly informing the lived context. The density of sensors and software interfaces present in mobile devices allows for integration of these resources with contextual applications. ToxSampler is an iOS application that modifies the immediate environmental audio scene with associated data from the Toxic Release Inventory (TRI) of the United States Environmental Protection Agency. The application applies digital signal processing (DSP) to the microphone signal based upon the location of the participant and associated TRI data releases. The system, as a result, affords an informed awareness of the datascape through an immediate augmentation of the sensed setting.","PeriodicalId":402143,"journal":{"name":"Proceedings of the 24th International Conference on Auditory Display - ICAD 2018","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125279228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}