{"title":"My Sound Space: An attentional shield for immersive redirection","authors":"Martin Ljungdahl Eriksson, Lena Pareto, Ricardo Atienza, K. Hansen","doi":"10.1145/3243274.3243309","DOIUrl":"https://doi.org/10.1145/3243274.3243309","url":null,"abstract":"In the context of extended reality, the term immersion is commonly used as a property denoting to which extent a technology can deliver an illusion of reality while occluding the users' sensory access to the physical environment. In this paper we discuss an alternative interpretation of immersion, used in the My Sound Space project. The project is a research endeavor aiming to develop a sound environment system that enables a personalized sound space suitable for individual work places. The medium, which in our case is sound, is transparent and thus becomes an entangled part of the surrounding environment. This type of immersion is only partly occluding the users sensory access to physical reality. The purpose of using the sound space is not to become immersed by the sounds, rather to use the sounds to direct cognitive attention to get immersed in another cognitive activity.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131094821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Some reflections on the relation between augmented and smart musical instruments","authors":"L. Turchet","doi":"10.1145/3243274.3243281","DOIUrl":"https://doi.org/10.1145/3243274.3243281","url":null,"abstract":"Augmented musical instruments (AMIs) consist of the augmentation of conventional instruments by means of sensor or actuator technologies. Smart musical instruments (SMIs) are instruments embedding not only sensor and actuator technology, but also wireless connectivity, onboard processing, and possibly systems delivering electronically produced sounds, haptic stimuli, and visuals. This paper attempts to disambiguate the concept of SMIs from that of AMIs on the basis of existing instances of the two families. We counterpose the features of these two families of musical instruments, the processes to build them (i.e., augmentation and smartification), and the respective supported practices. From the analysis it emerges that SMIs are not a subcategory of AMIs, rather they share some of their features. It is suggested that smartification is a process that encompasses augmentation, as well as that the artistic and pedagogical practices supported by SMIs may extend those offered by AMIs. These comparisons suggest that SMIs have the potential to bring more benefits to musicians and composers than AMIs, but also that they may be much more difficult to create in terms of resources and competences to be involved. 
Shedding light on these differences is useful to avoid confusing the two families and the respective terms, as well as for organological classifications.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116979375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real time Pattern Based Melodic Query for Music Continuation System","authors":"Sanjay Majumder, Benjamin D. Smith","doi":"10.1145/3243274.3243283","DOIUrl":"https://doi.org/10.1145/3243274.3243283","url":null,"abstract":"This paper presents a music continuation system using pattern matching to find patterns within a library of MIDI files using a realtime algorithm to build a system which can be used as interactive DJ system. This paper also looks at the influence of different kinds of pattern matching on MIDI file analysis. Many pattern-matching algorithms have been developed for text analysis, voice recognition and Bio-informatics but as the domain knowledge and nature of the problems are different these algorithms are not ideally suitable for real time MIDI processing for interactive music continuation system. By taking patterns in real-time, via MIDI keyboard, the system searches patterns within a corpus of MIDI files and continues playing from the user's musical input. Four different types of pattern matching are used in this system (i.e. exact pattern matching, reverse pattern matching, pattern matching with mismatch and combinatorial pattern matching in a single system). 
After computing the results of the four types of pattern matching of each MIDI file, the system compares the results and locates the highest pattern matching possibility MIDI file within the library.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115793389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
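The abstract above describes matching a real-time melodic query against a MIDI corpus and continuing from the best match. A minimal sketch of just the exact-pattern-matching step is shown below; the corpus contents and function names are illustrative assumptions, not taken from the paper, which combines four matching strategies rather than this single one.

```python
# Hypothetical sketch: exact pattern matching of a melodic query (a list of
# MIDI pitch numbers) against a small corpus, picking the file with the most
# occurrences and returning the notes that follow the first occurrence as
# the "continuation". Names and corpus data are illustrative only.

def count_matches(notes, pattern):
    """Count (possibly overlapping) exact occurrences of pattern in notes."""
    n, m = len(notes), len(pattern)
    return sum(1 for i in range(n - m + 1) if notes[i:i + m] == pattern)

def best_continuation(corpus, pattern):
    """Return (filename, continuation) for the corpus entry with the most
    exact matches; the continuation is what follows the first occurrence."""
    best_name, best_count = None, 0
    for name, notes in corpus.items():
        c = count_matches(notes, pattern)
        if c > best_count:
            best_name, best_count = name, c
    if best_name is None:
        return None, []
    notes, m = corpus[best_name], len(pattern)
    for i in range(len(notes) - m + 1):
        if notes[i:i + m] == pattern:
            return best_name, notes[i + m:]
    return best_name, []

# Toy corpus of MIDI pitch sequences (60 = middle C).
corpus = {
    "song_a.mid": [60, 62, 64, 60, 62, 64, 65, 67],
    "song_b.mid": [60, 62, 64, 65, 67, 69],
}
name, continuation = best_continuation(corpus, [60, 62, 64])
print(name, continuation)  # song_a.mid has two exact matches
```

A real system along the lines the authors describe would additionally score reverse, mismatch-tolerant, and combinatorial matches per file and compare those scores before choosing the continuation.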
{"title":"Surfing with Sound: An Ethnography of the Art of No-Input Mixing: Starting to Understand Risk, Control and Feedback in Musical Performance","authors":"A. Chamberlain","doi":"10.1145/3243274.3243289","DOIUrl":"https://doi.org/10.1145/3243274.3243289","url":null,"abstract":"The idea of No-Input Mixing may appear at first difficult to understand, after all there is no input, yet artists, performers and sound designers have used a variety of approaches using such feedback systems to create music. This paper uses ethnographic approaches to start to understand the methods that people employ when using no-input systems, and in so doing tries to make the invisible, visible. In unpacking some of these techniques we are able to render understandings, of what at first appears to be a random and autonomous set of sounds, as a set of audio features that are controlled, created and are able to be manipulated by a given performer. This is particularly interesting for researchers that involved in the design of new feedback-based instruments, Human Computer Interaction and aleatoric-compositional software.","PeriodicalId":129628,"journal":{"name":"Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124768264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}