{"title":"Mixed Reality MIDI Keyboard Demonstration","authors":"John Desnoyers-Stewart, Megan L. Smith, David Gerhard","doi":"10.1145/3123514.3123560","DOIUrl":"https://doi.org/10.1145/3123514.3123560","url":null,"abstract":"The Mixed Reality MIDI Keyboard is a prototype designed to augment virtual reality experiences through the inclusion of a physical interface which aligns the user's senses with the virtual environment. It also serves as a platform on which the uses of virtual reality in music interaction and art installations can be experimented with. The main problem is that of synchronizing the real and virtual environments in a convincing way that makes the user feel more connected to the experience. To accomplish this a system of devices including an HTC Vive, Leap Motion hand tracker, and MIDI Keyboard are used together to produce a convincing mixed reality instrument that aligns with the user's visual, tactile and proprioceptive senses. The system is being developed as both a mixed reality musical instrument for use with common digital audio workstations, and as an installation piece which allows users to explore the nature of perception which this virtual reality system itself takes advantage of.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114065190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"More Cowbell: Measuring Beat Consistency With Respect To Tempo And Metronome Variations","authors":"Steffan Owens, Stuart Cunningham","doi":"10.1145/3123514.3123558","DOIUrl":"https://doi.org/10.1145/3123514.3123558","url":null,"abstract":"This paper investigates the relationship between a participants' ability to maintain consistent distance between taps or strikes (Inter-Onset Interval, or IOI), when provided with varying metronome conditions and tempos. This ability, alongside traditional isochronous sequence production, represents two qualities that can be measured to express a musicians' capacity to keep time accurately. The experiments asked participants to play along with a metronome. The timings of these taps were recorded and analysed to observe consistency and establish any effect metronome and tempo have. The results of the experiments suggest that when the metronome is continuous, its type (Cross-Rhythmic or Mono-Rhythmic) has no significant effect on IOI. This is also true of tempo. When the metronome is removed for a number of beats, however, the results suggest this does have a significant effect on IOI consistency, and that this also has a significant relationship to tempo. The results of this study suggest that a participants' ability to maintain a consistent IOI may not be influenced as strongly by metronomic audio information as their ability to reproduce an isochronous sequence in-phase with a metronome. This suggests that consistent IOIs and traditional, in-phase timekeeping are not as closely linked as could be expected.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124170435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmenting Drama: A Speech Emotion - Controlled Stage Lighting Framework","authors":"N. Vryzas, A. Liatsou, Rigas Kotsakis, Charalampos A. Dimoulas, George M. Kalliris","doi":"10.1145/3123514.3123557","DOIUrl":"https://doi.org/10.1145/3123514.3123557","url":null,"abstract":"Lighting can play a key role in the aesthetic concept of a theatrical production. This paper explores the field of augmented interaction with stage lighting, providing a synesthetic approach to emotion perception. In the audio driven framework that is presented, the actors' speech is captured by stage microphones. The signals are led to a Speech Emotion Recognition system that classifies them by emotion, which are thereafter matched to different colors. Thus, stage lighting color can change in real-time in accordance with the actor's recognized speech emotion. The system is described in a generic form, suitable for different implementations of the main idea. For the purpose of this paper, 5 classes that represent different emotions were defined. Several audio features and classifiers were tested with audio data from different datasets of emotional speech to train a speech emotion recognition model. The final evaluation results are presented for the logistic regression classifier. Accuracy results and confusion matrix are presented for logistic regression classification. A wheel of emotions model for emotion visualization and color selection was adopted to render and simulate the colored lighting results.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130524992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Examples of use cases with Smart Instruments","authors":"L. Turchet, Michele Benincaso, C. Fischione","doi":"10.1145/3123514.3123553","DOIUrl":"https://doi.org/10.1145/3123514.3123553","url":null,"abstract":"This paper presents some of the possibilities for interaction between performers, audiences, and their smart devices, offered by the novel family of musical instruments, the Smart Instruments. For this purpose, some implemented use cases are described, which involved a preliminary prototype of MIND Music Labs' Sensus Smart Guitar, the first exemplar of Smart Instrument. Sensus consists of a guitar augmented with sensors, actuators, onboard processing, and wireless communication. Some of the novel interactions enabled by Sensus technology are presented, which are based on connectivity of the instrument to smart devices, virtual reality headsets, and the cloud.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128777234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing a Multimodal Warning Display for an Industrial Control Room","authors":"Johan Fagerlönn, Kristin Hammarberg, Stefan Lindberg, Anna Sirkka, Sofia Larsson","doi":"10.1145/3123514.3123516","DOIUrl":"https://doi.org/10.1145/3123514.3123516","url":null,"abstract":"This paper presents the development of a multimodal warning display for a paper mill control room. In previous work, an informative auditory display for control room warnings was proposed. The proposed auditory solution conveys information about urgent events by using a combination of auditory icons and tonal components. The main aim of the present study was to investigate if a complementary visual display could increase the effectiveness and acceptance of the existing auditory solution. The visual display was designed in a user-driven design process with operators. An evaluation was conducted both before and after the implementation. Subjective ratings showed that operators found it easier to identify the alarming section using the multimodal display. These results can be useful for any designer intending to implement a multimodal display for warnings in an industrial context.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"69 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116377554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mapping Media and Meaning: Autoethnography as an Approach to Designing Personal Heritage Soundscapes","authors":"A. Chamberlain, Mads Bødker, Konstantinos Papangelis","doi":"10.1145/3123514.3123536","DOIUrl":"https://doi.org/10.1145/3123514.3123536","url":null,"abstract":"The paper presents reflections on understanding the issues of designing of locative sonic memory-scapes. As physical space and digital media become ever more intertwined, together forming and augmenting meaning and experience, we need methods to further explore possible ways in which physical places and intangible personal content can be used to develop meaningful experiences. The paper explores the use of autoethnography as a method for soundscape design in the fields of personal heritage and locative media. Specifically, we explore possible connections between digital media, space and 'meaning making', suggesting how autoethnographies might help discover design opportunities for merging digital media and places. These are methods that are more personally relevant than those typically associated with a more system-based design approaches that we often find are less sensitive to the way that emotion, relationships, memory and meaning come together. As a way to expand upon these relationships we also reflect on relations between personal and community-based responses.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"438 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116169634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sound recycling from public databases: Another BigData approach to sound collections","authors":"Hernán Ordiales, Matías Lennie Bruno","doi":"10.1145/3123514.3123550","DOIUrl":"https://doi.org/10.1145/3123514.3123550","url":null,"abstract":"Discovering new sounds from large databases or Internet is a tedious task. Standard search tools and manual exploration fails to manage the actual amount of information available. This paper presents a new approach to the problem which takes advantage of grown technologies like Big Data and Machine Learning, keeping in mind compositional concepts and focusing on artistic performances. Among several different distributed systems useful for music experimentation, a new workflow is proposed based on analysis techniques from Music Information Retrieval (MIR) combined with massive online databases, dynamic user interfaces, physical controllers and real-time synthesis. Based on Free Software tools and standard communication protocols to classify, cluster and segment sound. The control architecture allows multiple clients request the API services concurrently enabling collaborative work. The resulting system can retrieve well defined or pseudo-aleatory audio samples from the web, mix and transform them in real-time during a live-coding performance, play like another instrument in a band, as a solo artist combined with visual feedback or working alone as automated multimedia installation.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"772 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129564687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","authors":"György Fazekas, M. Barthet, T. Stockman","doi":"10.1145/3123514","DOIUrl":"https://doi.org/10.1145/3123514","url":null,"abstract":"","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130706207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulating Auditory Hallucinations in a Video Game: Three Prototype Mechanisms","authors":"Jonathan Weinel, Stuart Cunningham","doi":"10.1145/3123514.3123532","DOIUrl":"https://doi.org/10.1145/3123514.3123532","url":null,"abstract":"In previous work the authors have proposed the concept of 'ASC Simulation1: including audio-visual installations and experiences, as well as interactive video game systems, which simulate altered states of consciousness (ASCs) such as dreams and hallucinations. Building on the discussion of the authors' previous paper, where a large-scale qualitative study explored the changes to auditory perception that users of various intoxicating substances report, here the authors present three prototype audio mechanisms for simulating hallucinations in a video game. These were designed in the Unity video game engine as an early proof-of-concept. The first mechanism simulates 'selective auditory attention' to different sound sources, by attenuating the amplitude of unattended sources. The second simulates 'enhanced sounds', by adjusting perceived brightness through filtering. The third simulates 'spatial disruptions' to perception, by dislocating sound sources from their virtual acoustic origin in 3D-space, causing them to move in oscillations around a central location. In terms of programming structure, these mechanisms are designed using scripts that are attached to the collection of assets that make up the player character, and in future developments of this type of work we foresee a more advanced, standardised interface that models the senses, emotions and state of consciousness of player avatars.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131204292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Structured interaction in the SoundThimble real-time gesture sonification framework","authors":"Grigore Burloiu, Stefan Damian, Bogdan Golumbeanu, Valentin Mihai","doi":"10.1145/3123514.3123543","DOIUrl":"https://doi.org/10.1145/3123514.3123543","url":null,"abstract":"We introduce SoundThimble, a design platform for layered sonic interaction based on the relationship between human motion and virtual objects in 3D space. A Vicon motion capture system and custom software are used to track, interpret and sonify the movement and gestures of a performer relative to a virtual object. We define three possible interaction dynamics, centred around object search, manipulation and arrangement. We explore the resulting possibilities for layered structures and extended perception and expression. The software developed is open source and portable to similar hardware systems, leaving room for further extension of the interaction mechanics.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124878634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}