{"title":"A Network of Noise: Designing with a Decade of Data to Sonify JANET","authors":"I. Emsley, D. D. Roure, A. Chamberlain","doi":"10.1145/3123514.3123567","DOIUrl":"https://doi.org/10.1145/3123514.3123567","url":null,"abstract":"The existing sonification of networks mainly focuses on security. Our novel approach is framed by the ways in which network traffic changes over the national JANET network. Using a variety of sonification techniques, we examine the user context, how this sonification leads to system design considerations, and feeds back into the user experience.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"217 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124291354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a Parallel Computing Framework for Direct Sonification of Multivariate Chronological Data","authors":"G. Krekovic, I. Vican","doi":"10.1145/3123514.3123551","DOIUrl":"https://doi.org/10.1145/3123514.3123551","url":null,"abstract":"This paper presents a generic and scalable framework for direct sonification of large multivariate data sets with an explicit time dimension. As digitalization and the process of data collection gathers momentum in many fields of human activity, such large data sets with many dimensions of different data types are common. The specificity of our framework is uniformness of the synthesis technique on different temporal scales achieved by using direct sonification of particular data rows in corresponding sound grains. This way, both distinctiveness of individual data rows and patterns on the higher scale should become perceivable in the synthesized audio content. In order to attain scalability, the implementation relies on parallel computing.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115575189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Rough Mile: Reframing Location Through Locative Audio","authors":"Adrian Hazzard, J. Spence, C. Greenhalgh, S. McGrath","doi":"10.1145/3123514.3123540","DOIUrl":"https://doi.org/10.1145/3123514.3123540","url":null,"abstract":"We chart the design and deployment of The Rough Mile: a multi-layered locative audio walk that blends pre-recorded spoken word, original music, and ambient environmental sound with real-time external ambient sound by employing bone conduction headphones. The design of the walking experience -- set in a city centre streets -- deliberately sought to explore novel mechanisms to create thematic and functional relationships between the layers of audio and attributes of the built environment, with the intention of constructing an augmented environment where the sounds of real and fictional are blurred. Twenty-six participants completed the walk describing an absorbing and well paced experience that encouraged them to view the location with an altered perspective, one that pulled aspects of the built environment and its population into the fictional story. We distil the findings and present a set of implications for the design of such locative walking experiences.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122394274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mode Explorer: Using Model-based Sonification to Investigate Basins of Attraction","authors":"Jiajun Yang, T. Hermann","doi":"10.1145/3123514.3123525","DOIUrl":"https://doi.org/10.1145/3123514.3123525","url":null,"abstract":"This paper presents a novel interactive auditory data exploration method to investigate features of high-dimensional data distributions. The Mode Explorer couples a scratching-interaction on a 2D scatter plot of high-dimensional data to real-time dynamical processes, excited in data space at the nearest mode in the probability density function (pdf) obtained by kernel-density estimation. Specifically, the sign-inverted pdf is used as a potential function in which test particles perform oscillations at low friction, yielding signals that can directly be played back as sound. This Model-based sonification approach is used to interactively search the distribution for different modes, learn about their details, i.e. the Hessian matrix at the mode, and thus enable a non-parametric parameter selection for appropriate bandwidth.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124180550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User Requirements for Live Sound Visualization System Using Multitrack Audio","authors":"I. Olowe, M. Grierson, M. Barthet","doi":"10.1145/3123514.3123527","DOIUrl":"https://doi.org/10.1145/3123514.3123527","url":null,"abstract":"In this paper, we identify design requirements for a screen-based system that enables live sound visualization using multitrack audio. Our mixed methodology is grounded in user-centered design and involved a review of the literature to assess the state-of-the-art of Video Jockeying (VJing), and two online surveys to canvas practices within the audiovisual community and gain practical and aspirational awareness on the subject. We review ten studies about VJ practice and culture and human computer interaction topics within live performance. Results from the first survey, completed by 22 participants, were analysed to identify general practices, mapping preferences, and impressions about multitrack audio and audio-content feature extraction. A second complementary survey was designed to probe about specific implications of performing with a system that facilitates live visual performance using multitrack audio. Analyses from 29 participants' self-reports highlight that the creation of audiovisual content is a multivariate and subjective process and help define where multitrack audio, audio-content extraction, and live mapping could fit within. We analyze the findings and discuss how they can inform a design for our system.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"153 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133641950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Zound: An interactive live electronics","authors":"Danilo Randazzo, Giovanni Cospito","doi":"10.1145/3123514.3123566","DOIUrl":"https://doi.org/10.1145/3123514.3123566","url":null,"abstract":"An interactive live electronics, completely controlled real-time by the mouth harp's (Jew's harp, Jaw harp) acoustic signal has been created, using Max (™ Cycling '74). The system interacts according to the various timbre's features of the instrument, described by the sonogram of the incoming signal and allows the performer's full involvement with the musical outcome. The interactive live electronics is demonstrative of the different instrument features that can be analyzed and mapped by the system.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117211886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Models for Ensemble Touch-Screen Improvisation","authors":"Charles Patrick Martin, K. Ellefsen, J. Tørresen","doi":"10.1145/3123514.3123556","DOIUrl":"https://doi.org/10.1145/3123514.3123556","url":null,"abstract":"For many, the pursuit and enjoyment of musical performance goes hand-in-hand with collaborative creativity, whether in a choir, jazz combo, orchestra, or rock band. However, few musical interfaces use the affordances of computers to create or enhance ensemble musical experiences. One possibility for such a system would be to use an artificial neural network (ANN) to model the way other musicians respond to a single performer. Some forms of music have well-understood rules for interaction; however, this is not the case for free improvisation with new touch-screen instruments where styles of interaction may be discovered in each new performance. This paper describes an ANN model of ensemble interactions trained on a corpus of such ensemble touch-screen improvisations. The results show realistic ensemble interactions and the model has been used to implement a live performance system where a performer is accompanied by the predicted and sonified touch gestures of three virtual players.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132520996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hear-Here: a choreographed peer-to-peer network for live participation on the radio","authors":"August Black","doi":"10.1145/3123514.3123552","DOIUrl":"https://doi.org/10.1145/3123514.3123552","url":null,"abstract":"This paper describes a software system and live audio event called \"Hear-Here\" that links live microphone input from multiple users together in an FM radio broadcast. The system connects users in a browser-based peer-to-peer network using WebRTC whereby each user, taking turns, is able to contribute 2 seconds of audio at a time, ad infinitum. The paper provides a description and evaluation of the system and radio event along with background motivation and related work.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125358236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Data-Driven Algorithmic Composer","authors":"J. Fitzpatrick, Flaithrí Neff","doi":"10.1145/3123514.3123549","DOIUrl":"https://doi.org/10.1145/3123514.3123549","url":null,"abstract":"The Data-Driven Algorithmic Composer (D-DAC) is an application designed to output data-driven algorithmically composed music via MIDI. The application requires input data to be in tab-separated format to be compatible. Each dataset results in a unique piece of music that remains consistent with each iteration of the application. The only varying elements between each iteration of the same dataset are factors defined by the user: tempo, scale, and intervals between rows. Each measure of the melody, harmony and bassline is derived from each row of the dataset. By utilizing this non-random algorithmic application, users can create a unique and predefined musical iteration of their dataset. The overall aim of the D-DAC is to inspire musical creativity from scientific data and encourage the sharing of datasets between various research communities.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125638688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sonic Interaction Design for Paper Wearables","authors":"G. Klauer, Annalisa Metus, P. Polotti","doi":"10.1145/3123514.3123533","DOIUrl":"https://doi.org/10.1145/3123514.3123533","url":null,"abstract":"The paper reports a workshop on sonic interaction design conceived and led by the authors in the context of a living lab born from the collaboration between a music conservatory and an IT university department. The main subject was the application of non-verbal sound in the process of product design, focusing on the augmentation of clothes and wearable accessories. The workshop resulted in exercises exploring the interactive role of the sound within three different scenarios: (a) abstract/relational, (b) strictly functional, (c) aesthetic/performative. Each exercise was carried on by a group of four to five participants working as a team. The presentation of the exercises comes along considerations regarding the participatory approach, that matched design techniques and tools with practices and theoretical foundations of electroacoustic music. Results are briefly discussed and improvements and further steps are accounted for.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123350621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}