{"title":"Physically Derived Sound Synthesis Model of a Propeller","authors":"R. Selfridge, D. Moffat, J. Reiss","doi":"10.1145/3123514.3123524","DOIUrl":"https://doi.org/10.1145/3123514.3123524","url":null,"abstract":"A real-time sound synthesis model for propeller sounds is presented. Equations obtained from fluid dynamics and aerodynamics research are utilised to produce authentic propeller-powered aircraft sounds. The result is a physical model in which the geometries of the objects involved are used in sound synthesis calculations. The model operates in real-time making it ideal for integration within a game or virtual reality environment. Comparison with real propeller-powered aircraft sounds indicates that some aspects of real recordings are not replicated by our model. Listening tests suggest that our model performs as well as another synthesis method but is not as plausible as a real recording.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123542364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sound and Interaction Design of an Augmented Drum System","authors":"Jeff Gregorio, P. English, Youngmoo E. Kim","doi":"10.1145/3123514.3123521","DOIUrl":"https://doi.org/10.1145/3123514.3123521","url":null,"abstract":"We present ongoing developments in the design of an augmented drum system that expands the timbral and expressive range of the acoustic drum using electromagnetic actuation of a drum membrane, driven by combinations of synthesized tones and modulated feedback signals taken from the opposing membrane. The system is designed to run on an embedded, WiFi-enabled platform, allowing multiple augmented drums to be configured as nodes in a directed graph. These multi-drum networks communicate by sending and receiving Open Sound Control (OSC) messages on a dedicated network, and can accommodate multiple performers in addition to semi-autonomous behavior. This work is developed in close collaboration with an artist in residence for use in both live performance and interactive sound installation.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128519903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Frailty of Formal Education: Visual Paradigms and Music Creation","authors":"A. Poscic, G. Krekovic","doi":"10.1145/3123514.3123534","DOIUrl":"https://doi.org/10.1145/3123514.3123534","url":null,"abstract":"Computer music technology strongly influenced artistic expression by opening new possibilities in the field of sound creation, music composition, interaction, and multimedia. Efficient and flexible use of technology unavoidably implies expressing various concepts through computer programming. Luckily, the visual programming paradigm provides a more intuitive and understandable, yet comprehensive approach for musicians, and seems more adaptable than textual programming. However, programming skills are still required, so the question arises whether music education appropriately prepares musicians for the digital world and visual programming in particular. In this study we explore relations between education and usage of digital tools in terms of language discovery, learning curves, purposes of use, and overall proficiency. We conducted a survey among 162 professional and amateur musicians who are also users of a visual programming tool. The results suggest that while formal education does not have a significant impact on programming skills, it plays an important role in the discovery and selection of programming tools.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125893960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Role of Live Visuals in Audience Understanding of Electronic Music Performances","authors":"N. Correia, Deborah Castro, Atau Tanaka","doi":"10.1145/3123514.3123555","DOIUrl":"https://doi.org/10.1145/3123514.3123555","url":null,"abstract":"There is an identified lack of visual feedback in electronic music performances. Live visuals have been used to fill in this gap. However, there is a scarcity of studies that analyze the effectiveness of live visuals in conveying feedback. In this paper, we aim to study the contribution of live visuals to the understanding of electronic music performances, from the perspective of the audience. We present related work in the fields of audience studies in performing arts, electronic music and audiovisuals. For this purpose, we organized two live events, where 10 audiovisual performances took place. We used questionnaires to conduct an audience study in these events. Results point to a better audience understanding in two of the four design patterns we used as analytical framework. In our discussion, we suggest best practices for the design of audiovisual performance systems that can lead to improved audience understanding.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126809113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Are the Robots Coming?: Designing with Autonomy & Control for Musical Creativity & Performance","authors":"A. Chamberlain","doi":"10.1145/3123514.3123568","DOIUrl":"https://doi.org/10.1145/3123514.3123568","url":null,"abstract":"This paper expands upon our previous work, and starts to unpack notions of autonomy and control in musical composition and performance-based systems. The term autonomous has become synonymous with technologies such as \"autonomous vehicles\" and \"drones\", while notions of control have mainly been raised in respect to the \"control\" of industrial systems and in respect to protocols. This position piece disrupts these notions and provides a platform, introducing a more radical proposition in respect to the representation of autonomy and control of features that can be used to design systems that support musical composition and performance. This paper supports a growing interest within the Design, HCI and Artificial Intelligence communities, leading us to think about Human Like Computing systems and the development of a Computational Creativity Continuum.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130979401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Large-Scale Interaction with a Sound Installation as a Design Tool","authors":"K. Hansen, Martin Ljungdahl Eriksson, Ricardo Atienza","doi":"10.1145/3123514.3123564","DOIUrl":"https://doi.org/10.1145/3123514.3123564","url":null,"abstract":"In this paper we present an installation done in collaboration with Volvo Cars® for the international motor shows in Geneva, New York, and Shanghai during spring 2017. To envision and produce a future car sound for silent vehicles, users were given high-level control of a sophisticated synthesizer through playing with an attainable and inviting \"color book\"-inspired interface. The synthesizer algorithm was designed to dynamically create a rich mix of looped sounds that could blend with a sonic background scenery that had ecoacoustic validity, and that could metaphorically align with the visual elements. The installation ran faultlessly for around thirty days, with tens of thousands of recorded sessions.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127232545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recognition of Piano Pedalling Techniques Using Gesture Data","authors":"B. Liang, György Fazekas, M. Sandler","doi":"10.1145/3123514.3123535","DOIUrl":"https://doi.org/10.1145/3123514.3123535","url":null,"abstract":"This paper presents a study of piano pedalling technique recognition on the sustain pedal utilising gesture data that is collected using a novel measurement system. The recognition is comprised of two separate tasks: onset/offset detection and classification. The onset and offset time of each pedalling technique was computed through signal processing algorithms. Based on features extracted from every segment when the pedal is pressed, the task of classifying the segments by pedalling technique was undertaken using machine learning methods. We exploited and compared a Support Vector Machine (SVM) and a hidden Markov model (HMM) for classification. Recognition results can be represented by customised pedalling notations and visualised in a score following system.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116323456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Turn-Taking and Chatting in Collaborative Music Live Coding","authors":"Anna Xambó, Pratik Shah, Gerard Roma, Jason Freeman, Brian Magerko","doi":"10.1145/3123514.3123519","DOIUrl":"https://doi.org/10.1145/3123514.3123519","url":null,"abstract":"Co-located collaborative live coding is a potential approach to network music and to the music improvisation practice known as live coding. A common strategy to support communication between live coders and the audience is the use of a chat window. However, paying attention to simultaneous multi-user actions, such as chat texts and code, can be too demanding to follow. In this paper, we explore collaborative music live coding (CMLC) using the live coding environment and pedagogical tool EarSketch. In particular, we examine the use of turn-taking and a customized chat window inspired by the practice of pair programming, a team-based strategy for efficiently solving computational problems. Our approach to CMLC also aims at facilitating the audience's understanding of this practice. We conclude by discussing the benefits of this approach in both performance and educational settings.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116527424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"If These Walls Could Speak: Tangible Memories","authors":"M. Mosher","doi":"10.1145/3123514.3123562","DOIUrl":"https://doi.org/10.1145/3123514.3123562","url":null,"abstract":"If These Walls Could Speak provides an alternative memory storage system using tangible objects versus the written words common in diaries. Using river rocks as a memory token, a user can listen to past audio memories stored in the stones, or record their own new ones. This piece explores new forms in tangible memory collection and retrieval by allowing users to store their memories in a physical object. In this way, the project contributes to the development of ubiquitous tagging and computing as an aide for sharing and preserving stories.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114918408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Metronome Music Time Capsule: rematerialising music consumption and exchange","authors":"Rishi Shukla, R. Stewart","doi":"10.1145/3123514.3123548","DOIUrl":"https://doi.org/10.1145/3123514.3123548","url":null,"abstract":"The dematerialisation of music consumption is a well evidenced and widely accepted trend. Though much literature has been produced discussing the economic and legal implications of this significant shift for the music industry, its impact on listening practices and consequent considerations for interface design are less well researched. This paper outlines the development of a prototype system that explores, symbolically, the interplay between contemporary dematerialised modes of music consumption and listening traditions of the recent past. A pre-internet age metronome was re-purposed as a tangible interface for a custom music player containing 25 songs, drawn from the period 1940 to 2012. Together, the controller and software reflect through sound, graphics and physicality the progress of Western commercial music, technology and society over this time.","PeriodicalId":282371,"journal":{"name":"Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129441114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}