Computer Music Journal: Latest Publications

Borrowed Gestures: The Body as an Extension of the Musical Instrument
Computer Music Journal Pub Date : 2021-09-01 DOI: 10.1162/comj_a_00617
Doga Cavdir; Ge Wang
{"title":"Borrowed Gestures: The Body as an Extension of the Musical Instrument","authors":"Doga Cavdir;Ge Wang","doi":"10.1162/comj_a_00617","DOIUrl":"10.1162/comj_a_00617","url":null,"abstract":"Abstract This article presents design and performance practices for movement-based digital musical instruments. We develop the notion of borrowed gestures, which is a gesture-first approach that composes a gestural vocabulary of nonmusical body movements combined with nuanced instrumental gestures. These practices explore new affordances for physical interaction by transferring the expressive qualities and communicative aspects of body movements; these body movements and their qualities are borrowed from nonmusical domains. By merging musical and nonmusical domains through movement interaction, borrowed gestures offer shared performance spaces and cross-disciplinary practices. Our approach centers on use of the body and the design with body movement when developing digital musical instruments. The performer's body becomes an intermediate medium, physically connecting and uniting the performer and the instrument. This approach creates new ways of conceptualizing and designing movement-based musical interaction: (1) offering a design framework that transforms a broader range of expressive gestures (including nonmusical gestures) into sonic and musical interactions, and (2) creating a new dynamic between performer and instrument that reframes nonmusical gestures—such as dance movements or sign language gestures—into musical contexts. We aesthetically evaluate our design framework and performance practices based on three case studies: Bodyharp, Armtop, and Felt Sound. As part of this evaluation, we also present a set of design principles as a way of thinking about designing movement-based digital musical instruments.","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 3","pages":"58-80"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43739178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Leo Magnien: Clarières
Computer Music Journal Pub Date : 2021-09-01 DOI: 10.1162/comj_r_00618
Seth Rozanoff
{"title":"Leo Magnien: Clarières","authors":"Seth Rozanoff","doi":"10.1162/comj_r_00618","DOIUrl":"10.1162/comj_r_00618","url":null,"abstract":"that conceptual space of materiality, virtual place, and virtual place as apparatus, and in a spiritual space where preexisting material features become part of virtual image (credit to virtual image scholar and professor Lisa Zaher). The concert’s final piece, “Moksha Black” by King Britt featuring Roba El-Essawy, disseminates sonospatial models that can be transduced (based on registered aspects) into one’s metaphysical space (which is cognitively and psychoacoustically entangled). For example, the initial voices are farther and larger in metaphysical space than physical space; then, a square-like tone cuts through in metaphysical space; and later, voice is physically still while circling and stuttering in metaphysical space. Towards the end, voices are vertical poles (in allocentric space) from which frequency spectra fall like glitter and sparks. “Sun Ra’s gift was understanding the passage of humankind through large swaths of time. . . . The sound of those insects. That continuity. Who’s to say that that sound . . . can’t be interpreted as a meaningful sequence of something like language abiding to something like a grammar?” (Thomas Stanley, in conversation with the reviewer). In his keynote address “You Haven’t Met the Captain of the Spaceship. . . Yet,” Thomas Stanley presented extensive info about interfacing with and interpreting Sun Ra’s teachings, including myth as tech—specifically, Alter Destiny, a leap into a zone of justice that is now possible because the original myth of dominion has gradually become unstable. It involves solving the many crises (e.g., racism, intergroup conflict, “extractive capitalism and the filth that goes along with this way of life,” potential mutually assured destruction, capitalist labor, an American empire whose populace is largely “distracted, paid off, sedated by . . . the fruits of oppression that happen in other peoples’ country”) predicated on that myth, simultaneously. This seems impossible, but the resolve “to be that broad in our attempts to ameliorate the situation is the starting point” (Stanley, in conversation with the reviewer). Sun Ra’s music contains messages that can help us question our fundamental beliefs rooted in that myth. The Sounds In Focus II concert begins with “The Shaman Ascending” by Barry Truax: a constantly circling vocal not circling in metaphysical space, through which spectral processes sculpt a spider-shaped cavern around me in allocentric and metaphysical space. In “Abwesenheit,” John Young clinically and playfully makes audible the air currents and stases in the room. Lidia Zielinska’s “Backstage Pass” treats idiomatic piano moments as seeds nourished with playful curiosity and passion, presented with spatial polymatic frequency poiesis in a room-sized piano bed. To start the Sounds Cubed II concert, centripetal whispers in “śūnyatā” by Chris Coleman construct connective tissue to the Cube’s center. 
In “Toys” by Orestis Karamanlis, flutters of sonic pulses along the pe","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 3","pages":"83-85"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47004546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Architecture for Real-Time Granular Synthesis With Per-Grain Processing: EmissionControl2
Computer Music Journal Pub Date : 2021-09-01 DOI: 10.1162/comj_a_00613
Curtis Roads; Jack Kilgore; Rodney DuPlessis
{"title":"Architecture for Real-Time Granular Synthesis With Per-Grain Processing: EmissionControl2","authors":"Curtis Roads;Jack Kilgore;Rodney DuPlessis","doi":"10.1162/comj_a_00613","DOIUrl":"10.1162/comj_a_00613","url":null,"abstract":"Abstract EmissionControl2 (EC2) is a precision tool that provides a versatile and expressive platform for granular synthesis education, research, performance, and studio composition. It is available as a free download on all major operating systems. In this article, we describe the theoretical underpinnings of the software and expose the design choices made in creating this instrument. We present a brief historical overview and cover the main features of EC2, with an emphasis on per-grain processing, which renders each grain as a unique particle of sound. We discuss the graphical user interface design choices, the theory of operation, and intended use cases that guided these choices. We describe the architecture of the real-time per-grain granular engine, which emits grains in synchronous or asynchronous streams. We conclude with an evaluation of the software.","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 3","pages":"20-38"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46638143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
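The idea of per-grain processing is easy to illustrate: give every grain its own onset, duration, playback rate, amplitude, and pan, so that each grain is rendered as a unique particle of sound. The Python sketch below is a minimal, illustrative reading of that idea, not EC2's actual engine; all function names, parameter ranges, and the choice of NumPy are assumptions made for the example.

```python
import numpy as np

SR = 44100  # sample rate in Hz; assumed value for this sketch

def render_grain_cloud(source, duration_s, density_hz, rng=None):
    """Asynchronous granular synthesis with independent per-grain parameters.

    Each grain is given its own onset, length, playback rate, amplitude,
    and stereo pan, so no two grains need be identical: a toy version of
    the per-grain processing idea described in the abstract above.
    """
    rng = rng or np.random.default_rng()
    out = np.zeros((int(duration_s * SR), 2))
    n_grains = int(duration_s * density_hz)

    for _ in range(n_grains):
        onset = rng.integers(0, out.shape[0])        # per-grain onset (samples)
        length = int(rng.uniform(0.02, 0.1) * SR)    # per-grain duration: 20-100 ms
        rate = 2.0 ** rng.uniform(-1.0, 1.0)         # per-grain pitch shift: +/- 1 octave
        amp = rng.uniform(0.05, 0.3)                 # per-grain amplitude
        pan = rng.uniform(0.0, 1.0)                  # per-grain stereo position

        # Read a slice of the source at the grain's own playback rate.
        start = rng.integers(0, max(1, len(source) - int(length * rate)))
        idx = start + np.arange(length) * rate
        grain = np.interp(idx, np.arange(len(source)), source)

        grain *= np.hanning(length) * amp            # per-grain window and gain

        end = min(onset + length, out.shape[0])
        n = end - onset
        out[onset:end, 0] += grain[:n] * (1.0 - pan) # left channel
        out[onset:end, 1] += grain[:n] * pan         # right channel
    return out

# Usage: a 5-second cloud of 40 grains per second drawn from one second of noise.
cloud = render_grain_cloud(np.random.default_rng(0).standard_normal(SR), 5.0, 40.0)
```
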
Fundamental Sound: A Conversation with Hubert Howe
Computer Music Journal Pub Date : 2021-09-01 DOI: 10.1162/comj_a_00611
Mark Zaki
{"title":"Fundamental Sound: A Conversation with Hubert Howe","authors":"Mark Zaki","doi":"10.1162/comj_a_00611","DOIUrl":"10.1162/comj_a_00611","url":null,"abstract":"Hubert Howe (see Figure 1) received AB, MFA, and PhD degrees from Princeton University, where he studied with J. K. (“Jim”) Randall, Godfrey Winham, and Milton Babbitt. As one of the early researchers in computer music, he was a principal contributor to the development of the Music 4B and Music 4BF programs. In 1968, he joined the faculty of Queens College of the City University of New York (CUNY), where he became a professor of music and director of the electronic music studios. He also taught computer music at the Juilliard School in Manhattan for 20 years. Howe has been a member of the American Composers Alliance since 1974 and has served as its President from 2002 to 2011. He is also a member of the New York Composers Circle and has served as Executive Director since 2013. He is currently active as Director of the New York City Electroacoustic Music Festival, which he founded in 2009. Recordings of his music have been released on the labels Capstone and Centaur, among others. This conversation took place over Zoom during March and April 2022. It begins with a look at Howe’s student years at Princeton and traces his pioneering journey through to his musical activity today. Aspects of his composition and programming work are discussed, as well as his thoughts on pitch structure and timbral approaches to composition. More information about his music and work can be found at http://www.huberthowe.org.","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 3","pages":"9-19"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42794851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
J.H. Kwabena Nketia
Computer Music Journal Pub Date : 2021-07-28 DOI: 10.1093/obo/9780199757824-0294
{"title":"J.H. Kwabena Nketia","authors":"","doi":"10.1093/obo/9780199757824-0294","DOIUrl":"https://doi.org/10.1093/obo/9780199757824-0294","url":null,"abstract":"Joseph Hanson Kwabena Nketia (b. 22 June 1921–d. 13 March 2019) from Ghana was the preeminent scholar of African musics, whose field research in the 1940s in varied ways formed the foundation of music scholarship in Africa and predated ethnomusicology as an academic discipline in the United States. A prolific writer, music educator, and composer, his publications on key topics in African musicology are pivotal to the transdisciplinary field of African studies. Born and raised in Asante Mampong, Nketia was tutored in two worlds of knowledge systems: his traditional musical environment generated and sustained a lifelong interest in indigenous systems, and his European-based formal education provided the space for scholarship at home and around the world. At the Presbyterian Training College at Akropong-Akwapem, he was introduced to the elements of European music by Robert Danso and Ephraim Amu. The latter’s choral and instrumental music in the African idiom made a lasting impression on Nketia as he combined oral compositional conventions in traditional music with compositional models in European classical music in his own written compositions. From 1944 to 1949, Nketia studied modern linguistics in SOAS at the University of London. His mentor was John Firth, who spearheaded the famous London school of linguistics. He also enrolled at the Trinity College of Music and Birkbeck College to study Western music, English, and history. The result of his studies in linguistics and history are the publications of classic texts cited in this bibliography. From 1952 to 1979, Nketia held positions at the University of Ghana including a research fellow in sociology, the founding director of the School of Performing Arts, and the first African director of the Institute of African Studies; and together with Mawere Opoku, he established the Ghana Dance Ensemble. This was a time that he embarked on extensive field research and documentation of music traditions all over Ghana. His students and the school provided creative outlets for his scholarly publications as he trained generations of Ghanaians. In 1958, a Rockefeller Foundation Fellowship enabled Nketia to study composition and musicology at Juilliard and Columbia with the likes of Henry Cowell, and he came out convinced that his compositions should reflect his African identity. Further, he interacted with Curt Sachs, Melville Herskovits, Alan Merriam, and Mantle Hood, which placed Nketia at the center of intellectual debates in the formative years of ethnomusicology. From 1979 to 1983, Nketia was appointed to the faculty of the Institute of Ethnomusicology at UCLA; and from 1983 to 1991, to the Mellon Chair at the University of Pittsburgh, where he trained generations of Americans and Africans. Nketia returned to Ghana and founded the International Center for African Music and Dance (1992–2010) and also served as the first chancellor of the Akrofi-Christaller Institute of Theology (2006–2016). 
Joseph Hanson Kwa","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"77 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85702174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
About This Issue
Computer Music Journal Pub Date : 2021-06-01 DOI: 10.1162/comj_e_00602
{"title":"About This Issue","authors":"","doi":"10.1162/comj_e_00602","DOIUrl":"10.1162/comj_e_00602","url":null,"abstract":"This issue’s articles each consider a different area of audio processing. The first three deal with specific signal-processing techniques in the areas of filtering, spatialization, and synthesis, respectively. The fourth concerns data mining in audio corpora, typically employing descriptors obtained from signal processing. In the first article, Lazzarini and Timoney present their digital filter designs that are derived from analog filters. The authors contend that examining the high-level block diagrams and transfer functions of an analog model can yield benefits not found in the “virtual analog” approach of attempting to analyze and reproduce every detail of a specific analog circuit. As evidence, they offer both linear and nonlinear versions of a digital filter derived from the analog state variable filter. They then extend the nonlinear design to a filter that goes beyond the analog model by incorporating ideas stemming from waveshaping synthesis. In the area of spatialization, the article by Schlienger and Khashchanskiy demonstrates how acoustic localization can be used effectively, and at low cost, for tracking the position of a person participating in a musical performance or an art installation. Acoustic localization ascertains the distance and direction of a sound source or a sound recipient. The authors take advantage of loudspeakers already deployed in a performance, adding a measurement signal that is above the frequency range of human hearing to the audible music that the loudspeaker may be concurrently emitting. The human participant","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 2","pages":"1-2"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48270272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
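For readers unfamiliar with the state variable filter mentioned above, the sketch below shows the classic Chamberlin digital SVF, a standard textbook discretization of the analog state variable topology. It is offered only as background under that assumption; it is not the linear or nonlinear design that Lazzarini and Timoney derive in their article.

```python
import math

def svf_process(x, fc, q, sr=44100.0):
    """Chamberlin digital state variable filter (a common textbook
    discretization of the analog SVF). Returns low-pass, band-pass,
    and high-pass outputs for the input sequence x."""
    f = 2.0 * math.sin(math.pi * fc / sr)   # frequency coefficient
    damp = 1.0 / q                          # damping derived from resonance
    low = band = 0.0
    lows, bands, highs = [], [], []
    for s in x:
        high = s - low - damp * band        # high-pass node
        band += f * high                    # band-pass integrator
        low += f * band                     # low-pass integrator
        lows.append(low); bands.append(band); highs.append(high)
    return lows, bands, highs
```
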
A Physically Inspired Implementation of Xenakis's Stochastic Synthesis: Diffusion Dynamic Stochastic Synthesis
Computer Music Journal Pub Date : 2021-06-01 DOI: 10.1162/comj_a_00606
Emilio L. Rojas; Rodrigo F. Cádiz
{"title":"A Physically Inspired Implementation of Xenakis's Stochastic Synthesis: Diffusion Dynamic Stochastic Synthesis","authors":"Emilio L. Rojas;Rodrigo F. Cádiz","doi":"10.1162/comj_a_00606","DOIUrl":"10.1162/comj_a_00606","url":null,"abstract":"Abstract This article presents an extension of Iannis Xenakis's Dynamic Stochastic Synthesis (DSS) called Diffusion Dynamic Stochastic Synthesis (DDSS). This extension solves a diffusion equation whose solutions can be used to map particle positions to amplitude values of several breakpoints in a waveform, following traditional concepts of DSS by directly shaping the waveform of a sound. One significant difference between DSS and DDSS is that the latter includes a drift in the Brownian trajectories that each breakpoint experiences through time. Diffusion Dynamic Stochastic Synthesis can also be used in other ways, such as to control the amplitude values of an oscillator bank using additive synthesis, shaping in this case the spectrum, not the waveform. This second modality goes against Xenakis's original desire to depart from classical Fourier synthesis. The results of spectral analyses of the DDSS waveform approach, implemented using the software environment Max, are discussed and compared with the results of a simplified version of DSS to which, despite the similarity in the overall form of the frequency spectrum, noticeable differences are found. In addition to the Max implementation of the basic DDSS algorithm, a MIDI-controlled synthesizer is also presented here. With DDSS we introduce a real physical process, in this case diffusion, into traditional stochastic synthesis. This sort of sonification can suggest models of sound synthesis that are more complex and grounded in physical concepts.","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 2","pages":"48-66"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42656475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
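As a rough illustration of the waveform-shaping modality described above, the following sketch generates successive waveform periods by linear interpolation between breakpoints whose amplitudes follow a random walk with drift. It is a toy reading of the idea, not the published algorithm: DDSS obtains its values from solutions of a diffusion equation, whereas this sketch substitutes a fixed drift constant, and all names and parameter values are invented for the example.

```python
import numpy as np

def dss_with_drift(n_breakpoints=12, n_periods=200, period_len=512,
                   sigma=0.02, drift=0.005, rng=None):
    """Toy dynamic stochastic synthesis with drifted Brownian breakpoints.

    Each waveform period is a piecewise-linear interpolation between
    breakpoint amplitudes; between periods every breakpoint takes a
    Brownian step plus a drift term, loosely echoing the DDSS idea of
    adding drift to Xenakis-style stochastic synthesis.
    """
    rng = rng or np.random.default_rng()
    amps = rng.uniform(-0.1, 0.1, n_breakpoints)    # breakpoint amplitudes
    xp = np.linspace(0, period_len, n_breakpoints)  # breakpoint positions
    out = np.empty(n_periods * period_len)
    for p in range(n_periods):
        # One period: piecewise-linear waveform through the breakpoints.
        out[p * period_len:(p + 1) * period_len] = np.interp(
            np.arange(period_len), xp, amps)
        # Random walk with drift, clipped back into [-1, 1].
        amps += rng.normal(drift, sigma, n_breakpoints)
        amps = np.clip(amps, -1.0, 1.0)
    return out
```
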
Products of Interest
Computer Music Journal Pub Date : 2021-06-01 DOI: 10.1162/comj_r_00601
{"title":"Products of Interest","authors":"","doi":"10.1162/comj_r_00601","DOIUrl":"https://doi.org/10.1162/comj_r_00601","url":null,"abstract":"Expressive E, creator of the Touche MIDI/CV controller and the Osmose keyboard synthesizer/controller, has teamed up with Applied Acoustics Systems (AAS), renowned for their physical modeling software instruments, to create a new software plug-in instrument called Imagine (see Figure 1). Imagine allows the user to create and play sounds based on the resonant bodies of physical real-life instruments and to modify them to create fantastical instruments and new acoustic landscapes. Expressive E has created hundreds of presets for Imagine based on feedback from musicians, composers, sound designers, and producers. Each preset is made up","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 2","pages":"91-106"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49947452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Immersive Spatial Interactivity in Sonic Arts: The Acoustic Localization Positioning System
Computer Music Journal Pub Date : 2021-06-01 DOI: 10.1162/comj_a_00605
Dominik Schlienger; Victor Khashchanskiy
{"title":"Immersive Spatial Interactivity in Sonic Arts: The Acoustic Localization Positioning System","authors":"Dominik Schlienger;Victor Khashchanskiy","doi":"10.1162/comj_a_00605","DOIUrl":"10.1162/comj_a_00605","url":null,"abstract":"Abstract The Acoustic Localization Positioning System is the outcome of several years of participatory development with musicians and artists having a stake in sonic arts, collaboratively aiming for nonobtrusive tracking and indoors positioning technology that facilitates spatial interaction and immersion. Based on previous work on application scenarios for spatial reproduction of moving sound sources and the conception of the kinaesthetic interface, a tracking system for spatially interactive sonic arts is presented here. It is an open-source implementation in the form of a stand-alone application and associated Max patches. The implementation uses off-the-shelf, ubiquitous technology. Based on the findings of tests and experiments conducted in extensive creative workshops, we show how the approach addresses several technical problems and overcomes some typical obstacles to immersion in spatially interactive applications in sonic arts.","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 2","pages":"24-47"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44694106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
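The underlying measurement principle, estimating the time of flight of an inaudible signal emitted from loudspeakers already in the room, can be sketched with a simple cross-correlation. The example below is a generic time-of-flight estimate, not the Acoustic Localization Positioning System itself; the sample rate, chirp parameters, and noise model are assumptions made for illustration.

```python
import numpy as np

SR = 48000              # sample rate in Hz; assumed
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees Celsius

def estimate_distance(emitted, recorded, sr=SR):
    """Estimate source-to-microphone distance from time of flight.

    Cross-correlates the known measurement signal (e.g., a near-ultrasonic
    chirp played from a loudspeaker) with the microphone recording and
    converts the lag of the correlation peak into a distance.
    """
    corr = np.correlate(recorded, emitted, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(emitted) - 1)  # delay in samples
    return max(lag, 0) / sr * SPEED_OF_SOUND

# Usage with synthetic data: a high-frequency chirp delayed by the time
# sound needs to travel 3 m, buried in audible-band noise standing in for music.
t = np.arange(0, 0.05, 1.0 / SR)
chirp = np.sin(2 * np.pi * (19000 + 20000 * t) * t)
delay = int(3.0 / SPEED_OF_SOUND * SR)
recording = np.concatenate([np.zeros(delay), chirp]) + \
            0.1 * np.random.default_rng(1).standard_normal(delay + len(chirp))
print(round(estimate_distance(chirp, recording), 2), "m")  # approximately 3.0
```
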
Celebrating Electronics: Music by John Bischoff (Concert) and John Bischoff: Bitplicity (Album)
Computer Music Journal Pub Date : 2021-06-01 DOI: 10.1162/comj_r_00609
Ralph Lewis
{"title":"Celebrating Electronics: Music by John Bischoff (Concert) and John Bischoff: Bitplicity (Album)","authors":"Ralph Lewis","doi":"10.1162/comj_r_00609","DOIUrl":"10.1162/comj_r_00609","url":null,"abstract":"","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 2","pages":"84-85"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49577153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0