Distant Listening and Resonance
Tanya E. Clement
ESC: English Studies in Canada
DOI: 10.1353/esc.2020.a903548
Published: 2023-08-07 (Journal Article)
Abstract
For speech recordings, sound is text—the words people speak, but also other sounds that indicate a speaking and listening context: tone and laughter, coughing and crying, bird song, car engines and horns, a baby crying, thunder clapping, gun shots, the needle dropping, the needle scratching, to name a few. Using computation to analyze many texts at once in big data sets has been called “distant reading” in Digital Humanities (Underwood). I have described “distant listening” to sound texts as using computing to “distill the many-layered four-dimensional space of the text in performance (i.e., embodied within the performance network of interpretations with the listener in time and space) into a two-dimensional script called ‘code’ ” (Clement, “Distant Listening”). Distant listening, like distant reading, implies a lack of granular observation based on proximity in space, as well as a removal in terms of emotion, experience, and individual or subjective knowledge. Sound travels differently than light; what is lacking is made up for in other ways. What is too close can be too loud. What is far can be communicated loud and clear. Resonance is both an embodied, physical experience and a cultural hermeneutic. Specifying sound computationally is a process of discretization. Without going too far down the mathematical rabbit hole, it is safe to say that discretization is a means of mathematically representing a continuous signal as a sequence of discrete values.
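The article itself gives no code, but the idea of discretization it closes on can be illustrated concretely: digitizing sound means sampling a continuous signal at regular time intervals and quantizing each sample to an integer amplitude. The following is a minimal, hypothetical sketch (the function name `discretize` and its parameters are illustrative, not drawn from the article), using a 440 Hz sine tone as the "continuous" signal:

```python
import math

def discretize(signal, duration_s, sample_rate_hz, bit_depth=16):
    """Sample a continuous-time signal at regular intervals and
    quantize each sample to a signed integer of the given bit depth."""
    n_samples = int(duration_s * sample_rate_hz)
    max_amp = 2 ** (bit_depth - 1) - 1  # e.g. 32767 for 16-bit PCM audio
    samples = []
    for n in range(n_samples):
        t = n / sample_rate_hz              # discrete time of the nth sample
        x = signal(t)                       # continuous value, assumed in [-1.0, 1.0]
        samples.append(round(x * max_amp))  # quantized integer amplitude
    return samples

# A 440 Hz sine tone standing in for a continuous sound signal.
tone = lambda t: math.sin(2 * math.pi * 440 * t)

# CD-quality sampling: 44,100 samples per second, 16-bit depth.
pcm = discretize(tone, duration_s=0.01, sample_rate_hz=44100)
```

The two parameters capture the two losses discretization entails: the sample rate bounds which frequencies survive, and the bit depth bounds how finely amplitude is resolved—both are choices about what the two-dimensional "script" retains of the four-dimensional performance.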