Proceedings of the 3rd International Workshop on Digital Libraries for Musicology: Latest Articles

Representing and Linking Music Performance Data with Score Information
J. Devaney, Hubert Léveillé Gauvin
DOI: 10.1145/2970044.2970052 (published 2016-08-12)
Abstract: This paper argues for the need to develop a representation for music performance data that is linked with corresponding score information at the note, beat, and measure levels. Building on the results of a survey of music scholars about their music performance data encoding needs, we propose best practices for encoding perceptually relevant descriptors of the timing, pitch, loudness, and timbral aspects of performance. We are specifically interested in using descriptors that are sufficiently generalized that multiple performances of the same piece can be directly compared with one another. This paper also proposes a specific representation for encoding performance data and presents prototypes of this representation in both Humdrum and Music Encoding Initiative (MEI) formats.
Citations: 5
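The note-level timing descriptor argued for above can be illustrated with a minimal sketch (not from the paper; the function name and the nominal-tempo baseline are our assumptions): deviation of each performed onset from a mechanical rendering of the score, which yields values that are directly comparable across performances of the same piece.

```python
def timing_deviation(perf_onsets, score_beats, bpm):
    """Per-note timing deviation (in seconds) of a performance against a
    mechanical rendering of the score at a nominal tempo. Because the
    deviations are expressed relative to the score, multiple performances
    of the same piece can be compared note by note."""
    sec_per_beat = 60.0 / bpm
    return [p - b * sec_per_beat for p, b in zip(perf_onsets, score_beats)]

# A performance that lingers slightly on the second note at 120 BPM:
devs = timing_deviation([0.0, 0.55, 1.0], [0, 1, 2], bpm=120)
```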
Mining metadata from the web for AcousticBrainz
Alastair Porter, D. Bogdanov, Xavier Serra
DOI: 10.1145/2970044.2970048 (published 2016-08-12)
Abstract: Semantic annotations of music collections in digital libraries are important for organization and navigation of the collection. These annotations and their associated metadata are useful in many Music Information Retrieval tasks, and related fields in musicology. Music collections used in research are growing in size, and therefore it is useful to use semi-automatic means to obtain such annotations. We present software tools for mining metadata from the web for the purpose of annotating music collections. These tools expand on data present in the AcousticBrainz database, which contains software-generated analysis of music audio files. Using these tools we gather metadata and semantic information from a variety of sources, including both community-based services such as MusicBrainz, Last.fm, and Discogs, and commercial databases including iTunes and AllMusic. The tool can be easily expanded to collect data from a new source, and is automatically updated when new items are added to AcousticBrainz. We extract genre annotations for recordings in AcousticBrainz using our tool and study the agreement between folksonomies and expert sources. We discuss the results and explore possibilities for future work.
Citations: 5
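One step in mining folksonomy genres, filtering community tags by vote count, can be sketched as follows (an illustrative sketch, not the authors' tool; the function name and threshold are our assumptions, and the input mimics the shape of a MusicBrainz-style tag list):

```python
def top_genre_tags(recording_json, min_votes=2):
    """Return tag names from a MusicBrainz-style recording response,
    keeping only tags with at least `min_votes` community votes,
    ordered by vote count, highest first."""
    tags = recording_json.get("tags", [])
    kept = [t for t in tags if t.get("count", 0) >= min_votes]
    return [t["name"] for t in sorted(kept, key=lambda t: -t["count"])]

sample = {"title": "So What",
          "tags": [{"count": 5, "name": "jazz"},
                   {"count": 1, "name": "cool"},
                   {"count": 3, "name": "modal jazz"}]}
print(top_genre_tags(sample))  # ['jazz', 'modal jazz']
```

Filtering by vote count is one simple way to reduce noise before comparing community tags against expert sources.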
In Collaboration with In Concert: Reflecting a Digital Library as Linked Data for Performance Ephemera
Terhi Nurmikko-Fuller, A. Dix, David M. Weigl, Kevin R. Page
DOI: 10.1145/2970044.2970049 (published 2016-08-12)
Abstract: Diverse datasets in the area of Digital Musicology expose complementary information describing works, composers, performers, and wider historical and cultural contexts. Interlinking across such datasets enables new digital methods of scholarly investigation. Such bridging presents challenges when working with legacy tabular or relational datasets that do not natively facilitate linking and referencing to and from external sources. Here, we present pragmatic approaches in turning such legacy datasets into linked data. InConcert is a research collaboration exemplifying these approaches. In this paper, we describe and build on this resource, which is comprised of distinct digital libraries focusing on performance data and on concert ephemera. These datasets were merged with each other and opened up for enrichment from other sources on the Web via conversion to RDF. We outline the main features of the constituent datasets, describe conversion workflows, and perform a comparative analysis. Our findings provide practical recommendations for future efforts focused on exposing legacy datasets as linked data.
Citations: 14
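The core of such a tabular-to-linked-data conversion is mapping each row to a set of RDF triples. A minimal sketch (not the InConcert workflow; the base URI and vocabulary are hypothetical) serializing one row as N-Triples:

```python
def row_to_ntriples(row, base="http://example.org/inconcert/"):
    """Tabular-to-RDF sketch: mint one subject URI per row id and emit
    one triple per remaining column, using a hypothetical vocabulary."""
    subj = f"<{base}event/{row['id']}>"
    triples = []
    for key, value in row.items():
        if key == "id":
            continue
        pred = f"<{base}vocab/{key}>"
        triples.append(f'{subj} {pred} "{value}" .')
    return triples

row = {"id": "42", "venue": "Exeter Hall", "date": "1880-05-01"}
for t in row_to_ntriples(row):
    print(t)
```

Once rows are expressed as triples, external identifiers (for venues, performers, works) can be linked in from other sources on the Web.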
MORTY: A Toolbox for Mode Recognition and Tonic Identification
Altug Karakurt, Sertan Sentürk, Xavier Serra
DOI: 10.1145/2970044.2970054 (published 2016-08-12)
Abstract: In the general sense, mode defines the melodic framework and tonic acts as the reference tuning pitch for the melody in the performances of many music cultures. The mode and tonic information of audio recordings is essential for many music information retrieval tasks such as automatic transcription, tuning analysis, and music similarity. In this paper we present MORTY, an open source toolbox for mode recognition and tonic identification. The toolbox implements generalized variants of two state-of-the-art methods based on pitch distribution analysis. The algorithms are designed in a generic manner such that they can be easily optimized according to the culture-specific aspects of the studied music tradition. We test the generalized methodology systematically on the largest mode recognition dataset curated for Ottoman-Turkish makam music so far, which is composed of 1000 recordings in 50 modes. We obtained 95.8%, 71.8% and 63.6% accuracy in tonic identification, mode recognition, and joint mode and tonic estimation tasks, respectively. We additionally present recent experiments on Carnatic and Hindustani music in comparison with several methodologies recently proposed for raga/raag recognition. We prioritized the reproducibility of our work and provide all of our data, code and results publicly. Hence we hope that our toolbox will be used as a benchmark for future methodologies proposed for mode recognition and tonic identification, especially for music traditions in which these computational tasks have not been addressed yet.
Citations: 8
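The pitch-distribution approach named above can be boiled down to a nearest-template comparison. A toy sketch (not MORTY's implementation; the 12-bin pitch-class histograms, template values, and L1 distance are our simplifying assumptions, whereas the toolbox works on fine-grained pitch distributions):

```python
import numpy as np

def recognize_mode(test_hist, mode_templates):
    """Nearest-template mode recognition: normalize the recording's pitch
    distribution and each mode template, then pick the template with the
    smallest L1 distance."""
    test = np.asarray(test_hist, dtype=float)
    test = test / test.sum()
    best, best_d = None, float("inf")
    for mode, tmpl in mode_templates.items():
        t = np.asarray(tmpl, dtype=float)
        t = t / t.sum()
        d = np.abs(test - t).sum()
        if d < best_d:
            best, best_d = mode, d
    return best

templates = {"major": [5, 0, 1, 0, 4, 1, 0, 4, 0, 1, 0, 1],
             "minor": [5, 0, 1, 4, 0, 1, 0, 4, 1, 0, 1, 0]}
print(recognize_mode([6, 0, 0, 0, 5, 0, 0, 5, 0, 0, 0, 0], templates))
```

Tonic identification in the same framework amounts to finding the rotation of the distribution that best matches a template, which is why the two tasks can share one toolbox.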
A standard format proposal for hierarchical analyses and representations
D. Rizo, A. Marsden
DOI: 10.1145/2970044.2970046 (published 2016-08-12)
Abstract: In the realm of digital musicology, standardization efforts to date have mostly concentrated on the representation of music. Analyses of music are increasingly being generated or communicated by digital means. We demonstrate that the same arguments for the desirability of standardization in the representation of music apply also to the representation of analyses of music: proper preservation, sharing of data, and facilitation of digital processing. We concentrate here on analyses which can be described as hierarchical and show that this covers a broad range of existing analytical formats. We propose an extension of MEI (Music Encoding Initiative) to allow the encoding of analyses unambiguously associated with and aligned to a representation of the music analysed, making use of existing mechanisms within MEI's parent TEI (Text Encoding Initiative) for the representation of trees and graphs.
Citations: 4
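The key property of such hierarchical analyses, that leaves are aligned to elements of the encoded music, can be illustrated with a small sketch (not the proposed MEI extension; the nested-dict shape and key names are hypothetical stand-ins for the tree encoding):

```python
def leaf_references(tree):
    """Collect the score element ids referenced by the leaves of a
    hierarchical analysis tree. Interior nodes carry an analytical label
    and children; leaves point at a note or measure id via 'ref'."""
    if "ref" in tree:  # leaf: aligned to an element of the music encoding
        return [tree["ref"]]
    refs = []
    for child in tree.get("children", []):
        refs.extend(leaf_references(child))
    return refs

analysis = {"label": "phrase",
            "children": [{"ref": "n1"},
                         {"label": "motif",
                          "children": [{"ref": "n2"}, {"ref": "n3"}]}]}
print(leaf_references(analysis))  # ['n1', 'n2', 'n3']
```

Walking the leaves in order recovers the span of music a subtree analyses, which is what makes the alignment between analysis and encoding unambiguous.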
Document Analysis for Music Scores via Machine Learning
Jorge Calvo-Zaragoza, Gabriel Vigliensoni, Ichiro Fujinaga
DOI: 10.1145/2970044.2970047 (published 2016-08-12)
Abstract: Content within musical documents not only contains musical notation but can also include text, ornaments, annotations, and editorial data. Before any attempt at automatic recognition of elements in these layers, it is necessary to perform a document analysis process to detect and classify each of its constituent parts. The obstacle for this analysis is the high heterogeneity amongst collections, which makes it difficult to propose methods that generalize to a broader range of sources. In this paper we propose a data-driven document analysis framework based on machine learning, which focuses on classifying regions of interest at the pixel level. The main advantage of this approach is that it can be exploited regardless of the type of document provided, as long as training data is available. Our preliminary experimentation includes a set of specific tasks that can be performed on music such as the detection of staff lines, isolation of music symbols, and the layering of the document into its elemental parts.
Citations: 10
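Pixel-level layering can be sketched with the simplest possible learned classifier (an illustrative sketch, not the authors' framework; a nearest-centroid model over hand-made two-dimensional features stands in for whatever features and classifier the real system uses):

```python
import numpy as np

def train_centroids(features, labels):
    """Nearest-centroid training: one mean feature vector per layer label
    (e.g. 'background', 'ink')."""
    return {lab: features[labels == lab].mean(axis=0)
            for lab in np.unique(labels)}

def classify_pixels(features, centroids):
    """Assign each pixel's feature vector to the closest layer centroid."""
    labs = list(centroids)
    dists = np.stack([np.linalg.norm(features - centroids[lab], axis=1)
                      for lab in labs])
    return np.array(labs)[dists.argmin(axis=0)]

train_feats = np.array([[0.0, 0.0], [0.1, 0.0], [0.9, 1.0], [1.0, 1.0]])
train_labels = np.array(["background", "background", "ink", "ink"])
model = train_centroids(train_feats, train_labels)
print(classify_pixels(np.array([[0.0, 0.1], [1.0, 0.9]]), model))
```

Because the model is learned from labeled pixels of the target collection, the same pipeline applies to any document type for which training data exists, which is the framework's stated advantage.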
Exploring J-DISC: Some Preliminary Analyses
Yun Hao, Kahyun Choi, J. S. Downie
DOI: 10.1145/2970044.2970050 (published 2016-08-12)
Abstract: J-DISC, a specialized digital library for information about jazz recording sessions that includes rich structured and searchable metadata, has the potential for supporting a wide range of studies on jazz, especially the musicological work of those interested in the social network aspects of jazz creation and production. This paper provides an overview of the entire J-DISC dataset. It also presents some exemplar analyses across this dataset to better illustrate the kinds of uses that musicologists could make of this collection. Our illustrative analyses include both informetric and network analyses of the entire J-DISC data, which comprises data on 2,711 unique recording sessions associated with 3,744 distinct artists, including such influential jazz figures as Dizzy Gillespie, Don Byas, Charlie Parker, John Coltrane, and Kenny Dorham. Our analyses also show that around 60% of the recording sessions included in J-DISC were recorded in New York City, Englewood Cliffs (NJ), Los Angeles (CA), and Paris during the years 1923 to 2011. Furthermore, our analyses show that the top venues captured in the J-DISC data include Rudy Van Gelder Studio, Birdland, and Reeves Sound Studios. The potential research uses of the J-DISC data in both the DL (Digital Libraries) and MIR (Music Information Retrieval) domains are also briefly discussed.
Citations: 3
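The social-network side of such session data starts from a co-session graph. A minimal sketch (not the paper's analysis; the function name and toy sessions are ours) counting how often pairs of artists record together:

```python
from collections import Counter
from itertools import combinations

def cosession_edges(sessions):
    """Build weighted collaboration edges from recording sessions: each
    session is a list of artist names, and an edge's weight is the number
    of sessions the pair shares."""
    edges = Counter()
    for artists in sessions:
        for a, b in combinations(sorted(set(artists)), 2):
            edges[(a, b)] += 1
    return edges

sessions = [["Charlie Parker", "Dizzy Gillespie"],
            ["Charlie Parker", "Dizzy Gillespie", "Don Byas"]]
print(cosession_edges(sessions))
```

Edge weights from such a graph feed directly into standard network measures (degree, centrality) of the kind musicologists would apply to jazz creation and production.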
Data Generation and Multi-Modal Analysis for Recorded Operatic Performance
Joshua Neumann
DOI: 10.1145/2970044.2970045 (published 2016-08-12)
Abstract: Commercial recordings of live opera performance are only sporadically available, mostly due to various legal protections held by opera houses. The resulting on-site, archive-only access inhibits analysis of the creative process in "live" environments. Based on a technique I developed for generating performance data from copyright-protected archival recordings, this paper presents a means of interrogating the creative practice in individual operatic performances and across the corpus of a recorded performance history. My analysis uses "In questa Reggia" from Giacomo Puccini's Turandot as performed at New York's Metropolitan Opera. The first part of my analysis builds on tempo mapping developed by the Centre for the History and Analysis of Recorded Music. Given the natural relationship in which performances of the same work exist, statistical and network analyses of the data extracted from a corpus of performances offer ways to contextualize and understand how performances create a tradition to which and through which they relate to varying degrees.
Citations: 1
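Tempo mapping of the kind referenced above reduces, at its simplest, to converting tapped beat onsets into a local tempo curve. A minimal sketch (our illustration, not the author's technique or CHARM's tooling):

```python
def beat_tempi(onsets):
    """Local tempo curve from successive beat onset times in seconds:
    one BPM value per inter-beat interval."""
    return [60.0 / (b - a) for a, b in zip(onsets, onsets[1:])]

# Four beats tapped half a second apart imply a steady 120 BPM:
print(beat_tempi([0.0, 0.5, 1.0, 1.5]))  # [120.0, 120.0, 120.0]
```

Tempo curves extracted this way from many recordings of the same aria can then be compared statistically across a performance history.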
The Music Addressability API: A draft specification for addressing portions of music notation on the web
Raffaele Viglianti
DOI: 10.1145/2970044.2970056 (published 2016-08-12)
Abstract: This paper describes an Application Programming Interface (API) for addressing music notation on the web regardless of the format in which it is stored. This API was created as a method for addressing and extracting specific portions of music notation published in machine-readable formats on the web. Music notation, like text, can be "addressed" in new ways in a digital environment, allowing scholars to identify and name structures of various kinds, thus raising such questions as: how can one virtually "circle" some music notation? How can a machine interpret this "circling" to select and retrieve the relevant music notation? The API was evaluated by: 1) creating an implementation of the API for documents in the Music Encoding Initiative (MEI) format; and 2) remodelling a dataset of music analysis statements from the Du Chemin: Lost Voices project (Haverford College) by using the API to connect the analytical statements with the portion of notation they refer to. Building this corpus has demonstrated that the Music Addressability API is capable of modelling complex analytical statements containing references to music notation.
Citations: 5
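Interpreting a machine-readable "circling" means parsing a selector over measures, staves, and beats into explicit targets. A sketch in the spirit of the draft (the selector syntax below is hypothetical, not the specification's actual URI scheme):

```python
def parse_selection(expr):
    """Parse a hypothetical selector like 'measures=1-3;staves=1,2;beats=1-2'
    into explicit integer lists, expanding ranges and comma lists."""
    out = {}
    for field in expr.split(";"):
        key, spec = field.split("=")
        values = []
        for part in spec.split(","):
            if "-" in part:
                lo, hi = map(int, part.split("-"))
                values.extend(range(lo, hi + 1))
            else:
                values.append(int(part))
        out[key] = values
    return out

print(parse_selection("measures=1-3;staves=1,2;beats=1-2"))
```

Once expanded, the lists can be resolved against any notation format, MEI included, which is what makes the addressing scheme format-independent.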
Approaches to handwritten conductor annotation extraction in musical scores
Eamonn Bell, L. Pugin
DOI: 10.1145/2970044.2970053 (published 2016-08-12)
Abstract: Conductor copies of musical scores are typically rich in handwritten annotations. Ongoing archival efforts to digitize orchestral conductors' scores have made scanned copies of hundreds of these annotated scores available in digital formats. The extraction of handwritten annotations from digitized printed documents is a difficult task for computer vision, with most approaches focusing on the extraction of handwritten text. However, conductors' annotation practices provide us with at least two affordances, which make the task more tractable in the musical domain. First, many conductors opt to mark their scores using colored pencils, which contrast with the black and white print of sheet music. Consequently, we show promising results when using color separation techniques alone to recover handwritten annotations from conductors' scores. We also compare annotated scores to unannotated copies and use a printed sheet music comparison tool to recover handwritten annotations as additions to the clean copy. We then investigate the use of both of these techniques in a combined method, which improves the results of the color separation technique. These techniques are demonstrated using a sample of orchestral scores annotated by professional conductors of the New York Philharmonic. Handwritten annotation extraction in musical scores has applications to the systematic investigation of score annotation practices by performers, annotator attribution, and to the interactive presentation of annotated scores, which we briefly discuss.
Citations: 2
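The color-separation affordance rests on a simple observation: printed notation is near-gray (R, G, and B roughly equal), while colored-pencil marks are not. A minimal sketch of that idea (our illustration, not the authors' pipeline; the threshold is an arbitrary assumption):

```python
import numpy as np

def annotation_mask(rgb, spread_threshold=40):
    """Flag likely colored-pencil pixels in an RGB image array: printed
    black-and-white notation has a small max-min channel spread, colored
    marks a large one. Returns a boolean mask of the same height/width."""
    rgb = rgb.astype(int)  # avoid uint8 underflow in the subtraction
    spread = rgb.max(axis=-1) - rgb.min(axis=-1)
    return spread > spread_threshold

# One reddish annotation pixel next to one gray printed pixel:
img = np.array([[[200, 50, 50], [120, 120, 120]]], dtype=np.uint8)
print(annotation_mask(img))  # [[ True False]]
```

A real pipeline would follow the mask with morphological cleanup and, as the paper describes, combine it with a comparison against an unannotated copy of the same edition.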