{"title":"Comparing Meta-Classifiers for Automatic Music Genre Classification","authors":"V. Y. Shinohara, J. Foleiss, T. Tavares","doi":"10.5753/sbcm.2019.10434","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10434","url":null,"abstract":"Automatic music genre classification is the problem of associating mutually-exclusive labels to audio tracks. This process fosters the organization of collections and facilitates searching and marketing music. One approach for automatic music genre classification is to use diverse vector representations for each track, and then classify them individually. After that, a majority voting system can be used to infer a single label to the whole track. In this work, we evaluated the impact of changing the majority voting system to a meta-classifier. The classification results with the meta-classifier showed statistically significant improvements when related to the majority-voting classifier. This indicates that the higher-level information used by the meta-classifier might be relevant for automatic music genre classification.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115129886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computer Music research at FEEC/Unicamp: a snapshot of 2019","authors":"T. Tavares, B. Masiero","doi":"10.5753/sbcm.2019.10438","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10438","url":null,"abstract":"This is a lab report paper about the state of affairs in the computer music research group at the School of Electrical and Computer Engineering of the University of Campinas (FEEC/Unicamp). This report discusses the people involved in the group, the efforts in teaching and the current research work performed. Last, it provides some discussions on the lessons learned from the past few years and some pointers for future work.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"469 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125840723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ha Dou Ken Music: Mapping a joysticks as a musical controller","authors":"Gabriel Lopes Rocha, J. Araújo, F. Schiavoni","doi":"10.5753/sbcm.2019.10425","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10425","url":null,"abstract":"The structure of a digital musical instrument (DMI) can be splitted up in three parts: interface, mapping and synthesizer. For DMI’s, in which sound synthesis is done via software, the interaction interface serves to capture the performer’s gestures, which can be mapped under various techniques to different sounds. In this work, we bring videogame controls as an interface for musical interaction. Due to its great presence in popular culture and its ease of access, even people who are not in the habit of playing electronic games possibly interacted with this kind of interface once in a lifetime. Thus, gestures like pressing a sequence of buttons, pressing them simultaneously or sliding your fingers through the control can be mapped for musical creation. This work aims the elaboration of a strategy in which several gestures captured by the interface can influence one or several parameters of the sound synthesis, making a mapping denominated many to many. Buttons combinations used to perform game actions that are common in fighting games, like Street Fighter, were mapped to the synthesizer to create a music. Experiments show that this mapping is capable of influencing the musical expression of a DMI making it closer to an acoustic instrument.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126643360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Per(sino)ficação","authors":"Fábio Dos Passos Carvalho, F. Schiavoni, João Teixeira","doi":"10.5753/sbcm.2019.10456","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10456","url":null,"abstract":"The bell’s culture is a secular tradition strongly linked to the religious and social activities of the old Brazilian’s villages. In São João del-Rei, where the singular bell tradition composes the soundscape of the city, the bell’s ringing created from different rhythmic and timbral patterns, establish a language capable of transmitting varied types of messages to the local population. In this way, the social function of these ringing, added to real or legendary facts related to the bell’s culture, were able to produce affections and to constitute a strong relation with the identity of the community. The link of this community with the bells, therefore transcends the man-object relationship, tending to an interpersonal relationship practically. Thus, to emphasize this connection in an artistic way, it is proposed the installation called: PER (SINO) FICAÇÂO. This consists of an environment where users would have their physical attributes collected through the use of computer vision. From the interlocking of these data with timbral attributes of the bells, visitors would be able to sound like these, through mapped bodily attributes capable of performing syntheses based on original samples of the bells. Thus the inverse sense of the personification of the bell is realized, producing the human “bellification”.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122622506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A cluster analysis of benchmark acoustic features on Brazilian music","authors":"Leonardo Antunes Ferreira, Estela Ribeiro, C. Thomaz","doi":"10.5753/sbcm.2019.10444","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10444","url":null,"abstract":"In this work, we extend a standard and successful acoustic feature extraction approach based on trigger selection to examples of Brazilian Bossa-Nova and Heitor Villa Lobos music pieces. Additionally, we propose and implement a computational framework to disclose whether all the acoustic features extracted are statistically relevant, that is, non-redundant. Our experimental results show that not all these well-known features might be necessary for trigger selection, given the multivariate statistical redundancy found, which associated all these acoustic features into 3 clusters with different factor loadings and, consequently, representatives.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124747080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prototyping Web instruments with Mosaicode","authors":"A. Gomes, F. Resende, L. Goncalves, F. Schiavoni","doi":"10.5753/sbcm.2019.10431","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10431","url":null,"abstract":"Many HTML 5 features enable you to build audio applications for web browsers, simplifying the distribution of these applications, and turning any computer, mobile, and portable device into a digital musical instrument. Developing such applications is not an easy task for layprogrammers or non-programmers and may require some effort by musicians and artists to encode audio applications based on HTML5 technologies and APIs. In order to simplify this task, this paper presents the Mosaicode, a Visual programming environment that enables the development of Digital Musical Instruments using the visual programming paradigm. Applications can be developed in the Mosaicode from diagrams – blocks, which encapsulate basic programming functions, and connections, to exchange information among the blocks. The Mosaicode, by having the functionality of generating, compiling and executing codes, can be used to quickly prototype musical instruments, and make it easy to use for beginners looking for learn programming and expert developers who need to optimize the construction of musical applications.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124441119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"J-Analyzer: A Software for Computer-Assisted Analysis of Antônio Carlos Jobims Songs","authors":"C. Almada, João Penchel, Igor Chagas, Max Kühn, Claudia Usai, Eduardo Cabral, Vinicius Braga, Ana Miccolis","doi":"10.5753/sbcm.2019.10416","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10416","url":null,"abstract":"The present paper describes structure and functioning of J-Analyzer, a computational tool for assistedanalysis. It integrates a research project intended to investigate the complete song collection by Brazilian composer Antônio Carlos Jobim, focusing on the aspect of harmonic transformation. The program is used to determine the nature of transformational relations between any chordal pair of chords present in a song, as well as the structure of the chords themselves.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126434613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A retrospective of the research on musical expression conducted at CEGeME","authors":"M. Loureiro, T. Magalhaes, Davi Mota, T. Campolina, Aluizio Oliveira","doi":"10.5753/sbcm.2019.10440","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10440","url":null,"abstract":"CEGeME - Center for Research on Musical Gesture and Expression is affiliated to the Graduate Program in Music of the Universidade Federal de Minas Gerais (UFMG), hosted by the School of Music, Belo Horizonte, Brazil, since 2008. Focused on the empirical investigation of music performance, research at CEGeME departs from musical content information extracted from audio signals and three-dimensional spatial position of musicians, recorded during a music performance. Our laboratories are properly equipped for the acquisition of such data. Aiming at establishing a musicological approach to different aspects of musical expressiveness, we investigate causal relations between the expressive intention of musicians and the way they manipulate the acoustic material and how they move while playing a piece of music. The methodology seeks support on knowledge such as computational modeling, statistical analysis, and digital signal processing, which adds to traditional musicology skills. The group has attracted study postulants from different specialties, such as Computer Science, Engineering, Physics, Phonoaudiology and Music Therapy, as well as collaborations from professional musicians instigated by specific inquiries on the performance on their instruments. This paper presents a brief retrospective of the different research projects conducted at CEGeME.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134045285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic onset detection using convolutional neural networks","authors":"W. Cornelissen, M. Loureiro","doi":"10.5753/sbcm.2019.10446","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10446","url":null,"abstract":"A very significant task for music research is to estimate instants when meaningful events begin (onset) and when they end (offset). Onset detection is widely applied in many fields: electrocardiograms, seismographic data, stock market results and many Music Information Research(MIR) tasks, such as Automatic Music Transcription, Rhythm Detection, Speech Recognition, etc. Automatic Onset Detection(AOD) received, recently, a huge contribution coming from Artificial Intelligence (AI) methods, mainly Machine Learning and Deep Learning. In this work, the use of Convolutional Neural Networks (CNN) is explored by adapting its original architecture in order to apply the approach to automatic onset detection on audio musical signals. We used a CNN network for onset detection on a very general dataset, well acknowledged by the MIR community, and examined the accuracy of the method by comparison to ground truth data published by the dataset. The results are promising and outperform another methods of musical onset detection.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"190 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125843574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PSYCHO library for Pure Data","authors":"Alexandre Torres Porres","doi":"10.5753/sbcm.2019.10432","DOIUrl":"https://doi.org/10.5753/sbcm.2019.10432","url":null,"abstract":"This paper describes the PSYCHO library for the Pure Data programming language. This library provides novel functions for Pure Data and is a collection of compiled objects, abstractions and patches that include psychoacoustic models and conversions. Most notably, it provides models related to Sensory Dissonance, such as Sharpness, Roughness, Tonalness and Pitch Commonality. This library is an evolution and revision of earlier research work developed during a masters and PhD program. The previous developments had not been made easily available as a single and well documented library. Moreover, the work went through a major overhaul, got rid of the dependance of Pd Extended (now an abandoned and unsupported software) and provides new features. This paper describes the evolution of the early work into the PSYCHO library and presents its main objects, functions and contributions.","PeriodicalId":338771,"journal":{"name":"Anais do Simpósio Brasileiro de Computação Musical (SBCM 2019)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122655196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}