CNN-based Segmentation and Classification of Sound Streams under realistic conditions

Eleni Tsalera, A. Papadakis, M. Samarakou, I. Voyiatzis
DOI: 10.1145/3575879.3576020
Published in: Proceedings of the 26th Pan-Hellenic Conference on Informatics, 2022-11-25

Abstract

Audio datasets support the training and validation of Machine Learning algorithms in audio classification problems. Such datasets include different, arbitrarily chosen audio classes. We initially investigate a unifying approach, based on mapping audio classes according to the AudioSet ontology. Using the ESC-10 audio dataset, a tree-like representation of its classes is created. In addition, we employ an audio similarity calculation tool based on the values of extracted features (spectral centroid, spectral flux, and spectral roll-off). This way the audio classes are connected both semantically and in a feature-based manner. Employing the same dataset, ESC-10, we perform sound classification using CNN-based algorithms, after transforming the sound excerpts into images (based on their Mel spectrograms). The YAMNet and VGGish networks are used for audio classification, and the accuracy reaches 90%. We extend the classification algorithm with segmentation logic, so that it can be applied to more complex sound excerpts, where multiple sound types appear in a sequential and/or overlapping manner. Quantitative metrics are defined for the behavior of the combined segmentation and classification functionality, including two key parameters of the merging operation: the minimum duration of identified sounds and the interval between them. The qualitative metrics relate to the number of sound identification events for a concatenated sound excerpt of the dataset and for each sound class. This way the segmentation logic can operate in both a fine- and a coarse-grained manner, while the dataset and the individual sound classes are characterized in terms of clarity and distinguishability.
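The abstract names three spectral features used by the similarity tool. As a minimal sketch, two of them (centroid and roll-off) can be computed from a magnitude spectrum under their common textbook definitions; the authors' exact extraction tool is not specified, so this is an illustration, not their implementation.

```python
import numpy as np

def spectral_features(frame, sr):
    """Spectral centroid and 85% roll-off of a single audio frame,
    using the common textbook definitions."""
    mag = np.abs(np.fft.rfft(frame))              # magnitude spectrum
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    centroid = float(np.sum(freqs * mag) / np.sum(mag))
    cumulative = np.cumsum(mag)
    # lowest frequency below which 85% of the total magnitude lies
    rolloff = float(freqs[np.searchsorted(cumulative, 0.85 * cumulative[-1])])
    return centroid, rolloff

sr = 16000
t = np.arange(sr) / sr                            # one second of audio
tone = np.sin(2 * np.pi * 440.0 * t)              # pure 440 Hz sine
centroid, rolloff = spectral_features(tone, sr)
# both land at about 440 Hz for a pure 440 Hz tone
```

For a single pure tone both values collapse onto the tone's frequency, which is a quick sanity check before applying the features to real sound excerpts.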
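The merging operation described above is governed by two parameters, a minimum duration for identified sounds and the interval (gap) between them. A plausible sketch of such a merge is shown below; the function name, event representation, and default values are assumptions for illustration, not the paper's code.

```python
def merge_events(events, min_duration=0.5, max_gap=0.25):
    """Merge same-class (start, end, label) events separated by less
    than max_gap seconds, then drop results shorter than min_duration."""
    merged = []
    for start, end, label in sorted(events):
        # extend the previous event if it has the same label and the gap is small
        if merged and merged[-1][2] == label and start - merged[-1][1] <= max_gap:
            prev_start, prev_end, _ = merged[-1]
            merged[-1] = (prev_start, max(prev_end, end), label)
        else:
            merged.append((start, end, label))
    # enforce the minimum-duration parameter
    return [(s, e, l) for (s, e, l) in merged if e - s >= min_duration]

detections = [(0.0, 0.3, "dog"), (0.4, 0.9, "dog"), (1.5, 1.6, "rain")]
print(merge_events(detections))  # [(0.0, 0.9, 'dog')]
```

Raising max_gap and lowering min_duration makes the segmentation coarser or finer, which matches the fine- and coarse-grained operation the abstract describes.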