B. Fazenda, P. Kendrick, T. Cox, Francis F. Li, Iain Jackson
{"title":"用户生成内容中音频质量的感知和自动评估:一个改进的模型","authors":"B. Fazenda, P. Kendrick, T. Cox, Francis F. Li, Iain Jackson","doi":"10.1109/QoMEX.2016.7498974","DOIUrl":null,"url":null,"abstract":"Technology to record sound, available in personal devices such as smartphones or video recording devices, is now ubiquitous. However, the production quality of the sound on this user-generated content is often very poor: distorted, noisy, with garbled speech or indistinct music. Our interest lies in the causes of the poor recording, especially what happens between the sound source and the electronic signal emerging from the microphone, and finding an automated method to warn the user of such problems. Typical problems, such as distortion, wind noise, microphone handling noise and frequency response, were tested. A perceptual model has been developed from subjective tests on the perceived quality of such errors and data measured from a training dataset composed of various audio files. It is shown that perceived quality is associated with distortion and frequency response, with wind and handling noise being just slightly less important. In addition, the contextual content of the audio sample was found to modulate perceived quality at similar levels to degradations such as wind and rendering those introduced by handling noise negligible.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"33 1","pages":"1-6"},"PeriodicalIF":0.0000,"publicationDate":"2016-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Perception and automated assessment of audio quality in user generated content: An improved model\",\"authors\":\"B. Fazenda, P. Kendrick, T. Cox, Francis F. Li, Iain Jackson\",\"doi\":\"10.1109/QoMEX.2016.7498974\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Technology to record sound, available in personal devices such as smartphones or video recording devices, is now ubiquitous. However, the production quality of the sound on this user-generated content is often very poor: distorted, noisy, with garbled speech or indistinct music. Our interest lies in the causes of the poor recording, especially what happens between the sound source and the electronic signal emerging from the microphone, and finding an automated method to warn the user of such problems. Typical problems, such as distortion, wind noise, microphone handling noise and frequency response, were tested. A perceptual model has been developed from subjective tests on the perceived quality of such errors and data measured from a training dataset composed of various audio files. It is shown that perceived quality is associated with distortion and frequency response, with wind and handling noise being just slightly less important. 
In addition, the contextual content of the audio sample was found to modulate perceived quality at similar levels to degradations such as wind and rendering those introduced by handling noise negligible.\",\"PeriodicalId\":6645,\"journal\":{\"name\":\"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)\",\"volume\":\"33 1\",\"pages\":\"1-6\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-06-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/QoMEX.2016.7498974\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/QoMEX.2016.7498974","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Perception and automated assessment of audio quality in user generated content: An improved model
Technology to record sound, available in personal devices such as smartphones or video recording devices, is now ubiquitous. However, the production quality of the sound in this user-generated content is often very poor: distorted, noisy, with garbled speech or indistinct music. Our interest lies in the causes of the poor recording, especially what happens between the sound source and the electronic signal emerging from the microphone, and in finding an automated method to warn the user of such problems. Typical problems, such as distortion, wind noise, microphone handling noise and frequency response, were tested. A perceptual model has been developed from subjective tests on the perceived quality of such errors and from data measured on a training dataset composed of various audio files. It is shown that perceived quality is associated most strongly with distortion and frequency response, with wind and handling noise being only slightly less important. In addition, the contextual content of the audio sample was found to modulate perceived quality at levels similar to degradations such as wind noise, and to render those introduced by handling noise negligible.
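To make the idea of automated degradation warnings concrete, the sketch below shows how a system of this general kind might extract crude per-recording features for two of the degradations discussed (clipping distortion and low-frequency wind noise) and map them to a quality score. This is a minimal illustration under invented assumptions, not the model described in the paper: the function names, thresholds and weights are hypothetical, and the authors' actual features and perceptual mapping are derived from their subjective tests and training dataset.

```python
# Illustrative sketch only (hypothetical features and weights), not the paper's model.
import numpy as np

def clipping_ratio(x, threshold=0.99):
    """Fraction of samples at or near full scale -- a crude distortion proxy."""
    return float(np.mean(np.abs(x) >= threshold))

def low_frequency_energy_ratio(x, fs, cutoff_hz=100.0):
    """Share of spectral energy below cutoff_hz -- a crude wind-noise proxy."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return float(spectrum[freqs < cutoff_hz].sum() / (spectrum.sum() + 1e-12))

def predict_quality(x, fs):
    """Map the two features to a 1-5 score with made-up penalty weights."""
    features = np.array([clipping_ratio(x), low_frequency_energy_ratio(x, fs)])
    weights = np.array([-3.0, -2.0])  # hypothetical penalties per unit feature
    return float(np.clip(5.0 + features @ weights, 1.0, 5.0))

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    clean = 0.3 * np.sin(2 * np.pi * 440 * t)
    clipped = np.clip(3.0 * clean, -1.0, 1.0)  # simulate hard clipping
    print(predict_quality(clean, fs), predict_quality(clipped, fs))
```

In practice such features would feed a model calibrated against listener ratings, which is the role the paper's subjective tests and training dataset play; the linear mapping above merely illustrates the input-output shape of that kind of predictor.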