Sam O’Connor Russell, Iona Gessinger, Anna Krason, Gabriella Vigliocco, Naomi Harte
{"title":"自动语音识别在会话语音转录方面能做什么,不能做什么","authors":"Sam O’Connor Russell , Iona Gessinger , Anna Krason , Gabriella Vigliocco , Naomi Harte","doi":"10.1016/j.rmal.2024.100163","DOIUrl":null,"url":null,"abstract":"<div><div>Transcripts are vital in any research involving conversation. Most transcription is conducted manually, by experts; a process which can take many times longer than the conversation itself. Recently, there has been interest in using automatic speech recognition (ASR) to automate transcription, driven by the wide availability of ASR platforms such as OpenAI’s Whisper. However as studies typically focus on metrics such as the word error rate, there is a lack of detail about ASR transcript quality and the practicalities of ASR use in research. In this paper we review six state-of-the-art ASR technologies, three commercial and three open-source. We assess their capabilities as automatic transcription tools. We find that the commercial ASR systems mostly capture an accurate representation of what was said, and overlapping speech is handled well. Unlike prior work, we show that commercial ASR also preserves the location, but not necessarily the spelling of a large majority of non-lexical tokens: short words such as <em>uh-hum</em> which play vital roles in conversation. We show that the open-source ASR systems produce substantially more errors than their commercial counterparts. However, we highlight how the cost and privacy advantages of open-source ASR may outweigh performance issues in certain applications. We discuss practical considerations for ASR deployment in research, concluding that present ASR technology cannot yet replace the trained transcriber. However, a high-quality initial transcript generated by ASR can provide a good starting point and may be further refined by manual correction. We make all ASR-generated transcripts available for future research in the supplementary material.</div></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"3 3","pages":"Article 100163"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"What automatic speech recognition can and cannot do for conversational speech transcription\",\"authors\":\"Sam O’Connor Russell , Iona Gessinger , Anna Krason , Gabriella Vigliocco , Naomi Harte\",\"doi\":\"10.1016/j.rmal.2024.100163\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Transcripts are vital in any research involving conversation. Most transcription is conducted manually, by experts; a process which can take many times longer than the conversation itself. Recently, there has been interest in using automatic speech recognition (ASR) to automate transcription, driven by the wide availability of ASR platforms such as OpenAI’s Whisper. However as studies typically focus on metrics such as the word error rate, there is a lack of detail about ASR transcript quality and the practicalities of ASR use in research. In this paper we review six state-of-the-art ASR technologies, three commercial and three open-source. We assess their capabilities as automatic transcription tools. We find that the commercial ASR systems mostly capture an accurate representation of what was said, and overlapping speech is handled well. 
Unlike prior work, we show that commercial ASR also preserves the location, but not necessarily the spelling of a large majority of non-lexical tokens: short words such as <em>uh-hum</em> which play vital roles in conversation. We show that the open-source ASR systems produce substantially more errors than their commercial counterparts. However, we highlight how the cost and privacy advantages of open-source ASR may outweigh performance issues in certain applications. We discuss practical considerations for ASR deployment in research, concluding that present ASR technology cannot yet replace the trained transcriber. However, a high-quality initial transcript generated by ASR can provide a good starting point and may be further refined by manual correction. We make all ASR-generated transcripts available for future research in the supplementary material.</div></div>\",\"PeriodicalId\":101075,\"journal\":{\"name\":\"Research Methods in Applied Linguistics\",\"volume\":\"3 3\",\"pages\":\"Article 100163\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-11-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Research Methods in Applied Linguistics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2772766124000697\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Research Methods in Applied Linguistics","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772766124000697","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
摘要
在任何涉及对话的研究中,转录都是至关重要的。大多数转录工作都是由专家手工完成的;这一过程所需的时间可能是对话本身的好几倍。最近,在自动语音识别(ASR)平台(如 OpenAI 的 Whisper)广泛应用的推动下,人们对使用自动语音识别(ASR)自动转录产生了兴趣。然而,由于研究通常侧重于单词错误率等指标,因此缺乏有关 ASR 转录质量和在研究中使用 ASR 的实用性的详细信息。在本文中,我们回顾了六种最先进的 ASR 技术,其中三种为商业技术,三种为开源技术。我们评估了它们作为自动转录工具的能力。我们发现,商业 ASR 系统大多能准确捕捉语音内容,并能很好地处理重叠语音。与之前的研究不同,我们发现商业 ASR 还能保留大部分非词汇标记的位置,但不一定能保留其拼写:如 uh-hum 等在对话中起重要作用的短词。我们的研究表明,开源 ASR 系统产生的错误远远多于商用系统。不过,我们强调了开源 ASR 在成本和隐私方面的优势在某些应用中可能会超过性能问题。我们讨论了在研究中部署 ASR 的实际考虑因素,认为目前的 ASR 技术还不能取代训练有素的誊写员。不过,ASR 生成的高质量初始誊本可以提供一个良好的起点,并可通过人工校正进一步完善。我们在补充材料中提供了所有 ASR 生成的记录誊本,供未来研究使用。
What automatic speech recognition can and cannot do for conversational speech transcription
Abstract
Transcripts are vital in any research involving conversation. Most transcription is conducted manually by experts, a process that can take many times longer than the conversation itself. Recently, there has been interest in using automatic speech recognition (ASR) to automate transcription, driven by the wide availability of ASR platforms such as OpenAI’s Whisper. However, as studies typically focus on metrics such as the word error rate, there is a lack of detail about ASR transcript quality and the practicalities of using ASR in research. In this paper we review six state-of-the-art ASR technologies, three commercial and three open-source, and assess their capabilities as automatic transcription tools. We find that the commercial ASR systems mostly capture an accurate representation of what was said and handle overlapping speech well. Unlike prior work, we show that commercial ASR also preserves the location, but not necessarily the spelling, of the large majority of non-lexical tokens: short words such as uh-hum that play vital roles in conversation. We show that the open-source ASR systems produce substantially more errors than their commercial counterparts. However, we highlight how the cost and privacy advantages of open-source ASR may outweigh performance issues in certain applications. We discuss practical considerations for deploying ASR in research, concluding that present ASR technology cannot yet replace the trained transcriber. However, a high-quality initial transcript generated by ASR can provide a good starting point that may be further refined by manual correction. We make all ASR-generated transcripts available for future research in the supplementary material.
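The workflow the abstract describes, an ASR-generated first pass that a trained transcriber then corrects, can be prototyped with the open-source Whisper package the authors mention. The sketch below is a minimal illustration, not the paper’s evaluation pipeline: the audio filename and reference transcript are placeholders, and the word error rate (substitutions, deletions, and insertions divided by the number of reference words) is computed with the jiwer package, which the study does not necessarily use.

```python
# Minimal sketch: draft transcript from open-source Whisper plus a WER check.
# Assumes the openai-whisper and jiwer packages are installed; "conversation.wav"
# and the reference string are placeholders, not material from the study.
import whisper
import jiwer

# Load an open-source Whisper checkpoint; larger checkpoints trade speed for accuracy.
model = whisper.load_model("base")

# word_timestamps=True keeps per-word start/end times, so the location of short
# tokens (back-channels such as "uh-hum") can be inspected, not just their spelling.
result = model.transcribe("conversation.wav", word_timestamps=True)

print(result["text"])  # the draft transcript as one string

for segment in result["segments"]:
    for word in segment.get("words", []):
        print(f"{word['start']:7.2f}-{word['end']:7.2f}  {word['word']}")

# Word error rate of the ASR draft against a manually corrected reference
# (substitutions + deletions + insertions, divided by reference length).
reference = "yeah uh-hum I think so"  # placeholder reference transcript
print("WER:", jiwer.wer(reference, result["text"]))
```

The word-level timestamps make it possible to check whether short non-lexical tokens are kept in place, the property the paper highlights, before the draft is handed over for manual correction.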