The CALO meeting speech recognition and understanding system

Gökhan Tür, A. Stolcke, L. L. Voss, J. Dowding, Benoit Favre, R. Fernández, Matthew Frampton, Michael W. Frandsen, Clint Frederickson, M. Graciarena, Dilek Z. Hakkani-Tür, Donald Kintzing, Kyle Leveque, Shane Mason, J. Niekrasz, S. Peters, Matthew Purver, K. Riedhammer, Elizabeth Shriberg, Jing Tien, D. Vergyri, Fan Yang

2008 IEEE Spoken Language Technology Workshop, December 2008. DOI: 10.1109/SLT.2008.4777842

Abstract: The CALO meeting assistant provides for distributed meeting capture, annotation, automatic transcription and semantic analysis of multiparty meetings, and is part of the larger CALO personal assistant system. This paper summarizes the CALO-MA architecture and its speech recognition and understanding components, which include real-time and offline speech transcription, dialog act segmentation and tagging, question-answer pair identification, action item recognition, decision extraction, and summarization.