Intelligent animated agents for interactive language training
R. Cole, T. Carmell, Pam Connors, Michael W. Macon, J. Wouters, Jacques de Villiers, Alice Tarachow, D. Massaro, Michael M. Cohen, J. Beskow, Jie Yang, U. Meier, A. Waibel, P. Stone, Alice Davis, Chris Soland, George E. Fortier
ACM SIGCAPH Computers and the Physically Handicapped, June 1998
DOI: 10.1145/288076.288077
Citations: 48
Abstract
This report describes a three-year project, now eight months old, to develop interactive learning tools for language training with profoundly deaf children. The tools combine four key technologies: speech recognition, developed at the Oregon Graduate Institute; speech synthesis, developed at the University of Edinburgh and modified at OGI; facial animation, developed at the University of California, Santa Cruz; and face tracking and speech reading, developed at Carnegie Mellon University. These technologies are being combined to create an intelligent conversational agent: a three-dimensional face that produces and understands auditory and visual speech. The agent has been incorporated into the CSLU Toolkit, a software environment for developing and researching spoken language systems. We describe our experiences in bringing interactive learning tools to classrooms at the Tucker-Maxon Oral School in Portland, Oregon, and the technological advances that are required for this project to succeed.