{"title":"On adaptive acquisition of spoken language","authors":"A. Gorin, S. Levinson, L. G. Miller, A. Gertner","doi":"10.1109/NNSP.1991.239499","DOIUrl":null,"url":null,"abstract":"At present, automatic speech recognition technology is based upon constructing models of the various levels of linguistic structure assumed to compose spoken language. These models are either constructed manually or automatically trained by example. A major impediment is the cost, or even the feasibility, of producing models of sufficient fidelity to enable the desired level of performance. The proposed alternative is to build a device capable of acquiring the necessary linguistic skills during the course of performing its task. The authors provide a progress report on their work in this direction, describing some principles and mechanisms upon which such a device might be based, and recounting several rudimentary experiments evaluating their utility. The basic principles and mechanisms underlying this research program are briefly reviewed. The authors have been investigating the application of those ideas to devices with spoken input, and which are capable of larger and more complex sets of actions. The authors propose some corollaries to those basic principles, thereby motivating extensions of earlier experimental mechanisms to these more complex devices. They also briefly describe these experimental systems and observe how they demonstrate the utility of their ideas.<<ETX>>","PeriodicalId":354832,"journal":{"name":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","volume":"111 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1991-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NNSP.1991.239499","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
At present, automatic speech recognition technology is based upon constructing models of the various levels of linguistic structure assumed to compose spoken language. These models are either constructed manually or automatically trained by example. A major impediment is the cost, or even the feasibility, of producing models of sufficient fidelity to enable the desired level of performance. The proposed alternative is to build a device capable of acquiring the necessary linguistic skills during the course of performing its task. The authors provide a progress report on their work in this direction, describing some principles and mechanisms upon which such a device might be based, and recounting several rudimentary experiments evaluating their utility. The basic principles and mechanisms underlying this research program are briefly reviewed. The authors have been investigating the application of those ideas to devices with spoken input that are capable of larger and more complex sets of actions. The authors propose some corollaries to those basic principles, thereby motivating extensions of earlier experimental mechanisms to these more complex devices. They also briefly describe these experimental systems and observe how they demonstrate the utility of their ideas.