Virtual conversation with a real talking head
O. Gambino, A. Augello, A. Caronia, G. Pilato, R. Pirrone, Salvatore Gaglio
2008 Conference on Human System Interactions, published 2008-05-25. DOI: 10.1109/HSI.2008.4581446
Citations: 8
Abstract
A talking head is a system that drives an animated face model synchronized with a speech synthesis module. It serves as the presentation layer of a conversational agent: when the user types a query, the agent produces a textual answer, which is converted into facial movements of a 3D face model whose lip and tongue movements are synchronized with the sound of the synthetic voice. A client-server paradigm is adopted for the Web infrastructure: animation and synchronization are delegated to the client, so that the server can satisfy multiple client requests, while the chatbot, the digital signal processing, and the natural language processing are provided by the server.
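To make the client-server split described above more concrete, here is a minimal sketch, not the authors' implementation: a hypothetical server-side handler pairs the chatbot's textual answer with a coarse viseme (mouth-shape) timeline, and a client-side routine consumes that timeline to drive the face model in sync with the synthesized speech. All names (Viseme, chatbot_answer, handle_request, etc.) are illustrative assumptions.

```python
# Illustrative sketch of the server/client division of labor, assuming the
# server returns the answer text plus a viseme timeline and the client only
# animates. Not the paper's actual code.

from dataclasses import dataclass
from typing import List


@dataclass
class Viseme:
    """A mouth shape to display between start and end times (seconds)."""
    shape: str
    start: float
    end: float


def chatbot_answer(query: str) -> str:
    """Stand-in for the server-side chatbot/NLP that produces a textual answer."""
    return "Hello, I am the talking head." if "hello" in query.lower() else "I am not sure."


def text_to_visemes(answer: str, seconds_per_word: float = 0.4) -> List[Viseme]:
    """Stand-in for the server-side DSP step: derive a coarse viseme timeline
    from the answer text (a real system would align phonemes produced by the
    speech synthesizer)."""
    timeline, t = [], 0.0
    for word in answer.split():
        timeline.append(Viseme(shape=word[0].upper(), start=t, end=t + seconds_per_word))
        t += seconds_per_word
    return timeline


def handle_request(query: str) -> dict:
    """Server side: everything the client needs to render one reply."""
    answer = chatbot_answer(query)
    return {"text": answer, "visemes": text_to_visemes(answer)}


def animate(reply: dict) -> None:
    """Client side: drive the 3D face model frame by frame (printed here)."""
    for v in reply["visemes"]:
        print(f"{v.start:4.1f}s-{v.end:4.1f}s  mouth shape '{v.shape}'")


if __name__ == "__main__":
    animate(handle_request("Hello there"))
```

The design point illustrated is that the server does the language and signal processing once per query, while each client performs its own rendering and lip synchronization, which is what allows the server to serve many clients concurrently.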