Facilitative effects of communicative gaze and speech in human-robot cooperation
Jean-David Boucher, J. Ventre-Dominey, Peter Ford Dominey, Sacha Fagel, G. Bailly
AFFINE '10, 29 October 2010. DOI: 10.1145/1877826.1877845
Human interaction in natural environments relies on a variety of perceptual cues to guide and stabilize the interaction. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should be able to manipulate and exploit these communicative cues in cooperation with their human partners. In the current research, we identify a set of principal communicative speech and gaze cues in human-human interaction, and then formalize and implement these cues in a humanoid robot. The objective of the work is to render the humanoid robot more human-like in its ability to communicate with humans. The first phase of this research, described here, is to provide the robot with a generative capability, that is, the ability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. We demonstrate the pertinence of these cues with statistical measures of human action times in a cooperative task: gaze significantly facilitates cooperation, as measured by human response times.
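The abstract does not specify the statistical procedure behind these measures. As a purely illustrative sketch (hypothetical response-time values and an assumed independent-samples t-test, not the authors' actual analysis), the facilitation effect could be checked by comparing human response times between a condition where the robot produces gaze cues and one where it does not:

```python
# Illustrative sketch only: data values and the choice of an
# independent-samples t-test are assumptions, not taken from the paper.
import numpy as np
from scipy import stats

# Hypothetical human response times (seconds) in a cooperative task
rt_with_gaze = np.array([1.8, 2.1, 1.9, 2.0, 1.7, 2.2, 1.9, 2.0])
rt_without_gaze = np.array([2.5, 2.7, 2.4, 2.6, 2.8, 2.5, 2.9, 2.6])

# Compare the two conditions
t_stat, p_value = stats.ttest_ind(rt_with_gaze, rt_without_gaze)
print(f"mean RT with gaze:    {rt_with_gaze.mean():.2f} s")
print(f"mean RT without gaze: {rt_without_gaze.mean():.2f} s")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A significantly lower mean response time in the gaze condition would correspond to the facilitation effect the abstract reports.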