Increased accessibility to nonverbal communication through facial and expression recognition technologies for blind/visually impaired subjects

D. Astler, Harrison Chau, Kailin Hsu, A. Hua, A. Kannan, Lydia Lei, Melissa Nathanson, Esmaeel Paryavi, Michelle Rosen, Hayato Unno, Carol Wang, Khadija Zaidi, Xuemin Zhang, Cha-Min Tang

Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility, October 24, 2011. DOI: 10.1145/2049536.2049596
Conversation between two individuals requires verbal dialogue; however, the majority of human communication consists of non-verbal cues such as gestures and facial expressions. Blind individuals are therefore hindered in their ability to interact. To address this, we are building a computer vision system with facial recognition and expression recognition algorithms to relay nonverbal messages to a blind user. The device will communicate the identities and facial expressions of communication partners in real time. To ensure that the device will be useful to the blind community, we conducted surveys and interviews, and we are working with subjects to test prototypes of the device. This paper describes the algorithms and design concepts incorporated in the device and provides a commentary on early survey and interview results. A corresponding poster with demonstration stills is exhibited at this conference.
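This record does not include the paper's implementation details. As an illustration of the pipeline the abstract describes (detect a face from a camera, identify the person, classify their expression, and relay the result aloud), the following is a minimal sketch in Python using OpenCV's Haar-cascade face detector and the pyttsx3 text-to-speech library; `identify_person` and `classify_expression` are hypothetical placeholders, not the authors' algorithms.

```python
# Minimal sketch, NOT the authors' implementation: detect faces in a camera
# stream and speak a description of each face to the user.
# Requires opencv-python and pyttsx3.
import cv2
import pyttsx3


def identify_person(face_img):
    """Placeholder for a face-recognition model (hypothetical)."""
    return "an unknown person"


def classify_expression(face_img):
    """Placeholder for an expression-recognition model (hypothetical)."""
    return "a neutral expression"


def main():
    # Haar-cascade face detector shipped with OpenCV.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    tts = pyttsx3.init()       # offline text-to-speech for the spoken output
    cap = cv2.VideoCapture(0)  # default camera
    last_message = None

    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
            for (x, y, w, h) in faces:
                face = gray[y:y + h, x:x + w]
                message = (f"{identify_person(face)}, showing "
                           f"{classify_expression(face)}")
                # Only announce when the description changes, so the same
                # message is not repeated every frame.
                if message != last_message:
                    tts.say(message)
                    tts.runAndWait()
                    last_message = message
    except KeyboardInterrupt:
        pass  # stop with Ctrl+C
    finally:
        cap.release()


if __name__ == "__main__":
    main()
```

In a real device the placeholder classifiers would be replaced by trained face-identification and expression-recognition models, and the spoken output could be swapped for another nonvisual channel such as haptic feedback.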