De-Identified Feature-based Visualization of Facial Expression for Enhanced Text Chat

Shuo-Ping Wang, Mei-Ling Chen, Hao-Chuan Wang, Chien-Tung Lai, A. Huang

Proceedings. Graphics Interface (Conference), June 2017, pages 199-207. DOI: 10.20380/GI2017.25
The lack of visibility in text-based chat can hinder communication, especially when nonverbal cues are instrumental to producing and understanding messages. However, communicating rich nonverbal cues such as facial expressions may be technologically costly (e.g., the bandwidth demands of video streaming) and socially less desirable (e.g., video discloses other personal and contextual information). We consider how to balance this tension by helping people convey facial expressions without compromising the benefits of invisibility in text communication. We present KinChat, an enhanced text chat tool that integrates motion sensing and 2D graphical visualization as a technique for conveying key facial-feature information during text conversations. We conducted two studies to examine how KinChat influences the de-identification and awareness of facial cues compared to techniques using raw and blur-processed video, as well as its impact on real-time text chat. We show that feature-based visualization of facial expression can preserve awareness of facial cues and non-identifiability at the same time, leading to better understanding and reduced anxiety.
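The core idea — reducing a face to a few motion-sensed features and rendering them as an abstract 2D cue rather than video — can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's actual pipeline: the landmark names, thresholds, and glyph rendering are all assumptions standing in for whatever the Kinect-based implementation actually computes.

```python
# Hypothetical sketch: de-identified, feature-based expression cues.
# A motion sensor (e.g., Kinect) would supply 2D facial landmark points;
# only coarse derived features reach the chat UI, never the face image.

def expression_features(landmarks):
    """Reduce raw landmarks to two de-identified scalar cues.

    landmarks: dict of (x, y) points keyed by 'mouth_top', 'mouth_bottom',
    'left_brow', 'right_brow', 'left_eye', 'right_eye'. These names are
    illustrative, not the paper's actual feature schema.
    """
    # Vertical mouth gap as a proxy for mouth openness (smile/speech).
    mouth_open = landmarks['mouth_bottom'][1] - landmarks['mouth_top'][1]
    # Mean eye-to-brow distance as a proxy for eyebrow raising (surprise).
    brow_raise = (
        (landmarks['left_eye'][1] - landmarks['left_brow'][1])
        + (landmarks['right_eye'][1] - landmarks['right_brow'][1])
    ) / 2.0
    return {'mouth_open': mouth_open, 'brow_raise': brow_raise}

def render_cue(features, open_thresh=5.0, raise_thresh=8.0):
    """Map scalar features to a coarse text glyph for the chat window.

    Thresholds are arbitrary placeholders; a real system would calibrate
    them per user and render a 2D graphic instead of characters.
    """
    mouth = 'O' if features['mouth_open'] > open_thresh else '-'
    brows = '^^' if features['brow_raise'] > raise_thresh else '--'
    return f'[{brows} {mouth}]'

# Example frame: open mouth, raised brows.
sample = {
    'mouth_top': (50, 70), 'mouth_bottom': (50, 78),
    'left_brow': (40, 30), 'right_brow': (60, 30),
    'left_eye': (40, 40), 'right_eye': (60, 40),
}
print(render_cue(expression_features(sample)))  # -> [^^ O]
```

Because only a handful of scalar features leave the sender's machine, the recipient gains awareness of expression changes while the sender's identity and surroundings stay undisclosed — the trade-off the two studies evaluate against raw and blurred video.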