Title: The frequency of facial muscles engaged in expressing emotions in people with visual disabilities via cloud-based video communication
Author: H. N. Kim
Journal: Theoretical Issues in Ergonomics Science (JCR Q4, Ergonomics; Impact Factor 1.4)
DOI: 10.1080/1463922X.2022.2081374 (https://doi.org/10.1080/1463922X.2022.2081374)
Publication date: 2022-05-26
Publication type: Journal Article
Citations: 0
Abstract
As technology advances quickly and various assistive technology applications are introduced to users with visual disabilities, many people with visual disabilities use smartphones and cloud-based video communication platforms such as Zoom. This study aims to advance knowledge of how people with visual disabilities visualize voluntary emotions via facial expressions, especially in online contexts. A convenience sample of 28 participants with visual disabilities was observed to examine how they show voluntary facial expressions via Zoom. The facial expressions were coded using Facial Action Coding System (FACS) Action Units (AUs). Individual differences were found in the frequency of facial action units, influenced by the participants' visual acuity levels (i.e., visual impairment and blindness) and emotion characteristics (i.e., positive/negative valence and high/low arousal levels). The research findings are anticipated to be widely beneficial to researchers and professionals working on facial expressions of emotion, such as those developing facial recognition systems and emotion-sensing technologies.

Relevance to human factors/ergonomics theory

This study advanced knowledge of facial muscle engagement while people with visual disabilities visualize their emotions via facial expressions, especially in online contexts. The advanced understanding contributes to a fundamental knowledge foundation, ultimately applicable to universal designs of emotion technology that read users' facial expressions to customize services, adequately accommodating users' emotional needs (e.g., ambient intelligence) regardless of visual ability/disability.
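The analysis described above — coding each expression into FACS Action Units and comparing AU frequencies across visual-acuity groups and emotion categories — can be sketched as follows. This is a minimal illustration only: the AU codings, group labels, and counts below are invented for demonstration and do not reflect the study's actual data or results.

```python
from collections import Counter

# Hypothetical FACS codings: each record is one observed expression,
# tagged with the participant's visual-acuity group and the target
# emotion's valence. AU numbers follow FACS conventions (e.g., AU6 =
# cheek raiser, AU12 = lip-corner puller), but the data is invented.
observations = [
    {"group": "visual impairment", "valence": "positive", "aus": [6, 12]},
    {"group": "visual impairment", "valence": "negative", "aus": [4, 15]},
    {"group": "blindness",         "valence": "positive", "aus": [12]},
    {"group": "blindness",         "valence": "negative", "aus": [4]},
    {"group": "visual impairment", "valence": "positive", "aus": [6, 12, 25]},
]

def au_frequencies(records, group=None, valence=None):
    """Count how often each Action Unit occurs, optionally filtered
    by visual-acuity group and/or emotion valence."""
    counts = Counter()
    for r in records:
        if group is not None and r["group"] != group:
            continue
        if valence is not None and r["valence"] != valence:
            continue
        counts.update(r["aus"])
    return counts

# Compare AU frequencies between groups for positive-valence emotions
print(au_frequencies(observations, group="visual impairment", valence="positive"))
print(au_frequencies(observations, group="blindness", valence="positive"))
```

Frequency tables like these, computed per group and per emotion category, are one straightforward way to surface the individual differences the abstract reports.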