{"title":"有意义的沟通而非肤浅的拟人化促进人-自动化信任校准:人-自动化信任期望模型(HATEM)。","authors":"Owen B J Carter, Shayne Loft, Troy A W Visser","doi":"10.1177/00187208231218156","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>The objective was to demonstrate anthropomorphism needs to communicate contextually useful information to increase user confidence and accurately calibrate human trust in automation.</p><p><strong>Background: </strong>Anthropomorphism is believed to improve human-automation trust but supporting evidence remains equivocal. We test the Human-Automation Trust Expectation Model (HATEM) that predicts improvements to trust calibration and confidence in accepted advice arising from anthropomorphism will be weak unless it aids naturalistic communication of contextually useful information to facilitate prediction of automation failures.</p><p><strong>Method: </strong>Ninety-eight undergraduates used a submarine periscope simulator to classify ships, aided by the Ship Automated Modelling (SAM) system that was 50% reliable. A between-subjects 2 × 3 design compared SAM <i>appearance</i> (anthropomorphic avatar vs. camera eye) and voice <i>inflection</i> (monotone vs. meaningless vs. meaningful), with the <i>meaningful</i> inflections communicating contextually useful information about automated advice regarding certainty and uncertainty.</p><p><strong>Results: </strong><i>Avatar</i> SAM appearance was rated as more anthropomorphic than camera <i>eye</i>, and <i>meaningless</i> and <i>meaningful</i> inflections were both rated more anthropomorphic than <i>monotone</i>. However, for subjective trust, trust calibration, and confidence in accepting SAM advice, there was no evidence of anthropomorphic appearance having any impact, while there was decisive evidence that <i>meaningful</i> inflections yielded better outcomes on these trust measures than <i>monotone</i> and <i>meaningless</i> inflections.</p><p><strong>Conclusion: </strong>Anthropomorphism had negligible impact on human-automation trust unless its execution enhanced communication of relevant information that allowed participants to better calibrate expectations of automation performance.</p><p><strong>Application: </strong>Designers using anthropomorphism to calibrate trust need to consider what contextually useful information will be communicated via anthropomorphic features.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"2485-2502"},"PeriodicalIF":2.9000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11457490/pdf/","citationCount":"0","resultStr":"{\"title\":\"Meaningful Communication but not Superficial Anthropomorphism Facilitates Human-Automation Trust Calibration: The Human-Automation Trust Expectation Model (HATEM).\",\"authors\":\"Owen B J Carter, Shayne Loft, Troy A W Visser\",\"doi\":\"10.1177/00187208231218156\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objective: </strong>The objective was to demonstrate anthropomorphism needs to communicate contextually useful information to increase user confidence and accurately calibrate human trust in automation.</p><p><strong>Background: </strong>Anthropomorphism is believed to improve human-automation trust but supporting evidence remains equivocal. 
We test the Human-Automation Trust Expectation Model (HATEM) that predicts improvements to trust calibration and confidence in accepted advice arising from anthropomorphism will be weak unless it aids naturalistic communication of contextually useful information to facilitate prediction of automation failures.</p><p><strong>Method: </strong>Ninety-eight undergraduates used a submarine periscope simulator to classify ships, aided by the Ship Automated Modelling (SAM) system that was 50% reliable. A between-subjects 2 × 3 design compared SAM <i>appearance</i> (anthropomorphic avatar vs. camera eye) and voice <i>inflection</i> (monotone vs. meaningless vs. meaningful), with the <i>meaningful</i> inflections communicating contextually useful information about automated advice regarding certainty and uncertainty.</p><p><strong>Results: </strong><i>Avatar</i> SAM appearance was rated as more anthropomorphic than camera <i>eye</i>, and <i>meaningless</i> and <i>meaningful</i> inflections were both rated more anthropomorphic than <i>monotone</i>. However, for subjective trust, trust calibration, and confidence in accepting SAM advice, there was no evidence of anthropomorphic appearance having any impact, while there was decisive evidence that <i>meaningful</i> inflections yielded better outcomes on these trust measures than <i>monotone</i> and <i>meaningless</i> inflections.</p><p><strong>Conclusion: </strong>Anthropomorphism had negligible impact on human-automation trust unless its execution enhanced communication of relevant information that allowed participants to better calibrate expectations of automation performance.</p><p><strong>Application: </strong>Designers using anthropomorphism to calibrate trust need to consider what contextually useful information will be communicated via anthropomorphic features.</p>\",\"PeriodicalId\":56333,\"journal\":{\"name\":\"Human Factors\",\"volume\":\" \",\"pages\":\"2485-2502\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11457490/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Human Factors\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1177/00187208231218156\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/12/2 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"BEHAVIORAL SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Human Factors","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1177/00187208231218156","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/12/2 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"BEHAVIORAL SCIENCES","Score":null,"Total":0}
Meaningful Communication but not Superficial Anthropomorphism Facilitates Human-Automation Trust Calibration: The Human-Automation Trust Expectation Model (HATEM).
Objective: The objective was to demonstrate that anthropomorphism must communicate contextually useful information in order to increase user confidence and accurately calibrate human trust in automation.
Background: Anthropomorphism is believed to improve human-automation trust, but the supporting evidence remains equivocal. We test the Human-Automation Trust Expectation Model (HATEM), which predicts that any improvements to trust calibration and to confidence in accepted advice arising from anthropomorphism will be weak unless anthropomorphism aids the naturalistic communication of contextually useful information that helps users predict automation failures.
Method: Ninety-eight undergraduates used a submarine periscope simulator to classify ships, aided by a Ship Automated Modelling (SAM) system that was 50% reliable. A 2 × 3 between-subjects design compared SAM appearance (anthropomorphic avatar vs. camera eye) and voice inflection (monotone vs. meaningless vs. meaningful), with meaningful inflections conveying contextually useful information about the certainty or uncertainty of the automated advice.
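The abstract does not specify how trust calibration was scored. As a rough, hypothetical sketch of one common operationalization (discriminating correct from incorrect advice), the Python below simulates a 50%-reliable advisor whose "meaningful" certainty cue lets a simulated participant accept good advice and reject bad advice. All parameter values (cue_informativeness, acceptance rates, trial count) are illustrative assumptions, not values from the study.

```python
import random

def simulate_participant(n_trials=40, reliability=0.5, cue_informativeness=0.8):
    """Simulate one participant advised by a 50%-reliable automated aid.

    cue_informativeness is the (hypothetical) probability that a
    'meaningful' voice inflection correctly signals whether the advice
    is right, letting the participant modulate acceptance trial by trial.
    """
    hits = {"correct": 0, "wrong": 0}
    counts = {"correct": 0, "wrong": 0}
    for _ in range(n_trials):
        advice_correct = random.random() < reliability
        # The certainty cue matches the advice's actual correctness with
        # probability cue_informativeness; otherwise it misleads.
        cue_certain = (advice_correct if random.random() < cue_informativeness
                       else not advice_correct)
        # Accept advice more readily when the cue sounds certain.
        accept = random.random() < (0.9 if cue_certain else 0.3)
        key = "correct" if advice_correct else "wrong"
        counts[key] += 1
        hits[key] += int(accept)
    # Calibration score: acceptance rate for correct advice minus the
    # acceptance rate for incorrect advice (higher = participant better
    # anticipates automation failures).
    rate = lambda k: hits[k] / max(counts[k], 1)
    return rate("correct") - rate("wrong")

scores = [simulate_participant() for _ in range(98)]  # 98 participants, as in the study
print(f"mean calibration score: {sum(scores) / len(scores):.2f}")
```

Under this toy model, an uninformative cue (cue_informativeness = 0.5, as in the monotone and meaningless conditions) drives the expected calibration score toward zero, which mirrors the prediction that anthropomorphic dressing alone should not help users anticipate failures.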
Results: The avatar appearance was rated as more anthropomorphic than the camera eye, and both meaningless and meaningful inflections were rated as more anthropomorphic than monotone. However, for subjective trust, trust calibration, and confidence in accepting SAM advice, there was no evidence that anthropomorphic appearance had any impact, whereas there was decisive evidence that meaningful inflections yielded better outcomes on these trust measures than monotone or meaningless inflections.
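The contrast between "no evidence" and "decisive evidence" suggests a Bayesian analysis (an inference from the wording; the abstract does not name the statistical method). Under that reading, evidence strength is summarized by a Bayes factor comparing the alternative and null hypotheses, with "decisive" conventionally denoting a Bayes factor above 100 on Jeffreys' scale:

$$ BF_{10} = \frac{p(\mathrm{data} \mid H_1)}{p(\mathrm{data} \mid H_0)}, \qquad BF_{10} > 100 \ \Rightarrow\ \text{"decisive" evidence for } H_1 $$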
Conclusion: Anthropomorphism had a negligible impact on human-automation trust unless its implementation enhanced the communication of relevant information that allowed participants to better calibrate their expectations of automation performance.
Application: Designers using anthropomorphism to calibrate trust need to consider what contextually useful information will be communicated via anthropomorphic features.
Journal Introduction:
Human Factors: The Journal of the Human Factors and Ergonomics Society publishes peer-reviewed scientific studies in human factors/ergonomics that present theoretical and practical advances concerning the relationship between people and technologies, tools, environments, and systems. Papers published in Human Factors leverage fundamental knowledge of human capabilities and limitations – and the basic understanding of cognitive, physical, behavioral, physiological, social, developmental, affective, and motivational aspects of human performance – to yield design principles; enhance training, selection, and communication; and ultimately improve human-system interfaces and sociotechnical systems that lead to safer and more effective outcomes.