{"title":"并非越多越好:人工智能生成的信心和解释对人机交互的影响。","authors":"Shihong Ling, Yutong Zhang, Na Du","doi":"10.1177/00187208241234810","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>The study aimed to enhance transparency in autonomous systems by automatically generating and visualizing confidence and explanations and assessing their impacts on performance, trust, preference, and eye-tracking behaviors in human-automation interaction.</p><p><strong>Background: </strong>System transparency is vital to maintaining appropriate levels of trust and mission success. Previous studies presented mixed results regarding the impact of displaying likelihood information and explanations, and often relied on hand-created information, limiting scalability and failing to address real-world dynamics.</p><p><strong>Method: </strong>We conducted a dual-task experiment involving 42 university students who operated a simulated surveillance testbed with assistance from intelligent detectors. The study used a 2 (confidence visualization: yes vs. no) × 3 (visual explanations: none, bounding boxes, bounding boxes and keypoints) mixed design. Task performance, human trust, preference for intelligent detectors, and eye-tracking behaviors were evaluated.</p><p><strong>Results: </strong>Visual explanations using bounding boxes and keypoints improved detection task performance when confidence was not displayed. Meanwhile, visual explanations enhanced trust and preference for the intelligent detector, regardless of the explanation type. Confidence visualization did not influence human trust in and preference for the intelligent detector. Moreover, both visual information slowed saccade velocities.</p><p><strong>Conclusion: </strong>The study demonstrated that visual explanations could improve performance, trust, and preference in human-automation interaction without confidence visualization partially by changing the search strategies. However, excessive information might cause adverse effects.</p><p><strong>Application: </strong>These findings provide guidance for the design of transparent automation, emphasizing the importance of context-appropriate and user-centered explanations to foster effective human-machine collaboration.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"2606-2620"},"PeriodicalIF":2.9000,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"More Is Not Always Better: Impacts of AI-Generated Confidence and Explanations in Human-Automation Interaction.\",\"authors\":\"Shihong Ling, Yutong Zhang, Na Du\",\"doi\":\"10.1177/00187208241234810\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objective: </strong>The study aimed to enhance transparency in autonomous systems by automatically generating and visualizing confidence and explanations and assessing their impacts on performance, trust, preference, and eye-tracking behaviors in human-automation interaction.</p><p><strong>Background: </strong>System transparency is vital to maintaining appropriate levels of trust and mission success. 
Previous studies presented mixed results regarding the impact of displaying likelihood information and explanations, and often relied on hand-created information, limiting scalability and failing to address real-world dynamics.</p><p><strong>Method: </strong>We conducted a dual-task experiment involving 42 university students who operated a simulated surveillance testbed with assistance from intelligent detectors. The study used a 2 (confidence visualization: yes vs. no) × 3 (visual explanations: none, bounding boxes, bounding boxes and keypoints) mixed design. Task performance, human trust, preference for intelligent detectors, and eye-tracking behaviors were evaluated.</p><p><strong>Results: </strong>Visual explanations using bounding boxes and keypoints improved detection task performance when confidence was not displayed. Meanwhile, visual explanations enhanced trust and preference for the intelligent detector, regardless of the explanation type. Confidence visualization did not influence human trust in and preference for the intelligent detector. Moreover, both visual information slowed saccade velocities.</p><p><strong>Conclusion: </strong>The study demonstrated that visual explanations could improve performance, trust, and preference in human-automation interaction without confidence visualization partially by changing the search strategies. However, excessive information might cause adverse effects.</p><p><strong>Application: </strong>These findings provide guidance for the design of transparent automation, emphasizing the importance of context-appropriate and user-centered explanations to foster effective human-machine collaboration.</p>\",\"PeriodicalId\":56333,\"journal\":{\"name\":\"Human Factors\",\"volume\":\" \",\"pages\":\"2606-2620\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Human Factors\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1177/00187208241234810\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/3/4 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"BEHAVIORAL SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Human Factors","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1177/00187208241234810","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/3/4 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"BEHAVIORAL SCIENCES","Score":null,"Total":0}
More Is Not Always Better: Impacts of AI-Generated Confidence and Explanations in Human-Automation Interaction.
Objective: The study aimed to enhance transparency in autonomous systems by automatically generating and visualizing confidence and explanations, and by assessing their impacts on performance, trust, preference, and eye-tracking behaviors in human-automation interaction.
Background: System transparency is vital to maintaining appropriate levels of trust and mission success. Previous studies presented mixed results regarding the impact of displaying likelihood information and explanations, and often relied on hand-created information, limiting scalability and failing to address real-world dynamics.
Method: We conducted a dual-task experiment involving 42 university students who operated a simulated surveillance testbed with assistance from intelligent detectors. The study used a 2 (confidence visualization: yes vs. no) × 3 (visual explanations: none, bounding boxes, bounding boxes and keypoints) mixed design. Task performance, human trust, preference for intelligent detectors, and eye-tracking behaviors were evaluated.
Results: Visual explanations using bounding boxes and keypoints improved detection task performance when confidence was not displayed. Meanwhile, visual explanations enhanced trust in and preference for the intelligent detector, regardless of the explanation type. Confidence visualization did not influence human trust in or preference for the intelligent detector. Moreover, both types of visual information slowed saccade velocities.
Conclusion: The study demonstrated that visual explanations could improve performance, trust, and preference in human-automation interaction when confidence was not visualized, partly by changing search strategies. However, excessive information might cause adverse effects.
Application: These findings provide guidance for the design of transparent automation, emphasizing the importance of context-appropriate and user-centered explanations to foster effective human-machine collaboration.
Journal description:
Human Factors: The Journal of the Human Factors and Ergonomics Society publishes peer-reviewed scientific studies in human factors/ergonomics that present theoretical and practical advances concerning the relationship between people and technologies, tools, environments, and systems. Papers published in Human Factors leverage fundamental knowledge of human capabilities and limitations – and the basic understanding of cognitive, physical, behavioral, physiological, social, developmental, affective, and motivational aspects of human performance – to yield design principles; enhance training, selection, and communication; and ultimately improve human-system interfaces and sociotechnical systems that lead to safer and more effective outcomes.