Self-disclosure to AI: People provide personal information to AI and humans equivalently

Elizabeth R. Merwin, Allen C. Hagen, Joseph R. Keebler, Chad Forbes

Computers in Human Behavior: Artificial Humans, Vol. 5, Article 100180 (published 2025-07-09)
DOI: 10.1016/j.chbah.2025.100180
https://www.sciencedirect.com/science/article/pii/S2949882125000647
Citations: 0
Abstract
As Artificial Intelligence (AI) increasingly emerges as a tool in therapeutic settings, understanding individuals' willingness to disclose personal information to AI versus humans is critical. This study examined how participants chose between self-disclosure-based and fact-based statements when their responses were thought to be analyzed by an AI, analyzed by a human researcher, or kept private. Participants completed forced-choice trials in which they selected either a self-disclosure-based or a fact-based statement for one of the three agent conditions. Results showed that participants were significantly more likely to select self-disclosure statements over fact-based statements. Self-disclosure choice rates were similar for the AI and the human researcher, but significantly lower when responses were kept private. Multiple regression analyses revealed that individuals who scored higher on the negative-attitudes-toward-AI scale were less likely to choose self-disclosure statements across the three agent conditions. Overall, individuals were just as likely to self-disclose to an AI as to a human researcher, and more likely to choose either agent over keeping self-disclosure information private. In addition, personality traits and attitudes toward AI significantly influenced disclosure choices. These findings provide insight into how individual differences shape willingness to self-disclose in human-AI interactions and offer a foundation for exploring the feasibility of AI as a clinical and social tool. Future research should expand on these results to further understand self-disclosure behaviors and evaluate AI's role in therapeutic settings.