{"title":"Co-Design of a Health Screening Program Fact Sheet by People Experiencing Homelessness and ChatGPT: Focus Group Study.","authors":"Nóra Radó, Orsolya Németh, Sándor Békási","doi":"10.2196/68316","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>People experiencing homelessness have worse oral health outcomes and a notable health informational asymmetry compared to the general population. Screening programs present a viable option for this population; however, barriers to access, such as lower levels of health literacy, lack of information, and mistrust, narrow their chances to participate in such programs.</p><p><strong>Objective: </strong>The aim of this study is to investigate the applicability of generative artificial intelligence (AI) in designing a homeless health screening program fact sheet with experts by experience using co-design principles.</p><p><strong>Methods: </strong>Six fact sheet text variants were created by the open-access version of ChatGPT 3.5 for an oral cancer screening program targeting people experiencing homelessness in Budapest, Hungary. Clients of homeless social services (N=23) were invited to a short questionnaire survey and 3 semistructured focus group discussions between May and July 2024. General opinions regarding generative AI technology and direct feedback on the text variants were obtained. Additionally, a standardized readability assessment of the text variants was completed via the Sydney Health Literacy Lab Editor.</p><p><strong>Results: </strong>Almost two-thirds of participants (17/23) stated that they had previously heard about AI; however, their self-assessment regarding the extent of their knowledge resulted in an average of 2.38 (n=16) on a 5-point Likert scale. During the first focus group discussion, all 6 variants received a high score (between 4.63 and 4.92 on a 5-point Likert scale). 
In the next sessions, participants remained positive when the pool was narrowed to 4 versions, although they scored the texts lower. During open discussions, text variants were considered understandable, while difficulties with medical expressions, lengthiness of sentences, and references to a stereotypical homeless subgroup (rough sleepers) were also reported. The health literacy editor showed that most AI-generated text variants were difficult to read and too complex for the target group.</p><p><strong>Conclusions: </strong>The co-design process revealed that focus group participants actively wanted to shape the fact sheet drafts. They shared their insights on how to make the text variants more appealing for the target audience. Moreover, the involvement of generative AI technology revealed that the participants have heard about the concept of AI and text generation as a potential function, and they have not rejected its use in health care settings.</p>","PeriodicalId":14841,"journal":{"name":"JMIR Formative Research","volume":"9 ","pages":"e68316"},"PeriodicalIF":2.0000,"publicationDate":"2025-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12248257/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR Formative Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/68316","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Citations: 0
Abstract
Background: People experiencing homelessness have worse oral health outcomes and face a notable health information asymmetry compared to the general population. Screening programs present a viable option for this population; however, barriers to access, such as lower levels of health literacy, lack of information, and mistrust, narrow their chances of participating in such programs.
Objective: The aim of this study is to investigate the applicability of generative artificial intelligence (AI) in designing a homeless health screening program fact sheet with experts by experience using co-design principles.
Methods: Six fact sheet text variants were created with the freely accessible version of ChatGPT 3.5 for an oral cancer screening program targeting people experiencing homelessness in Budapest, Hungary. Clients of homeless social services (N=23) were invited to complete a short questionnaire survey and to take part in 3 semistructured focus group discussions between May and July 2024. General opinions regarding generative AI technology and direct feedback on the text variants were obtained. Additionally, a standardized readability assessment of the text variants was completed via the Sydney Health Literacy Lab Editor.
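The Sydney Health Literacy Lab Editor's internal scoring is not described here, but readability checks of this kind typically rest on formulas combining sentence length and word complexity. As a minimal illustrative sketch (not the SHeLL Editor's actual method), the widely used Flesch-Kincaid grade level can be computed with a naive syllable counter; the function names and example texts below are hypothetical:

```python
import re

def count_syllables(word: str) -> int:
    """Naively count syllables as runs of consecutive vowels (min. 1)."""
    vowel_groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(vowel_groups))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Hypothetical fact sheet snippets: plain wording vs. medical jargon.
simple = "We check your mouth. It is free. It does not hurt."
complex_ = ("Participation in the oropharyngeal malignancy screening "
            "initiative facilitates early identification.")
```

On such a metric, the jargon-heavy variant scores a far higher (harder) grade level than the plain one, mirroring the study's finding that most AI-generated variants were too complex for the target group.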
Results: Almost two-thirds of participants (17/23) stated that they had previously heard about AI; however, their self-assessment regarding the extent of their knowledge resulted in an average of 2.38 (n=16) on a 5-point Likert scale. During the first focus group discussion, all 6 variants received a high score (between 4.63 and 4.92 on a 5-point Likert scale). In the next sessions, participants remained positive when the pool was narrowed to 4 versions, although they scored the texts lower. During open discussions, text variants were considered understandable, while difficulties with medical expressions, lengthiness of sentences, and references to a stereotypical homeless subgroup (rough sleepers) were also reported. The health literacy editor showed that most AI-generated text variants were difficult to read and too complex for the target group.
Conclusions: The co-design process revealed that focus group participants actively wanted to shape the fact sheet drafts. They shared their insights on how to make the text variants more appealing to the target audience. Moreover, the involvement of generative AI technology showed that participants were aware of the concept of AI and of text generation as one of its potential functions, and that they did not reject its use in health care settings.