Testing the capability of generative artificial intelligence for parent and caregiver information seeking

IF 1.7 · CAS Zone 3 (Sociology) · JCR Q2 (Family Studies)
YaeBin Kim, Silvia L. Vilches, Sidney Shapiro, Anne Clarkson
Family Relations, 74(3), 1266–1284. Published 2025-03-19. DOI: 10.1111/fare.13167
https://onlinelibrary.wiley.com/doi/10.1111/fare.13167
Citations: 0

Abstract

Objective

This study explored the quality of generative artificial intelligence (AI) responses to common parenting questions across diverse sources of digitally available information.

Background

The recent rise of generative AI, such as ChatGPT and other large language models (LLMs), which generate answers by synthesizing publicly available information, raises questions about the quality of digital responses and the effect on parenting and outcomes for children.

Method

We hypothesized that querying a professionally prepared parenting newsletter would have higher quality responses than an LLM. We explored this by running 11 tests with five common parenting and caregiving topics about young children across controlled and open data sources. We analyzed three Cs (correctness, clarity, and connection), reliability (artificiality, credibility, and citation quality), and readability to assess the quality of LLM responses.

Results

ChatGPT largely provided correct and clear answers, although citations were frequently absent or inaccurate. LLM responses often lacked emphasis on parent–child connection and developmental context, and reading level difficulty increased steeply.

Conclusion

Generative AI offers reasonably good answers to general parenting questions. However, parents and caregivers need to contextualize the information.

Implications

Topical experts may help meet nuanced parenting needs with cultural relevance and plain language, but AI can be useful for summarizing open-access content.

Source journal: Family Relations
CiteScore: 3.40
Self-citation rate: 13.60%
Annual article count: 164
Journal description: A premier, applied journal of family studies, Family Relations is mandatory reading for family scholars and all professionals who work with families, including family practitioners, educators, marriage and family therapists, researchers, and social policy specialists. The journal's content emphasizes family research with implications for intervention, education, and public policy, always publishing original, innovative, and interdisciplinary works with specific recommendations for practice.