Latest Articles in Perspectives on Psychological Science

Psychological AI: Designing Algorithms Informed by Human Psychology.
IF 10.5 · CAS Tier 1 · Psychology
Perspectives on Psychological Science · Pub Date: 2024-09-01 · Epub Date: 2023-07-31 · DOI: 10.1177/17456916231180597
Gerd Gigerenzer
Abstract: Psychological artificial intelligence (AI) applies insights from psychology to design computer algorithms. Its core domain is decision-making under uncertainty, that is, ill-defined situations that can change in unexpected ways rather than well-defined, stable problems, such as chess and Go. Psychological theories about heuristic processes under uncertainty can provide possible insights. I provide two illustrations. The first shows how recency (the human tendency to rely on the most recent information and ignore base rates) can be built into a simple algorithm that predicts the flu substantially better than did Google Flu Trends's big-data algorithms. The second uses a result from memory research (the paradoxical effect that making numbers less precise increases recall) in the design of algorithms that predict recidivism. These case studies provide an existence proof that psychological AI can help design efficient and transparent algorithms.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373155/pdf/
Citations: 0
Challenges in Understanding Human-Algorithm Entanglement During Online Information Consumption.
IF 10.5 · CAS Tier 1 · Psychology
Perspectives on Psychological Science · Pub Date: 2024-09-01 · Epub Date: 2023-07-10 · DOI: 10.1177/17456916231180809
Stephan Lewandowsky, Ronald E Robertson, Renee DiResta
Abstract: Most content consumed online is curated by proprietary algorithms deployed by social media platforms and search engines. In this article, we explore the interplay between these algorithms and human agency. Specifically, we consider the extent of entanglement or coupling between humans and algorithms along a continuum from implicit to explicit demand. We emphasize that the interactions people have with algorithms not only shape users' experiences in that moment but, because of the mutually shaping nature of such systems, can also have longer-term effects through modifications of the underlying social-network structure. Understanding these mutually shaping systems is challenging given that researchers presently lack access to relevant platform data. We argue that increased transparency, more data sharing, and greater protections for external researchers examining the algorithms are required to help researchers better understand the entanglement between humans and algorithms. This better understanding is essential to support the development of algorithms with greater benefits and fewer risks to the public.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373152/pdf/
Citations: 0
The Sins of the Parents Are to Be Laid Upon the Children: Biased Humans, Biased Data, Biased Models.
IF 10.5 · CAS Tier 1 · Psychology
Perspectives on Psychological Science · Pub Date: 2024-09-01 · Epub Date: 2023-07-18 · DOI: 10.1177/17456916231180099
Merrick R Osborne, Ali Omrani, Morteza Dehghani
Abstract: Technological innovations have become a key driver of societal advancements. Nowhere is this more evident than in the field of machine learning (ML), which has developed algorithmic models that shape our decisions, behaviors, and outcomes. These tools have widespread use, in part, because they can synthesize massive amounts of data to make seemingly objective recommendations. Yet, in the past few years, the ML community has been drawing attention to the need for caution when interpreting and using these models. This is because these models are created by humans, from data generated by humans, whose psychology allows for various biases that impact how the models are developed, trained, tested, and interpreted. As psychologists, we thus face a fork in the road: Down the first path, we can continue to use these models without examining and addressing these critical flaws and rely on computer scientists to try to mitigate them. Down the second path, we can turn our expertise in bias toward this growing field, collaborating with computer scientists to reduce the models' deleterious outcomes. This article serves to light the way down the second path by identifying how extant psychological research can help examine and curtail bias in ML models.
Citations: 0
Three Challenges for AI-Assisted Decision-Making.
IF 10.5 · CAS Tier 1 · Psychology
Perspectives on Psychological Science · Pub Date: 2024-09-01 · Epub Date: 2023-07-13 · DOI: 10.1177/17456916231181102
Mark Steyvers, Aakriti Kumar
Abstract: Artificial intelligence (AI) has the potential to improve human decision-making by providing decision recommendations and problem-relevant information to assist human decision-makers. However, the full realization of the potential of human-AI collaboration continues to face several challenges. First, the conditions that support complementarity (i.e., situations in which the performance of a human with AI assistance exceeds the performance of an unassisted human or the AI in isolation) must be understood. This task requires humans to be able to recognize situations in which the AI should be leveraged and to develop new AI systems that can learn to complement the human decision-maker. Second, human mental models of the AI, which contain both expectations of the AI and reliance strategies, must be accurately assessed. Third, the effects of different design choices for human-AI interaction must be understood, including both the timing of AI assistance and the amount of model information that should be presented to the human decision-maker to avoid cognitive overload and ineffective reliance strategies. In response to each of these three challenges, we present an interdisciplinary perspective based on recent empirical and theoretical findings and discuss new research directions.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373149/pdf/
Citations: 0
Transmission Versus Truth, Imitation Versus Innovation: What Children Can Do That Large Language and Language-and-Vision Models Cannot (Yet).
IF 10.5 · CAS Tier 1 · Psychology
Perspectives on Psychological Science · Pub Date: 2024-09-01 · Epub Date: 2023-10-26 · DOI: 10.1177/17456916231201401
Eunice Yiu, Eliza Kosoy, Alison Gopnik
Abstract: Much discussion about large language models and language-and-vision models has focused on whether these models are intelligent agents. We present an alternative perspective. First, we argue that these artificial intelligence (AI) models are cultural technologies that enhance cultural transmission and are efficient and powerful imitation engines. Second, we explore what AI models can tell us about imitation and innovation by testing whether they can be used to discover new tools and novel causal structures and contrasting their responses with those of human children. Our work serves as a first step in determining which particular representations and competences, as well as which kinds of knowledge or skill, can be derived from particular learning techniques and data. In particular, we explore which kinds of cognitive capacities can be enabled by statistical analysis of large-scale linguistic data. Critically, our findings suggest that machines may need more than large-scale language and image data to allow the kinds of innovation that a small child can produce.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373165/pdf/
Citations: 0
The Inversion Problem: Why Algorithms Should Infer Mental State and Not Just Predict Behavior.
IF 10.5 · CAS Tier 1 · Psychology
Perspectives on Psychological Science · Pub Date: 2024-09-01 · Epub Date: 2023-12-12 · DOI: 10.1177/17456916231212138
Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, Manish Raghavan
Abstract: More and more machine learning is applied to human behavior. Increasingly these algorithms suffer from a hidden but serious problem. It arises because they often predict one thing while hoping for another. Take a recommender system: It predicts clicks but hopes to identify preferences. Or take an algorithm that automates a radiologist: It predicts in-the-moment diagnoses while hoping to identify their reflective judgments. Psychology shows us the gaps between the objectives of such prediction tasks and the goals we hope to achieve: People can click mindlessly; experts can get tired and make systematic errors. We argue such situations are ubiquitous and call them "inversion problems": The real goal requires understanding a mental state that is not directly measured in behavioral data but must instead be inverted from the behavior. Identifying and solving these problems require new tools that draw on both behavioral and computational science.
Citations: 0
People Think That Social Media Platforms Do (but Should Not) Amplify Divisive Content.
IF 10.5 · CAS Tier 1 · Psychology
Perspectives on Psychological Science · Pub Date: 2024-09-01 · Epub Date: 2023-09-26 · DOI: 10.1177/17456916231190392
Steve Rathje, Claire Robertson, William J Brady, Jay J Van Bavel
Abstract: Recent studies have documented the type of content that is most likely to spread widely, or go "viral," on social media, yet little is known about people's perceptions of what goes viral or what should go viral. This is critical to understand because there is widespread debate about how to improve or regulate social media algorithms. We recruited a sample of participants that is nationally representative of the U.S. population (according to age, gender, and race/ethnicity) and surveyed them about their perceptions of social media virality (n = 511). In line with prior research, people believe that divisive content, moral outrage, negative content, high-arousal content, and misinformation are all likely to go viral online. However, they reported that this type of content should not go viral on social media. Instead, people reported that many forms of positive content, such as accurate content, nuanced content, and educational content, are not likely to go viral even though they think this content should go viral. These perceptions were shared among most participants and were only weakly related to political orientation, social media usage, and demographic variables. In sum, there is broad consensus around the type of content people think social media platforms should and should not amplify, which can help inform solutions for improving social media.
Citations: 0
Social Drivers and Algorithmic Mechanisms on Digital Media.
IF 10.5 · CAS Tier 1 · Psychology
Perspectives on Psychological Science · Pub Date: 2024-09-01 · Epub Date: 2023-07-19 · DOI: 10.1177/17456916231185057
Hannah Metzler, David Garcia
Abstract: On digital media, algorithms that process data and recommend content have become ubiquitous. Their fast and barely regulated adoption has raised concerns about their role in well-being both at the individual and collective levels. Algorithmic mechanisms on digital media are powered by social drivers, creating a feedback loop that complicates research to disentangle the role of algorithms and already existing social phenomena. Our brief overview of the current evidence on how algorithms affect well-being, misinformation, and polarization suggests that the role of algorithms in these phenomena is far from straightforward and that substantial further empirical research is needed. Existing evidence suggests that algorithms mostly reinforce existing social drivers, a finding that stresses the importance of reflecting on algorithms in the larger societal context that encompasses individualism, populist politics, and climate change. We present concrete ideas and research questions to improve algorithms on digital platforms and to investigate their role in current problems and potential solutions. Finally, we discuss how the current shift from social media to more algorithmically curated media brings both risks and opportunities if algorithms are designed for individual and societal flourishing rather than short-term profit.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373151/pdf/
Citations: 0
A Normative Framework for Assessing the Information Curation Algorithms of the Internet.
IF 10.5 · CAS Tier 1 · Psychology
Perspectives on Psychological Science · Pub Date: 2024-09-01 · Epub Date: 2023-11-27 · DOI: 10.1177/17456916231186779
David Lazer, Briony Swire-Thompson, Christo Wilson
Abstract: It is critical to understand how algorithms structure the information people see and how those algorithms support or undermine society's core values. We offer a normative framework for the assessment of the information curation algorithms that determine much of what people see on the internet. The framework presents two levels of assessment: one for individual-level effects and another for systemic effects. With regard to individual-level effects we discuss whether (a) the information is aligned with the user's interests, (b) the information is accurate, and (c) the information is so appealing that it is difficult for a person's self-regulatory resources to ignore ("agency hacking"). At the systemic level we discuss whether (a) there are adverse civic-level effects on a system-level variable, such as political polarization; (b) there are negative distributional or discriminatory effects; and (c) there are anticompetitive effects, with the information providing an advantage to the platform. The objective of this framework is both to inform the direction of future scholarship and to offer tools for intervention for policymakers.
Citations: 0
Blinding to Circumvent Human Biases: Deliberate Ignorance in Humans, Institutions, and Machines.
IF 10.5 · CAS Tier 1 · Psychology
Perspectives on Psychological Science · Pub Date: 2024-09-01 · Epub Date: 2023-09-05 · DOI: 10.1177/17456916231188052
Ralph Hertwig, Stefan M Herzog, Anastasia Kozyreva
Abstract: Inequalities and injustices are thorny issues in liberal societies, manifesting in forms such as the gender-pay gap; sentencing discrepancies among Black, Hispanic, and White defendants; and unequal medical-resource distribution across ethnicities. One cause of these inequalities is implicit social bias: unconsciously formed associations between social groups and attributions such as "nurturing," "lazy," or "uneducated." One strategy to counteract implicit and explicit human biases is delegating crucial decisions, such as how to allocate benefits, resources, or opportunities, to algorithms. Algorithms, however, are not necessarily impartial and objective. Although they can detect and mitigate human biases, they can also perpetuate and even amplify existing inequalities and injustices. We explore how a philosophical thought experiment, Rawls's "veil of ignorance," and a psychological phenomenon, deliberate ignorance, can help shield individuals, institutions, and algorithms from biases. We discuss the benefits and drawbacks of methods for shielding human and artificial decision makers from potentially biasing information. We then broaden our discussion beyond the issues of bias and fairness and turn to a research agenda aimed at improving human judgment accuracy with the assistance of algorithms that conceal information that has the potential to undermine performance. Finally, we propose interdisciplinary research questions.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373160/pdf/
Citations: 0