{"title":"Psychological AI: Designing Algorithms Informed by Human Psychology.","authors":"Gerd Gigerenzer","doi":"10.1177/17456916231180597","DOIUrl":"10.1177/17456916231180597","url":null,"abstract":"<p>Psychological artificial intelligence (AI) applies insights from psychology to design computer algorithms. Its core domain is decision-making under uncertainty, that is, ill-defined situations that can change in unexpected ways rather than well-defined, stable problems, such as chess and Go. Psychological theories about heuristic processes under uncertainty can provide possible insights. I provide two illustrations. The first shows how recency-the human tendency to rely on the most recent information and ignore base rates-can be built into a simple algorithm that predicts the flu substantially better than did Google Flu Trends's big-data algorithms. The second uses a result from memory research-the paradoxical effect that making numbers less precise increases recall-in the design of algorithms that predict recidivism. These case studies provide an existence proof that psychological AI can help design efficient and transparent algorithms.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"839-848"},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373155/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10274200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
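The abstract above mentions building the recency heuristic into a simple flu-prediction algorithm. A minimal sketch of that idea, assuming a lag-1 forecast (predict the next value as the most recent observation); the function name and the weekly figures are illustrative assumptions, not taken from the paper:

```python
def recency_forecast(series):
    """Lag-1 recency heuristic: predict the next observation
    as the most recent one, ignoring the rest of the history."""
    if not series:
        raise ValueError("need at least one observation")
    return series[-1]

# Hypothetical weekly proportions of flu-related doctor visits
weekly_flu = [0.021, 0.024, 0.031, 0.045]
print(recency_forecast(weekly_flu))  # prints 0.045
```

The appeal of such a heuristic is transparency: the prediction rule has no tunable parameters and can be audited at a glance, unlike a big-data model.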
{"title":"The Sins of the Parents Are to Be Laid Upon the Children: Biased Humans, Biased Data, Biased Models.","authors":"Merrick R Osborne, Ali Omrani, Morteza Dehghani","doi":"10.1177/17456916231180099","DOIUrl":"10.1177/17456916231180099","url":null,"abstract":"<p>Technological innovations have become a key driver of societal advancements. Nowhere is this more evident than in the field of machine learning (ML), which has developed algorithmic models that shape our decisions, behaviors, and outcomes. These tools have widespread use, in part, because they can synthesize massive amounts of data to make seemingly objective recommendations. Yet, in the past few years, the ML community has been drawing attention to the need for caution when interpreting and using these models. This is because these models are created by humans, from data generated by humans, whose psychology allows for various biases that impact how the models are developed, trained, tested, and interpreted. As psychologists, we thus face a fork in the road: Down the first path, we can continue to use these models without examining and addressing these critical flaws and rely on computer scientists to try to mitigate them. Down the second path, we can turn our expertise in bias toward this growing field, collaborating with computer scientists to reduce the models' deleterious outcomes. This article serves to light the way down the second path by identifying how extant psychological research can help examine and curtail bias in ML models.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"796-807"},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10185871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Three Challenges for AI-Assisted Decision-Making.","authors":"Mark Steyvers, Aakriti Kumar","doi":"10.1177/17456916231181102","DOIUrl":"10.1177/17456916231181102","url":null,"abstract":"<p>Artificial intelligence (AI) has the potential to improve human decision-making by providing decision recommendations and problem-relevant information to assist human decision-makers. However, the full realization of the potential of human-AI collaboration continues to face several challenges. First, the conditions that support complementarity (i.e., situations in which the performance of a human with AI assistance exceeds the performance of an unassisted human or the AI in isolation) must be understood. This task requires humans to be able to recognize situations in which the AI should be leveraged and to develop new AI systems that can learn to complement the human decision-maker. Second, human mental models of the AI, which contain both expectations of the AI and reliance strategies, must be accurately assessed. Third, the effects of different design choices for human-AI interaction must be understood, including both the timing of AI assistance and the amount of model information that should be presented to the human decision-maker to avoid cognitive overload and ineffective reliance strategies. In response to each of these three challenges, we present an interdisciplinary perspective based on recent empirical and theoretical findings and discuss new research directions.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"722-734"},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373149/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9770751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transmission Versus Truth, Imitation Versus Innovation: What Children Can Do That Large Language and Language-and-Vision Models Cannot (Yet).","authors":"Eunice Yiu, Eliza Kosoy, Alison Gopnik","doi":"10.1177/17456916231201401","DOIUrl":"10.1177/17456916231201401","url":null,"abstract":"<p>Much discussion about large language models and language-and-vision models has focused on whether these models are intelligent agents. We present an alternative perspective. First, we argue that these artificial intelligence (AI) models are cultural technologies that enhance cultural transmission and are efficient and powerful imitation engines. Second, we explore what AI models can tell us about imitation and innovation by testing whether they can be used to discover new tools and novel causal structures and contrasting their responses with those of human children. Our work serves as a first step in determining which particular representations and competences, as well as which kinds of knowledge or skill, can be derived from particular learning techniques and data. In particular, we explore which kinds of cognitive capacities can be enabled by statistical analysis of large-scale linguistic data. Critically, our findings suggest that machines may need more than large-scale language and image data to allow the kinds of innovation that a small child can produce.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"874-883"},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373165/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"54230419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Social Drivers and Algorithmic Mechanisms on Digital Media.","authors":"Hannah Metzler, David Garcia","doi":"10.1177/17456916231185057","DOIUrl":"10.1177/17456916231185057","url":null,"abstract":"<p>On digital media, algorithms that process data and recommend content have become ubiquitous. Their fast and barely regulated adoption has raised concerns about their role in well-being both at the individual and collective levels. Algorithmic mechanisms on digital media are powered by social drivers, creating a feedback loop that complicates research to disentangle the role of algorithms and already existing social phenomena. Our brief overview of the current evidence on how algorithms affect well-being, misinformation, and polarization suggests that the role of algorithms in these phenomena is far from straightforward and that substantial further empirical research is needed. Existing evidence suggests that algorithms mostly reinforce existing social drivers, a finding that stresses the importance of reflecting on algorithms in the larger societal context that encompasses individualism, populist politics, and climate change. We present concrete ideas and research questions to improve algorithms on digital platforms and to investigate their role in current problems and potential solutions. Finally, we discuss how the current shift from social media to more algorithmically curated media brings both risks and opportunities if algorithms are designed for individual and societal flourishing rather than short-term profit.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"735-748"},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373151/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9822531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"People Think That Social Media Platforms Do (but Should Not) Amplify Divisive Content.","authors":"Steve Rathje, Claire Robertson, William J Brady, Jay J Van Bavel","doi":"10.1177/17456916231190392","DOIUrl":"10.1177/17456916231190392","url":null,"abstract":"<p>Recent studies have documented the type of content that is most likely to spread widely, or go \"viral,\" on social media, yet little is known about people's perceptions of what goes viral or what should go viral. This is critical to understand because there is widespread debate about how to improve or regulate social media algorithms. We recruited a sample of participants that is nationally representative of the U.S. population (according to age, gender, and race/ethnicity) and surveyed them about their perceptions of social media virality (<i>n</i> = 511). In line with prior research, people believe that divisive content, moral outrage, negative content, high-arousal content, and misinformation are all likely to go viral online. However, they reported that this type of content should not go viral on social media. Instead, people reported that many forms of positive content-such as accurate content, nuanced content, and educational content-are not likely to go viral even though they think this content should go viral. These perceptions were shared among most participants and were only weakly related to political orientation, social media usage, and demographic variables. In sum, there is broad consensus around the type of content people think social media platforms should and should not amplify, which can help inform solutions for improving social media.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"781-795"},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41109994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Blinding to Circumvent Human Biases: Deliberate Ignorance in Humans, Institutions, and Machines.","authors":"Ralph Hertwig, Stefan M Herzog, Anastasia Kozyreva","doi":"10.1177/17456916231188052","DOIUrl":"10.1177/17456916231188052","url":null,"abstract":"<p>Inequalities and injustices are thorny issues in liberal societies, manifesting in forms such as the gender-pay gap; sentencing discrepancies among Black, Hispanic, and White defendants; and unequal medical-resource distribution across ethnicities. One cause of these inequalities is <i>implicit social bias</i>-unconsciously formed associations between social groups and attributions such as \"nurturing,\" \"lazy,\" or \"uneducated.\" One strategy to counteract implicit and explicit human biases is delegating crucial decisions, such as how to allocate benefits, resources, or opportunities, to algorithms. Algorithms, however, are not necessarily impartial and objective. Although they can detect and mitigate human biases, they can also perpetuate and even amplify existing inequalities and injustices. We explore how a philosophical thought experiment, Rawls's \"veil of ignorance,\" and a psychological phenomenon, deliberate ignorance, can help shield individuals, institutions, and algorithms from biases. We discuss the benefits and drawbacks of methods for shielding human and artificial decision makers from potentially biasing information. We then broaden our discussion beyond the issues of bias and fairness and turn to a research agenda aimed at improving human judgment accuracy with the assistance of algorithms that conceal information that has the potential to undermine performance. Finally, we propose interdisciplinary research questions.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"849-859"},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373160/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10157746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human and Algorithmic Predictions in Geopolitical Forecasting: Quantifying Uncertainty in Hard-to-Quantify Domains.","authors":"Barbara A Mellers, John P McCoy, Louise Lu, Philip E Tetlock","doi":"10.1177/17456916231185339","DOIUrl":"10.1177/17456916231185339","url":null,"abstract":"<p>Research on clinical versus statistical prediction has demonstrated that algorithms make more accurate predictions than humans in many domains. Geopolitical forecasting is an algorithm-unfriendly domain, with hard-to-quantify data and elusive reference classes that make predictive model-building difficult. Furthermore, the stakes can be high, with missed forecasts leading to mass-casualty consequences. For these reasons, geopolitical forecasting is typically done by humans, even though algorithms play important roles. They are essential as aggregators of crowd wisdom, as frameworks to partition human forecasting variance, and as inputs to hybrid forecasting models. Algorithms are extremely important in this domain. We doubt that humans will relinquish control to algorithms anytime soon-nor do we think they should. However, the accuracy of forecasts will greatly improve if humans are aided by algorithms.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"711-721"},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373164/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10109373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Managing Fear During Pandemics: Risks and Opportunities.","authors":"Gaëtan Mertens, Iris M Engelhard, Derek M Novacek, Richard J McNally","doi":"10.1177/17456916231178720","DOIUrl":"10.1177/17456916231178720","url":null,"abstract":"<p>Fear is an emotion triggered by the perception of danger and motivates safety behaviors. Within the context of the COVID-19 pandemic, there were ample danger cues (e.g., images of patients on ventilators) and a high need for people to use appropriate safety behaviors (e.g., social distancing). Given this central role of fear within the context of a pandemic, it is important to review some of the emerging findings and lessons learned during the COVID-19 pandemic and their implications for managing fear. We highlight factors that determine fear (i.e., proximity, predictability, and controllability) and review several adaptive and maladaptive consequences of fear of COVID-19 (e.g., following governmental health policies and panic buying). Finally, we provide directions for future research and make policy recommendations that can promote adequate health behaviors and limit the negative consequences of fear during pandemics.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"652-659"},"PeriodicalIF":10.5,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10293863/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9705158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cooperation in the Time of COVID.","authors":"Jade Butterworth, David Smerdon, Roy Baumeister, William von Hippel","doi":"10.1177/17456916231178719","DOIUrl":"10.1177/17456916231178719","url":null,"abstract":"<p>Humans evolved to be hyper-cooperative, particularly when among people who are well known to them, when relationships involve reciprocal helping opportunities, and when the costs to the helper are substantially less than the benefits to the recipient. Because humans' cooperative nature evolved over many millennia when they lived exclusively in small groups, factors that cause cooperation to break down tend to be those associated with life in large, impersonal, modern societies: when people are not identifiable, when interactions are one-off, when self-interest is not tied to the interests of others, and when people are concerned that others might free ride. From this perspective, it becomes clear that policies for managing pandemics will be most effective when they highlight superordinate goals and connect people or institutions to one another over multiple identifiable interactions. When forging such connections is not possible, policies should mimic critical components of ancestral conditions by providing reputational markers for cooperators and reducing the systemic damage caused by free riding. In this article, we review policies implemented during the pandemic, highlighting spontaneous community efforts that leveraged these aspects of people's evolved psychology, and consider implications for future decision makers.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"640-651"},"PeriodicalIF":10.5,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10311366/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9742818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}