AI Psychometrics: Assessing the Psychological Profiles of Large Language Models Through Psychometric Inventories

Max Pellert, Clemens M. Lechner, Claudia Wagner, Beatrice Rammstedt, Markus Strohmaier

Perspectives on Psychological Science, 2024, pp. 808-826. DOI: 10.1177/17456916231214460
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373167/pdf/
Abstract
We illustrate how standard psychometric inventories originally designed for assessing noncognitive human traits can be repurposed as diagnostic tools to evaluate analogous traits in large language models (LLMs). We start from the assumption that LLMs, inadvertently yet inevitably, acquire psychological traits (metaphorically speaking) from the vast text corpora on which they are trained. Such corpora contain sediments of the personalities, values, beliefs, and biases of the countless human authors of these texts, which LLMs learn through a complex training process. The traits that LLMs acquire in such a way can potentially influence their behavior, that is, their outputs in downstream tasks and applications in which they are employed, which in turn may have real-world consequences for individuals and social groups. By eliciting LLMs' responses to language-based psychometric inventories, we can bring their traits to light. Psychometric profiling enables researchers to study and compare LLMs in terms of noncognitive characteristics, thereby providing a window into the personalities, values, beliefs, and biases these models exhibit (or mimic). We discuss the history of similar ideas and outline possible psychometric approaches for LLMs. We demonstrate one promising approach, zero-shot classification, for several LLMs and psychometric inventories. We conclude by highlighting open challenges and future avenues of research for AI Psychometrics.
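To make the zero-shot classification approach concrete, the following minimal Python sketch shows how a single inventory item could be administered to a language model through the Hugging Face zero-shot-classification pipeline. The model name (facebook/bart-large-mnli), the item wording, and the Likert-style answer labels are illustrative assumptions and do not reproduce the authors' exact setup.

```python
# Minimal sketch (assumed setup): presenting one inventory item to a
# zero-shot classifier via the Hugging Face transformers pipeline.
from transformers import pipeline

# Illustrative model choice; any NLI-based zero-shot model could be used.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Hypothetical Big Five-style statement (wording for illustration only).
item = "I see myself as someone who is outgoing and sociable."

# Likert-style response options serve as candidate labels.
labels = ["agree", "neither agree nor disagree", "disagree"]

result = classifier(item, candidate_labels=labels)

# The pipeline returns the labels ranked by probability; the top-ranked
# label can be read as the model's "response" to the item.
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")
```

Aggregating such item-level responses across a full inventory would yield the scale scores from which a psychometric profile of the model can be constructed.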
Journal Introduction:
Perspectives on Psychological Science publishes a diverse range of articles and reports in the field of psychology, including broad integrative reviews, overviews of research programs, meta-analyses, theoretical statements, book reviews, articles on topics such as the philosophy of science, and opinion pieces on major issues in the field. It also features autobiographical reflections by senior members of the field, occasional humorous essays and sketches, and a section for invited and submitted articles.
The journal's impact is illustrated by a 2009 article on correlational analyses commonly used in neuroimaging studies, which continues to influence the field. More recently, a special issue of Perspectives in which prominent researchers discuss the "Next Big Questions in Psychology" is shaping the future trajectory of the discipline.
Perspectives on Psychological Science reports metrics that describe the journal's performance. However, the Association for Psychological Science, the journal's publisher and a signatory of DORA, recommends against using journal-based metrics to assess the contributions of individual scientists, for example in hiring, promotion, or funding decisions. The metrics reported here should therefore be used only to evaluate the journal itself.