{"title":"MacBehaviour: An R package for behavioural experimentation on large language models.","authors":"Xufeng Duan, Shixuan Li, Zhenguang G Cai","doi":"10.3758/s13428-024-02524-y","DOIUrl":null,"url":null,"abstract":"<p><p>The study of large language models (LLMs) and LLM-powered chatbots has gained significant attention in recent years, with researchers treating LLMs as participants in psychological experiments. To facilitate this research, we developed an R package called \"MacBehaviour \" ( https://github.com/xufengduan/MacBehaviour ), which interacts with over 100 LLMs, including OpenAI's GPT family, the Claude family, Gemini, Llama family, and other open-weight models. The package streamlines the processes of LLM behavioural experimentation by providing a comprehensive set of functions for experiment design, stimuli presentation, model behaviour manipulation, and logging responses and token probabilities. With a few lines of code, researchers can seamlessly set up and conduct psychological experiments, making LLM behaviour studies highly accessible. To validate the utility and effectiveness of \"MacBehaviour,\" we conducted three experiments on GPT-3.5 Turbo, Llama-2-7b-chat-hf, and Vicuna-1.5-13b, replicating the sound-gender association in LLMs. The results consistently demonstrated that these LLMs exhibit human-like tendencies to infer gender from novel personal names based on their phonology, as previously shown by Cai et al. (2024). In conclusion, \"MacBehaviour\" is a user-friendly R package that simplifies and standardises the experimental process for machine behaviour studies, offering a valuable tool for researchers in this field.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"19"},"PeriodicalIF":4.6000,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Behavior Research Methods","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.3758/s13428-024-02524-y","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Abstract
The study of large language models (LLMs) and LLM-powered chatbots has gained significant attention in recent years, with researchers treating LLMs as participants in psychological experiments. To facilitate this research, we developed an R package called "MacBehaviour" (https://github.com/xufengduan/MacBehaviour), which interacts with over 100 LLMs, including OpenAI's GPT family, the Claude family, Gemini, the Llama family, and other open-weight models. The package streamlines LLM behavioural experimentation by providing a comprehensive set of functions for experiment design, stimulus presentation, model behaviour manipulation, and logging of responses and token probabilities. With a few lines of code, researchers can set up and conduct psychological experiments, making LLM behaviour studies highly accessible. To validate the utility and effectiveness of "MacBehaviour", we conducted three experiments on GPT-3.5 Turbo, Llama-2-7b-chat-hf, and Vicuna-1.5-13b, replicating the sound-gender association in LLMs. The results consistently demonstrated that these LLMs exhibit a human-like tendency to infer gender from the phonology of novel personal names, as previously shown by Cai et al. (2024). In conclusion, "MacBehaviour" is a user-friendly R package that simplifies and standardises the experimental process for machine behaviour studies, offering a valuable tool for researchers in this field.
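The abstract only summarises the workflow: authenticate with a model provider, load a stimulus list, configure the design, and run the experiment while responses (and, optionally, token probabilities) are logged. The R sketch below illustrates that flow. It is a minimal, hedged example: the function names (setKey, loadData, experimentDesign, preCheck, runExperiment) follow the package's GitHub documentation, but the exact argument names, the stimulus-column layout, the example prompts, and the placeholder API key and model name are assumptions and may differ from the released version.

# Minimal sketch of a MacBehaviour workflow (illustrative; argument names and
# stimulus columns are assumptions based on the package's documentation).
# install.packages("MacBehaviour")  # or remotes::install_github("xufengduan/MacBehaviour")
library(MacBehaviour)

# Authenticate with the model provider (placeholder key and model name).
setKey(api_key = "YOUR_API_KEY", model = "gpt-3.5-turbo")

# One row per trial: novel personal names to be judged for gender,
# mirroring the sound-gender replication reported in the abstract.
stimuli <- data.frame(
  Run       = c(1, 2),
  Item      = c(1, 2),
  Condition = c("male-sounding", "female-sounding"),
  Prompt    = c("Is 'Pelcrad' more likely a male or a female name? Answer in one word.",
                "Is 'Beniga' more likely a male or a female name? Answer in one word.")
)

# Load the stimuli, set up the design, sanity-check it, then run and log responses.
exp_data   <- loadData(stimuli)
exp_design <- experimentDesign(exp_data, session = 1)
preCheck(exp_design)
runExperiment(exp_design)

Run as written, this sketch would query the chosen model once per prompt and log the responses, which is the pattern the paper's three validation experiments follow; for real use, consult the package README for the current function signatures and supported models.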
About the journal:
Behavior Research Methods publishes articles concerned with the methods, techniques, and instrumentation of research in experimental psychology. The journal focuses particularly on the use of computer technology in psychological research. An annual special issue is devoted to this field.