{"title":"道德在哪里?自动驾驶汽车伦理困境下大语言模型驱动决策的解码","authors":"Zixuan Xu , Neha Sengar , Tiantian Chen , Hyungchul Chung , Oscar Oviedo-Trespalacios","doi":"10.1016/j.tbs.2025.101039","DOIUrl":null,"url":null,"abstract":"<div><div>Large Language Models have attracted global attention due to their capabilities in understanding, knowledge synthesis, and generating contextually relevant responses, mimicking certain aspects of human reasoning. Although LLMs have demonstrated feasibility in performing autonomous driving tasks in simulated and real-world environments, little is known about their safety and ethical decision-making. To address these questions, we propose a novel framework for evaluating and interpreting the ethical decision-making mechanism of LLM-driven autonomous vehicles. Our study investigates the ethical dilemma of prioritizing saving pedestrians or passengers inspired by the Moral Machine Experiment. We used a stated preference survey to include factors of group size, age, gender, fatality risk, and pedestrian behavior to create 13,122 choice scenarios (a full factorial design) to analyze responses from advanced LLMs, including the GPT-4 series models from OpenAI and Mistral-Large from Mistral AI. Our findings reveal significant differences in the decision-making process and preferences for saving road users among these LLMs. Using a binary logit model to interpret GPT-4′s decisions, we found that the estimated number of deaths, age, and gender significantly affect the model’s choices. The decision tree method was also applied to analyze LLMs’ decision-making processes, uncovering potential ethical standards and conditions considered by the models. 
This study provides valuable insights into ethical considerations in AI systems and thus facilitates the responsible development of AI in autonomous vehicles.</div></div>","PeriodicalId":51534,"journal":{"name":"Travel Behaviour and Society","volume":"40 ","pages":"Article 101039"},"PeriodicalIF":5.1000,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Where is morality on wheels? Decoding large language model (LLM)-driven decision in the ethical dilemmas of autonomous vehicles\",\"authors\":\"Zixuan Xu , Neha Sengar , Tiantian Chen , Hyungchul Chung , Oscar Oviedo-Trespalacios\",\"doi\":\"10.1016/j.tbs.2025.101039\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Large Language Models have attracted global attention due to their capabilities in understanding, knowledge synthesis, and generating contextually relevant responses, mimicking certain aspects of human reasoning. Although LLMs have demonstrated feasibility in performing autonomous driving tasks in simulated and real-world environments, little is known about their safety and ethical decision-making. To address these questions, we propose a novel framework for evaluating and interpreting the ethical decision-making mechanism of LLM-driven autonomous vehicles. Our study investigates the ethical dilemma of prioritizing saving pedestrians or passengers inspired by the Moral Machine Experiment. We used a stated preference survey to include factors of group size, age, gender, fatality risk, and pedestrian behavior to create 13,122 choice scenarios (a full factorial design) to analyze responses from advanced LLMs, including the GPT-4 series models from OpenAI and Mistral-Large from Mistral AI. Our findings reveal significant differences in the decision-making process and preferences for saving road users among these LLMs. 
Using a binary logit model to interpret GPT-4′s decisions, we found that the estimated number of deaths, age, and gender significantly affect the model’s choices. The decision tree method was also applied to analyze LLMs’ decision-making processes, uncovering potential ethical standards and conditions considered by the models. This study provides valuable insights into ethical considerations in AI systems and thus facilitates the responsible development of AI in autonomous vehicles.</div></div>\",\"PeriodicalId\":51534,\"journal\":{\"name\":\"Travel Behaviour and Society\",\"volume\":\"40 \",\"pages\":\"Article 101039\"},\"PeriodicalIF\":5.1000,\"publicationDate\":\"2025-04-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Travel Behaviour and Society\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2214367X25000572\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"TRANSPORTATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Travel Behaviour and Society","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2214367X25000572","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TRANSPORTATION","Score":null,"Total":0}
Where is morality on wheels? Decoding large language model (LLM)-driven decision in the ethical dilemmas of autonomous vehicles
Large Language Models (LLMs) have attracted global attention for their capabilities in understanding, knowledge synthesis, and generating contextually relevant responses, mimicking certain aspects of human reasoning. Although LLMs have demonstrated feasibility in performing autonomous driving tasks in simulated and real-world environments, little is known about their safety and ethical decision-making. To address these questions, we propose a novel framework for evaluating and interpreting the ethical decision-making mechanisms of LLM-driven autonomous vehicles. Inspired by the Moral Machine Experiment, our study investigates the ethical dilemma of prioritizing saving pedestrians or passengers. We used a stated preference survey covering group size, age, gender, fatality risk, and pedestrian behavior to create 13,122 choice scenarios (a full factorial design) and analyzed responses from advanced LLMs, including the GPT-4 series models from OpenAI and Mistral-Large from Mistral AI. Our findings reveal significant differences among these LLMs in their decision-making processes and preferences for saving road users. Using a binary logit model to interpret GPT-4's decisions, we found that the estimated number of deaths, age, and gender significantly affect the model's choices. A decision tree method was also applied to analyze the LLMs' decision-making processes, uncovering potential ethical standards and conditions considered by the models. This study provides valuable insights into ethical considerations in AI systems and thus facilitates the responsible development of AI in autonomous vehicles.
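The full factorial design mentioned in the abstract can be sketched as follows. The paper does not list its exact factor levels here, so the levels below are hypothetical reconstructions chosen only to show how crossing every factor level reproduces the stated scenario count: with three levels each for group size, age, gender, and fatality risk on both the pedestrian and passenger sides, plus two pedestrian-behaviour states, the design yields 2 × 3⁸ = 13,122 scenarios.

```python
from itertools import product

# Hypothetical factor levels for illustration only; the study's actual
# levels may differ. "ped_" = pedestrian side, "pax_" = passenger side.
factors = {
    "ped_group_size": [1, 2, 3],
    "ped_age": ["child", "adult", "elderly"],
    "ped_gender": ["male", "female", "mixed"],
    "ped_fatality_risk": ["low", "medium", "high"],
    "pax_group_size": [1, 2, 3],
    "pax_age": ["child", "adult", "elderly"],
    "pax_gender": ["male", "female", "mixed"],
    "pax_fatality_risk": ["low", "medium", "high"],
    "ped_behavior": ["lawful", "jaywalking"],
}

# Cross every level of every factor: the full factorial design.
scenarios = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(scenarios))  # 13122
```

Each scenario dictionary would then be rendered into a natural-language prompt and posed to each LLM as a binary choice (save pedestrians vs. save passengers), producing the response data analyzed with the binary logit and decision tree models.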
About the journal:
Travel Behaviour and Society is an interdisciplinary journal publishing high-quality original papers that report leading-edge research in theories, methodologies, and applications concerning transportation issues and challenges involving social and spatial dimensions. In particular, it provides a discussion forum for major research in travel behaviour, transportation infrastructure, transportation and environmental issues, mobility and social sustainability, transportation geographic information systems (TGIS), transportation and quality of life, and transportation data collection and analysis.