Moral decision making: Explainable insights into the role of working memory in autonomous driving

Amandeep Singh, Yovela Murzello, Hyowon Lee, Shene Abdalla, Siby Samuel

Machine Learning with Applications, Volume 18, Article 100599. Published 2024-10-31. DOI: 10.1016/j.mlwa.2024.100599
The intersection of Artificial Intelligence (AI) and moral philosophy presents unique challenges in the development of autonomous vehicles (AVs), particularly in scenarios requiring split-second ethical decisions. This study examines the relationship between working memory (WM) and moral judgments in simulated AV scenarios, quantifying the effects of varying cognitive load on utilitarian decision-making under different time constraints. We conducted an experiment with 336 participants, each completing 16 simulated driving trials presenting unique ethical dilemmas. Results reveal a complex interplay between cognitive load and ethical choices. Under high temporal pressure (a 1-second response window), utilitarian decisions decreased significantly, from 92.77 % to 70.08 %, whereas longer response windows led to more utilitarian choices. Statistical analyses validated these findings across diverse ethical contexts. Chi-square tests revealed significant associations between WM load and utilitarian decisions in the 1-second condition, particularly for high-stakes scenarios, and logistic regression showed that WM significantly decreased the likelihood of utilitarian decisions in these scenarios. Six supervised machine learning models were employed, with Gaussian Naive Bayes achieving the highest predictive accuracy (82.2 % to 97.0 %) in distinguishing utilitarian decisions. Partial dependence analysis revealed a strong negative correlation between WM and utilitarian decisions, especially in the 1-second condition. The 2-second window emerged as potentially optimal for balancing time constraints and cognitive load. These findings contribute to the theoretical understanding of ethical decision-making under cognitive load and provide practical insights for developing ethically aligned autonomous systems, with implications for improving safety, optimizing takeover protocols, and enhancing the ethical reasoning capabilities of autonomous driving systems.
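As a rough illustration of the modelling pipeline the abstract describes, the sketch below (not the authors' code) fits a Gaussian Naive Bayes classifier and a logistic regression to synthetic data, then computes the partial dependence of the predicted utilitarian choice on the WM feature with scikit-learn. The feature names (`wm_score`, `time_window_s`), the simulated effect sizes, and the data-generating assumptions are hypothetical stand-ins for the study's variables, chosen only to mirror the reported direction of effects.

```python
# Minimal sketch, assuming synthetic stand-ins for the study's variables.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: a working-memory measure per trial and the
# response window in seconds (1 s, 2 s, 3 s), as in the abstract's conditions.
wm_score = rng.normal(0.0, 1.0, n)
time_window_s = rng.choice([1.0, 2.0, 3.0], n)

# Assumed relationship mirroring the reported trend: higher WM load and
# tighter time windows reduce the probability of a utilitarian choice.
logit = 1.5 - 0.8 * wm_score - 1.2 / time_window_s
p_utilitarian = 1.0 / (1.0 + np.exp(-logit))
y = (rng.random(n) < p_utilitarian).astype(int)  # 1 = utilitarian choice

X = np.column_stack([wm_score, time_window_s])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gaussian Naive Bayes classifier, the best-performing model in the abstract.
gnb = GaussianNB().fit(X_train, y_train)
print("GNB accuracy:", accuracy_score(y_test, gnb.predict(X_test)))

# Logistic regression: the sign of the WM coefficient indicates direction.
logreg = LogisticRegression().fit(X_train, y_train)
print("logistic regression WM coefficient:", logreg.coef_[0][0])  # expected negative

# Partial dependence of the predicted utilitarian probability on WM (feature 0).
pd_result = partial_dependence(gnb, X_train, features=[0], kind="average")
avg = pd_result["average"][0]
print("PD at lowest WM grid point :", avg[0])
print("PD at highest WM grid point:", avg[-1])  # lower than avg[0] under these assumptions
```

Under these synthetic assumptions, the partial dependence curve declines as the WM feature increases, which is the qualitative pattern the abstract reports; the actual magnitudes and accuracies depend entirely on the study's data.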