Gender biases within Artificial Intelligence and ChatGPT: Evidence, Sources of Biases and Solutions

Jerlyn Q.H. Ho, Andree Hartanto, Andrew Koh, Nadyanna M. Majeed
{"title":"人工智能和聊天技术中的性别偏见:证据、偏见来源和解决方案","authors":"Jerlyn Q.H. Ho , Andree Hartanto , Andrew Koh , Nadyanna M. Majeed","doi":"10.1016/j.chbah.2025.100145","DOIUrl":null,"url":null,"abstract":"<div><div>The growing adoption of Artificial Intelligence (AI) in various sectors has introduced significant benefits, but also raised concerns over biases, particularly in relation to gender. Despite AI's potential to enhance sectors like healthcare, education, and business, it often mirrors reality and its societal prejudices and can manifest itself through unequal treatment in hiring decisions, academic recommendations, or healthcare diagnostics, systematically disadvantaging women. This paper explores how AI systems and chatbots, notably ChatGPT, can perpetuate gender biases due to inherent flaws in training data, algorithms, and user feedback loops. This problem stems from several sources, including biased training datasets, algorithmic design choices, and human biases. To mitigate these issues, various interventions are discussed, including improving data quality, diversifying datasets and annotator pools, integrating fairness-centric algorithmic approaches, and establishing robust policy frameworks at corporate, national, and international levels. Ultimately, addressing AI bias requires a multi-faceted approach involving researchers, developers, and policymakers to ensure AI systems operate fairly and equitably.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100145"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Gender biases within Artificial Intelligence and ChatGPT: Evidence, Sources of Biases and Solutions\",\"authors\":\"Jerlyn Q.H. Ho , Andree Hartanto , Andrew Koh , Nadyanna M. Majeed\",\"doi\":\"10.1016/j.chbah.2025.100145\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The growing adoption of Artificial Intelligence (AI) in various sectors has introduced significant benefits, but also raised concerns over biases, particularly in relation to gender. Despite AI's potential to enhance sectors like healthcare, education, and business, it often mirrors reality and its societal prejudices and can manifest itself through unequal treatment in hiring decisions, academic recommendations, or healthcare diagnostics, systematically disadvantaging women. This paper explores how AI systems and chatbots, notably ChatGPT, can perpetuate gender biases due to inherent flaws in training data, algorithms, and user feedback loops. This problem stems from several sources, including biased training datasets, algorithmic design choices, and human biases. To mitigate these issues, various interventions are discussed, including improving data quality, diversifying datasets and annotator pools, integrating fairness-centric algorithmic approaches, and establishing robust policy frameworks at corporate, national, and international levels. 
Ultimately, addressing AI bias requires a multi-faceted approach involving researchers, developers, and policymakers to ensure AI systems operate fairly and equitably.</div></div>\",\"PeriodicalId\":100324,\"journal\":{\"name\":\"Computers in Human Behavior: Artificial Humans\",\"volume\":\"4 \",\"pages\":\"Article 100145\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-03-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in Human Behavior: Artificial Humans\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2949882125000295\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882125000295","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The growing adoption of Artificial Intelligence (AI) across sectors has brought significant benefits but has also raised concerns over bias, particularly in relation to gender. Despite AI's potential to enhance fields such as healthcare, education, and business, it often mirrors societal prejudices, which can manifest as unequal treatment in hiring decisions, academic recommendations, or healthcare diagnostics, systematically disadvantaging women. This paper explores how AI systems and chatbots, notably ChatGPT, can perpetuate gender biases, tracing the problem to several sources: biased training datasets, algorithmic design choices, user feedback loops, and human biases. To mitigate these issues, various interventions are discussed, including improving data quality, diversifying datasets and annotator pools, integrating fairness-centric algorithmic approaches, and establishing robust policy frameworks at the corporate, national, and international levels. Ultimately, addressing AI bias requires a multi-faceted approach involving researchers, developers, and policymakers to ensure AI systems operate fairly and equitably.
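To make the abstract's mention of "fairness-centric algorithmic approaches" concrete, the following is a minimal Python sketch of one such check: the demographic parity gap, the difference in favourable-outcome rates (e.g. hiring recommendations) between gender groups. The sketch is illustrative only, not taken from the paper; the demographic_parity_gap helper and all data are hypothetical.

```python
# Minimal sketch (hypothetical, not from the paper): demographic parity gap,
# i.e. the gap in positive-outcome rates between groups for some decision
# system, such as a model screening job applicants.

def demographic_parity_gap(outcomes, groups):
    """Return (gap, per-group rates) for binary outcomes.

    outcomes: list of 0/1 decisions (1 = favourable, e.g. "recommend hire")
    groups:   list of group labels aligned with outcomes (e.g. "F", "M")
    """
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values()), rates

# Fabricated toy data: the model recommends 4 of 6 male applicants
# but only 2 of 6 female applicants.
outcomes = [1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0]
groups = ["M"] * 6 + ["F"] * 6

gap, rates = demographic_parity_gap(outcomes, groups)
print(rates)               # {'M': 0.666..., 'F': 0.333...}
print(f"gap = {gap:.2f}")  # gap = 0.33; values near 0 suggest parity
```

Demographic parity is only one of several competing fairness criteria (others condition on qualifications, as equalized odds does), so a near-zero gap on this metric alone does not establish that a system is unbiased.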