{"title":"Breaking the Bias: Gender Fairness in LLMs Using Prompt Engineering and In-Context Learning","authors":"Satyam Dwivedi, Sanjukta Ghosh, Shivam Dwivedi","doi":"10.21659/rupkatha.v15n4.10","DOIUrl":null,"url":null,"abstract":"Large Language Models (LLMs) have been identified as carriers of societal biases, particularly in gender representation. This study introduces an innovative approach employing prompt engineering and in-context learning to rectify these biases in LLMs. Through our methodology, we effectively guide LLMs to generate more equitable content, emphasizing nuanced prompts and in-context feedback. Experimental results on openly available LLMs such as BARD, ChatGPT, and LLAMA2-Chat indicate a significant reduction in gender bias, particularly in traditionally problematic areas such as ‘Literature’. Our findings underscore the potential of prompt engineering and in-context learning as powerful tools in the quest for unbiased AI language models.","PeriodicalId":43128,"journal":{"name":"Rupkatha Journal on Interdisciplinary Studies in Humanities","volume":"54 1","pages":""},"PeriodicalIF":0.2000,"publicationDate":"2023-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Rupkatha Journal on Interdisciplinary Studies in Humanities","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21659/rupkatha.v15n4.10","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"HUMANITIES, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
Large Language Models (LLMs) have been identified as carriers of societal biases, particularly in gender representation. This study introduces an innovative approach employing prompt engineering and in-context learning to rectify these biases in LLMs. Through our methodology, we effectively guide LLMs to generate more equitable content, emphasizing nuanced prompts and in-context feedback. Experimental results on openly available LLMs such as BARD, ChatGPT, and LLAMA2-Chat indicate a significant reduction in gender bias, particularly in traditionally problematic areas such as ‘Literature’. Our findings underscore the potential of prompt engineering and in-context learning as powerful tools in the quest for unbiased AI language models.
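The abstract does not reproduce the authors' prompts, so the following is a minimal sketch of what a few-shot, in-context-learning prompt for gender-fair generation could look like. The instruction text, the exemplars, the query_llm placeholder, and the pronoun-count probe are illustrative assumptions, not the paper's published materials or evaluation metric.

```python
# Sketch of an in-context-learning prompt intended to steer an LLM toward
# gender-balanced output. Exemplars and the bias probe are hypothetical.

BIAS_MITIGATION_INSTRUCTION = (
    "When describing professions or characters, do not assume gender. "
    "Use gender-neutral language or balance gendered references."
)

# Hypothetical counter-stereotypical exemplars supplied as in-context feedback.
FEW_SHOT_EXEMPLARS = [
    ("Describe a typical nurse.",
     "A nurse cares for patients; they may be of any gender."),
    ("Write a sentence about a famous novelist.",
     "The novelist, whether she or he, shaped modern literature."),
]

def build_prompt(user_query: str) -> str:
    """Assemble instruction + exemplars + query into one in-context prompt."""
    parts = [BIAS_MITIGATION_INSTRUCTION]
    for question, answer in FEW_SHOT_EXEMPLARS:
        parts.append(f"Q: {question}\nA: {answer}")
    parts.append(f"Q: {user_query}\nA:")
    return "\n\n".join(parts)

def gendered_pronoun_counts(text: str) -> dict:
    """Crude bias probe: count masculine vs. feminine pronouns in a response."""
    words = text.lower().split()
    masculine = sum(words.count(w) for w in ("he", "him", "his"))
    feminine = sum(words.count(w) for w in ("she", "her", "hers"))
    return {"masculine": masculine, "feminine": feminine}

if __name__ == "__main__":
    prompt = build_prompt("Describe a successful author in literature.")
    print(prompt)  # send to Bard / ChatGPT / LLaMA2-Chat and score the reply
    # response = query_llm(prompt)   # placeholder: any chat-model API
    # print(gendered_pronoun_counts(response))
```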
Journal Introduction:
"The fundamental idea for interdisciplinarity derives," as our Chief Editor explains, "from an evolutionary necessity; namely, the need to confront and interpret complex systems… An entity that is studied can no longer be analyzed in terms of an object of just a single discipline, but as a contending hierarchy of components which could be studied under the rubric of multiple or variable branches of knowledge." Following this, we encourage authors to engage in interdisciplinary discussion of topics from the broad areas listed below and to apply interdisciplinary perspectives from other areas of the humanities and/or the sciences wherever applicable. We publish peer-reviewed original research papers and reviews in the interdisciplinary fields of the humanities. A non-exhaustive list is given below for convenience; see Areas of Discussion. We have a firm conviction in the Open Access philosophy and strongly support Open Access initiatives. Rupkatha has signed on to the Budapest Open Access Initiative. In conformity with this, our publication principles are primarily guided by the open nature of knowledge.