{"title":"SAGED:可定制公平校准的语言模型整体偏差基准管道","authors":"Xin Guan, Nathaniel Demchak, Saloni Gupta, Ze Wang, Ediz Ertekin Jr., Adriano Koshiyama, Emre Kazim, Zekun Wu","doi":"arxiv-2409.11149","DOIUrl":null,"url":null,"abstract":"The development of unbiased large language models is widely recognized as\ncrucial, yet existing benchmarks fall short in detecting biases due to limited\nscope, contamination, and lack of a fairness baseline. SAGED(-Bias) is the\nfirst holistic benchmarking pipeline to address these problems. The pipeline\nencompasses five core stages: scraping materials, assembling benchmarks,\ngenerating responses, extracting numeric features, and diagnosing with\ndisparity metrics. SAGED includes metrics for max disparity, such as impact\nratio, and bias concentration, such as Max Z-scores. Noticing that assessment\ntool bias and contextual bias in prompts can distort evaluation, SAGED\nimplements counterfactual branching and baseline calibration for mitigation.\nFor demonstration, we use SAGED on G20 Countries with popular 8b-level models\nincluding Gemma2, Llama3.1, Mistral, and Qwen2. With sentiment analysis, we\nfind that while Mistral and Qwen2 show lower max disparity and higher bias\nconcentration than Gemma2 and Llama3.1, all models are notably biased against\ncountries like Russia and (except for Qwen2) China. With further experiments to\nhave models role-playing U.S. (vice-/former-) presidents, we see bias amplifies\nand shifts in heterogeneous directions. Moreover, we see Qwen2 and Mistral not\nengage in role-playing, while Llama3.1 and Gemma2 role-play Trump notably more\nintensively than Biden and Harris, indicating role-playing performance bias in\nthese models.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":"30 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SAGED: A Holistic Bias-Benchmarking Pipeline for Language Models with Customisable Fairness Calibration\",\"authors\":\"Xin Guan, Nathaniel Demchak, Saloni Gupta, Ze Wang, Ediz Ertekin Jr., Adriano Koshiyama, Emre Kazim, Zekun Wu\",\"doi\":\"arxiv-2409.11149\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The development of unbiased large language models is widely recognized as\\ncrucial, yet existing benchmarks fall short in detecting biases due to limited\\nscope, contamination, and lack of a fairness baseline. SAGED(-Bias) is the\\nfirst holistic benchmarking pipeline to address these problems. The pipeline\\nencompasses five core stages: scraping materials, assembling benchmarks,\\ngenerating responses, extracting numeric features, and diagnosing with\\ndisparity metrics. SAGED includes metrics for max disparity, such as impact\\nratio, and bias concentration, such as Max Z-scores. Noticing that assessment\\ntool bias and contextual bias in prompts can distort evaluation, SAGED\\nimplements counterfactual branching and baseline calibration for mitigation.\\nFor demonstration, we use SAGED on G20 Countries with popular 8b-level models\\nincluding Gemma2, Llama3.1, Mistral, and Qwen2. With sentiment analysis, we\\nfind that while Mistral and Qwen2 show lower max disparity and higher bias\\nconcentration than Gemma2 and Llama3.1, all models are notably biased against\\ncountries like Russia and (except for Qwen2) China. With further experiments to\\nhave models role-playing U.S. 
(vice-/former-) presidents, we see bias amplifies\\nand shifts in heterogeneous directions. Moreover, we see Qwen2 and Mistral not\\nengage in role-playing, while Llama3.1 and Gemma2 role-play Trump notably more\\nintensively than Biden and Harris, indicating role-playing performance bias in\\nthese models.\",\"PeriodicalId\":501030,\"journal\":{\"name\":\"arXiv - CS - Computation and Language\",\"volume\":\"30 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computation and Language\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11149\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11149","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
SAGED: A Holistic Bias-Benchmarking Pipeline for Language Models with Customisable Fairness Calibration
The development of unbiased large language models is widely recognized as
crucial, yet existing benchmarks fall short in detecting biases due to limited
scope, contamination, and lack of a fairness baseline. SAGED(-Bias) is the
first holistic benchmarking pipeline to address these problems. The pipeline
encompasses five core stages: scraping materials, assembling benchmarks,
generating responses, extracting numeric features, and diagnosing with
disparity metrics. SAGED includes metrics for max disparity, such as the
impact ratio, and for bias concentration, such as the Max Z-score. Because
assessment-tool bias and contextual bias in prompts can distort evaluation,
SAGED implements counterfactual branching and baseline calibration to mitigate
them.
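
As a concrete reading of these metrics and of baseline calibration, a minimal
Python sketch follows; the exact formulas, the baseline value, and the
per-country scores are illustrative assumptions, not the paper's definitions.

    # A minimal sketch (not the authors' implementation) of the two metric
    # families named in the abstract, plus baseline calibration. Formulas,
    # baseline value, and scores below are illustrative assumptions.
    from statistics import mean, pstdev

    def calibrate(group_means: dict[str, float], baseline: float) -> dict[str, float]:
        """Baseline calibration: express each group's mean sentiment relative to
        a fairness baseline (e.g. sentiment of the scraped source material or of
        a counterfactual branch), so bias shared by all prompts cancels out."""
        return {g: v - baseline for g, v in group_means.items()}

    def impact_ratio(group_means: dict[str, float]) -> float:
        """Max-disparity metric: ratio of the lowest to the highest group score
        (scores assumed non-negative); 1.0 means perfectly even treatment."""
        return min(group_means.values()) / max(group_means.values())

    def max_z_score(group_means: dict[str, float]) -> tuple[str, float]:
        """Bias-concentration metric: the group deviating most from the
        cross-group mean, measured in cross-group standard deviations."""
        values = list(group_means.values())
        mu, sigma = mean(values), pstdev(values)
        if sigma == 0:                      # all groups treated identically
            return next(iter(group_means)), 0.0
        z = {g: abs(v - mu) / sigma for g, v in group_means.items()}
        worst = max(z, key=z.get)
        return worst, z[worst]

    # Toy usage on made-up per-country mean sentiment scores in [0, 1].
    raw = {"Russia": 0.41, "China": 0.47, "France": 0.63, "Japan": 0.66}
    print(calibrate(raw, baseline=0.55))  # negative values flag below-baseline groups
    print(impact_ratio(raw))              # 0.41 / 0.66 ≈ 0.62, far from parity
    print(max_z_score(raw))               # bias concentrates on Russia here
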
For demonstration, we apply SAGED to the G20 countries with popular 8B-level
models, including Gemma2, Llama3.1, Mistral, and Qwen2. Using sentiment
analysis, we find that while Mistral and Qwen2 show lower max disparity and
higher bias concentration than Gemma2 and Llama3.1, all models are notably
biased against countries such as Russia and (except for Qwen2) China. In
further experiments in which models role-play U.S. (vice-/former-) presidents,
bias amplifies and shifts in heterogeneous directions. Moreover, Qwen2 and
Mistral do not engage in role-playing, while Llama3.1 and Gemma2 role-play
Trump notably more intensively than Biden and Harris, indicating role-playing
performance bias in these models.
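
For the overall structure, a minimal sketch of the five-stage flow described
in the abstract follows; the function signatures and data shapes are
illustrative assumptions, not the SAGED API.

    # A minimal sketch of the five-stage flow (scrape -> assemble -> generate ->
    # extract -> diagnose). Signatures and data shapes are assumptions.
    from typing import Callable

    def run_pipeline(
        scrape:   Callable[[list[str]], dict[str, list[str]]],      # group -> source texts
        assemble: Callable[[dict[str, list[str]]], list[dict]],     # benchmark prompts
        generate: Callable[[list[dict]], list[dict]],               # prompts + model responses
        extract:  Callable[[list[dict]], dict[str, list[float]]],   # e.g. sentiment per group
        diagnose: Callable[[dict[str, list[float]]], dict[str, float]],  # disparity metrics
        groups:   list[str],
    ) -> dict[str, float]:
        """Chain the five stages; each stage's output is the next stage's input."""
        materials = scrape(groups)
        benchmark = assemble(materials)
        responses = generate(benchmark)
        features  = extract(responses)
        return diagnose(features)

Counterfactual branching would plausibly sit between the assemble and generate
stages, duplicating each prompt with the group term swapped so that every group
is evaluated on the same material.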