Guijin Son, Hyunwoo Ko, Hoyoung Lee, Yewon Kim, Seunghyeok Hong
{"title":"法学硕士担任法官与奖励模式:他们能做什么,不能做什么","authors":"Guijin Son, Hyunwoo Ko, Hoyoung Lee, Yewon Kim, Seunghyeok Hong","doi":"arxiv-2409.11239","DOIUrl":null,"url":null,"abstract":"LLM-as-a-Judge and reward models are widely used alternatives of\nmultiple-choice questions or human annotators for large language model (LLM)\nevaluation. Their efficacy shines in evaluating long-form responses, serving a\ncritical role as evaluators of leaderboards and as proxies to align LLMs via\nreinforcement learning. However, despite their popularity, their effectiveness\noutside of English remains largely unexplored. In this paper, we conduct a\ncomprehensive analysis on automated evaluators, reporting key findings on their\nbehavior in a non-English environment. First, we discover that English\nevaluation capabilities significantly influence language-specific capabilities,\noften more than the language proficiency itself, enabling evaluators trained in\nEnglish to easily transfer their skills to other languages. Second, we identify\ncritical shortcomings, where LLMs fail to detect and penalize errors, such as\nfactual inaccuracies, cultural misrepresentations, and the presence of unwanted\nlanguage. Finally, we release Kudge, the first non-English meta-evaluation\ndataset containing 5,012 human annotations in Korean.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":"50 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LLM-as-a-Judge & Reward Model: What They Can and Cannot Do\",\"authors\":\"Guijin Son, Hyunwoo Ko, Hoyoung Lee, Yewon Kim, Seunghyeok Hong\",\"doi\":\"arxiv-2409.11239\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"LLM-as-a-Judge and reward models are widely used alternatives of\\nmultiple-choice questions or human annotators for large language model (LLM)\\nevaluation. Their efficacy shines in evaluating long-form responses, serving a\\ncritical role as evaluators of leaderboards and as proxies to align LLMs via\\nreinforcement learning. However, despite their popularity, their effectiveness\\noutside of English remains largely unexplored. In this paper, we conduct a\\ncomprehensive analysis on automated evaluators, reporting key findings on their\\nbehavior in a non-English environment. First, we discover that English\\nevaluation capabilities significantly influence language-specific capabilities,\\noften more than the language proficiency itself, enabling evaluators trained in\\nEnglish to easily transfer their skills to other languages. Second, we identify\\ncritical shortcomings, where LLMs fail to detect and penalize errors, such as\\nfactual inaccuracies, cultural misrepresentations, and the presence of unwanted\\nlanguage. 
Finally, we release Kudge, the first non-English meta-evaluation\\ndataset containing 5,012 human annotations in Korean.\",\"PeriodicalId\":501030,\"journal\":{\"name\":\"arXiv - CS - Computation and Language\",\"volume\":\"50 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computation and Language\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11239\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11239","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
LLM-as-a-Judge & Reward Model: What They Can and Cannot Do
LLM-as-a-Judge and reward models are widely used alternatives to multiple-choice questions or human annotators for large language model (LLM) evaluation. They excel at evaluating long-form responses, serving a critical role as evaluators on leaderboards and as proxies for aligning LLMs via reinforcement learning. However, despite their popularity, their effectiveness outside of English remains largely unexplored. In this paper, we conduct a comprehensive analysis of automated evaluators, reporting key findings on their behavior in a non-English environment. First, we discover that English evaluation capabilities significantly influence language-specific capabilities, often more than language proficiency itself, enabling evaluators trained in English to transfer their skills to other languages with ease. Second, we identify critical shortcomings: LLMs fail to detect and penalize errors such as factual inaccuracies, cultural misrepresentations, and the presence of unwanted language. Finally, we release Kudge, the first non-English meta-evaluation dataset, containing 5,012 human annotations in Korean.
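
As a rough illustration of the LLM-as-a-Judge setup the abstract refers to, the sketch below shows a generic pointwise judging loop in Python. The prompt wording, the 1-5 scale, and the call_llm helper are assumptions for illustration only, not the paper's or Kudge's actual protocol.

    # Minimal sketch of a pointwise LLM-as-a-Judge loop (illustrative, not the paper's method).
    # `call_llm` is a hypothetical stand-in for whatever chat-completion client is in use.

    JUDGE_PROMPT = """You are an impartial evaluator. Rate the response below on a 1-5 scale
    for factual accuracy, cultural appropriateness, and whether it stays in the requested
    language. Return only the integer score.

    ### Question
    {question}

    ### Response
    {response}
    """

    def call_llm(prompt: str) -> str:
        """Hypothetical helper: send `prompt` to an LLM and return its text output."""
        raise NotImplementedError("wire this to your own LLM client")

    def judge(question: str, response: str) -> int:
        """Ask the judge model for a score and parse it; return -1 if unparsable."""
        raw = call_llm(JUDGE_PROMPT.format(question=question, response=response))
        try:
            return int(raw.strip())
        except ValueError:
            return -1  # unparsable judgment; in practice, re-query or discard

A meta-evaluation dataset such as Kudge would pair items like (question, response, human score) so that the scores produced by judge() can be compared against human annotations.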