{"title":"A fine-grained evaluation framework for language models: Combining pointwise grading and pairwise comparison","authors":"Yijie Li, Yuan Sun","doi":"10.1016/j.ipm.2025.104270","DOIUrl":null,"url":null,"abstract":"<div><div>Automated evaluation of Large Language Model (LLM) responses faces fundamental challenges: evaluation bias, protocol inflexibility, and the trade-off between quality and accessibility. Current paradigms either rely heavily on expensive proprietary models or suffer from systematic biases and limited evaluation modes. We introduce MELD, an 8B-parameter evaluation model designed to overcome these limitations via systematic bias mitigation and multi-protocol adaptability. MELD is trained on a comprehensive dataset covering eight categories and 50 subcategories, each with tailored evaluation criteria. It supports both pointwise grading and pairwise comparison through model merging, achieving robust performance across protocols. Experiments show MELD consistently outperforms open-source baselines and matches or surpasses GPT-4 in human alignment. Notably, MELD reduces position, length, and content biases. The framework includes a lightweight quantized deployment option, enabling high-quality evaluation in resource-constrained settings. This work provides a practical, cost-effective solution for LLM evaluation. Resources are available at: <span><span>https://github.com/Bound2-2/MELD-Eval</span></span>.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"63 1","pages":"Article 104270"},"PeriodicalIF":6.9000,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Processing & Management","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0306457325002110","RegionNum":1,"RegionCategory":"Management","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Automated evaluation of Large Language Model (LLM) responses faces fundamental challenges: evaluation bias, protocol inflexibility, and the trade-off between quality and accessibility. Current paradigms either rely heavily on expensive proprietary models or suffer from systematic biases and limited evaluation modes. We introduce MELD, an 8B-parameter evaluation model designed to overcome these limitations via systematic bias mitigation and multi-protocol adaptability. MELD is trained on a comprehensive dataset covering eight categories and 50 subcategories, each with tailored evaluation criteria. It supports both pointwise grading and pairwise comparison through model merging, achieving robust performance across protocols. Experiments show MELD consistently outperforms open-source baselines and matches or surpasses GPT-4 in human alignment. Notably, MELD reduces position, length, and content biases. The framework includes a lightweight quantized deployment option, enabling high-quality evaluation in resource-constrained settings. This work provides a practical, cost-effective solution for LLM evaluation. Resources are available at: https://github.com/Bound2-2/MELD-Eval.
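The abstract does not detail how the two evaluation protocols are combined; a minimal sketch of linear weight interpolation, one common model-merging scheme that could fuse a pointwise-grading specialist with a pairwise-comparison specialist. All names are hypothetical, and the checkpoints are represented as flat dicts of floats rather than real model weights:

```python
def merge_checkpoints(pointwise, pairwise, alpha=0.5):
    """Merge two specialist checkpoints by linear interpolation.

    pointwise, pairwise: dicts mapping parameter names to values
    (stand-ins for the state dicts of two fine-tuned models).
    alpha: weight given to the pointwise specialist.
    """
    assert pointwise.keys() == pairwise.keys(), "checkpoints must share parameters"
    return {name: alpha * pointwise[name] + (1.0 - alpha) * pairwise[name]
            for name in pointwise}

# Toy example: two three-parameter "models".
pointwise_ckpt = {"w0": 1.0, "w1": 2.0, "w2": -1.0}
pairwise_ckpt = {"w0": 3.0, "w1": 0.0, "w2": 1.0}
merged = merge_checkpoints(pointwise_ckpt, pairwise_ckpt, alpha=0.5)
# Each merged parameter is the elementwise average when alpha=0.5,
# e.g. merged["w0"] == 2.0.
```

In practice the same interpolation would be applied per-tensor to the two models' state dicts, with `alpha` chosen to balance performance across the two protocols.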
Journal overview:
Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology, marketing, and social computing.
We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.