{"title":"ComplexCodeEval: A Benchmark for Evaluating Large Code Models on More Complex Code","authors":"Jia Feng, Jiachen Liu, Cuiyun Gao, Chun Yong Chong, Chaozheng Wang, Shan Gao, Xin Xia","doi":"arxiv-2409.10280","DOIUrl":null,"url":null,"abstract":"In recent years, the application of large language models (LLMs) to\ncode-related tasks has gained significant attention. However, existing\nevaluation benchmarks often focus on limited scenarios, such as code generation\nor completion, which do not reflect the diverse challenges developers face in\nreal-world contexts. To address this, we introduce ComplexCodeEval, a benchmark\ndesigned to assess LCMs in various development tasks, including code\ngeneration, completion, API recommendation, and test case generation. It\nincludes 3,897 Java samples and 7,184 Python samples from high-star GitHub\nrepositories, each annotated with function signatures, docstrings, and API\nreferences to simulate real development environments. Our experiments across\nten LCMs reveal that context improves performance and that data leakage can\nlead to overestimation, highlighting the need for more accurate evaluations.","PeriodicalId":501278,"journal":{"name":"arXiv - CS - Software Engineering","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Software Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10280","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In recent years, the application of large language models (LLMs) to code-related tasks has gained significant attention. However, existing evaluation benchmarks often focus on limited scenarios, such as code generation or completion, which do not reflect the diverse challenges developers face in real-world contexts. To address this, we introduce ComplexCodeEval, a benchmark designed to assess large code models (LCMs) on a variety of development tasks, including code generation, completion, API recommendation, and test case generation. It includes 3,897 Java samples and 7,184 Python samples from high-star GitHub repositories, each annotated with function signatures, docstrings, and API references to simulate real development environments. Our experiments across ten LCMs reveal that context improves performance and that data leakage can lead to overestimation, highlighting the need for more accurate evaluations.
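
To make the abstract's description of the dataset concrete, below is a minimal sketch of what a single benchmark sample might look like, given only the fields the abstract mentions (function signature, docstring, and API references). The class and field names are illustrative assumptions, not the benchmark's actual schema.

```python
# Hypothetical sketch of a ComplexCodeEval-style sample record.
# Field names are assumptions based on the abstract, not the paper's real format.
from dataclasses import dataclass, field
from typing import List


@dataclass
class BenchmarkSample:
    repo: str                      # source GitHub repository (high-star project)
    language: str                  # "java" or "python"
    function_signature: str        # signature of the target function
    docstring: str                 # natural-language description of the function
    api_references: List[str] = field(default_factory=list)  # APIs used by the reference solution
    reference_code: str = ""       # ground-truth implementation used for evaluation


# Constructing one record (all values are made up for illustration).
sample = BenchmarkSample(
    repo="example-org/example-repo",
    language="python",
    function_signature="def parse_config(path: str) -> dict:",
    docstring="Parse a YAML configuration file and return its contents as a dict.",
    api_references=["yaml.safe_load", "pathlib.Path.read_text"],
    reference_code="...",
)
```

Annotations like these would let a task harness vary how much context (signature only, signature plus docstring, plus API references) is supplied to a model, which is the kind of context ablation the abstract's findings refer to.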