Bria Persaud, Ying-Chiang Jeffrey Lee, Jordan Despanie, Helin Hernandez, Henry Alexander Bradley, Sarah L Gebauer, Greg McKelvey

Rand Health Quarterly, vol. 12, no. 4, p. 12. Published 2025-09-29 (eCollection 2025/9/1). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12479004/pdf/
Automated Grading for Efficiently Evaluating the Dual-Use Biological Capabilities of Large Language Models.
Advances in the biological knowledge and reasoning capabilities of large language models (LLMs) have sparked interest in assessing the potential of LLMs to facilitate emerging biological risks. The authors evaluated LLMs' abilities to answer knowledge-based questions and to generate protocols explaining how to perform common laboratory techniques that could be used to create proxies for biological threats. Because LLM evaluation approaches that rely on human subject-matter experts are often costly and time-intensive, the authors introduced an automated, systematic, and scalable method for evaluating the ability of LLMs to generate protocols for laboratory techniques. The results confirm prior work indicating that LLMs possess knowledge of the biological sciences. This study is intended to inform evaluators of artificial intelligence systems, academics, technical experts, and policymakers on techniques for examining the risks of the convergence of LLMs and biological threats.
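The abstract does not specify how the automated grading works, but the general shape of such a pipeline can be sketched. The following is a minimal illustrative example, assuming a rubric-based scorer in which each LLM-generated protocol is checked against a list of required steps; the function name, rubric items, and scoring scheme are hypothetical and not the authors' actual method (which, in practice, would likely use a judge model rather than keyword matching).

```python
# Hypothetical sketch of automated rubric-based grading for LLM-generated
# lab protocols. The rubric items and scoring scheme below are illustrative
# assumptions, not the method described in the article.

def grade_protocol(protocol: str, rubric: list[str]) -> float:
    """Score a protocol as the fraction of rubric items (required
    steps or terms) it mentions, matched case-insensitively."""
    text = protocol.lower()
    hits = sum(1 for item in rubric if item.lower() in text)
    return hits / len(rubric)

# Example: a rubric for a generic, benign PCR protocol.
rubric = ["denaturation", "annealing", "extension", "thermocycler"]
protocol = (
    "Load samples into the thermocycler; run 30 cycles of "
    "denaturation at 95 C, annealing at 55 C, and extension at 72 C."
)
print(grade_protocol(protocol, rubric))  # 1.0
```

Replacing the keyword check with a grader model scoring each rubric item would give the automated, scalable evaluation the abstract describes, while keeping human experts only for spot-checking.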