Matching papers and reviewers at large conferences

Kevin Leyton-Brown, Mausam, Yatin Nandwani, Hedayat Zarkoob, Chris Cameron, Neil Newman, Dinesh Raghu

Artificial Intelligence, Volume 331, Article 104119. Published 25 March 2024. DOI: 10.1016/j.artint.2024.104119. Available at: https://www.sciencedirect.com/science/article/pii/S0004370224000559
Peer-reviewed conferences, the main publication venues in CS, rely critically on matching highly qualified reviewers for each paper. Because of the growing scale of these conferences, the tight timelines on which they operate, and a recent surge in explicitly dishonest behavior, there is now no alternative to performing this matching in an automated way. This paper introduces Large Conference Matching (LCM), a novel reviewer–paper matching approach that was recently deployed at the 35th AAAI Conference on Artificial Intelligence (AAAI 2021) and has since been adopted (wholly or partially) by other conferences, including ICML 2022, AAAI 2022-2024, and IJCAI 2022-2024. LCM has three main elements: (1) collecting and processing input data to identify problematic matches and generate reviewer–paper scores; (2) formulating and solving an optimization problem to find good reviewer–paper matchings; and (3) a two-phase reviewing process that shifts reviewing resources away from papers likely to be rejected and towards papers closer to the decision boundary. This paper also describes an evaluation of these innovations based on an extensive post-hoc analysis of real data, including a comparison with the matching algorithm used in AAAI's previous (2020) iteration, and supplements this with additional numerical experimentation.
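To make element (2) concrete, the following is a minimal sketch of how reviewer–paper matching can be cast as a linear program. It is an illustration under simplified assumptions, not the paper's actual LCM formulation; all names (`scores`, `demand`, `max_load`, `forbidden`) are hypothetical.

```python
# A minimal sketch of element (2): reviewer-paper matching as a linear
# program. This illustrates the core assignment structure under simplified
# assumptions; it is NOT the LCM implementation, and all names here
# (scores, demand, max_load, forbidden) are hypothetical.
import numpy as np
from scipy.optimize import linprog

def match_reviewers(scores, demand=3, max_load=4, forbidden=()):
    """scores: (R, P) array of reviewer-paper affinity scores.
    Returns a 0/1 (R, P) assignment matrix maximizing total score."""
    R, P = scores.shape
    n = R * P                      # one variable x[r, p] per pair

    c = -scores.ravel()            # linprog minimizes, so negate scores

    # Coverage: each paper p receives exactly `demand` reviewers.
    A_eq = np.zeros((P, n))
    for p in range(P):
        A_eq[p, p::P] = 1.0        # picks x[0, p], x[1, p], ..., x[R-1, p]
    b_eq = np.full(P, float(demand))

    # Load: each reviewer r is assigned at most `max_load` papers.
    A_ub = np.zeros((R, n))
    for r in range(R):
        A_ub[r, r * P:(r + 1) * P] = 1.0
    b_ub = np.full(R, float(max_load))

    # Problematic pairs (e.g., conflicts of interest) are forced to zero.
    bounds = [(0.0, 1.0)] * n
    for r, p in forbidden:
        bounds[r * P + p] = (0.0, 0.0)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    assert res.success, res.message
    # The constraint matrix is totally unimodular (each variable appears in
    # one paper row and one reviewer row), so the LP optimum is integral;
    # rounding only suppresses floating-point noise.
    return np.round(res.x).reshape(R, P).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = rng.random((5, 3))    # 5 reviewers, 3 papers
    print(match_reviewers(scores, demand=2, max_load=2, forbidden=[(0, 0)]))
```

At conference scale, the deployed system must additionally encode the problematic-match exclusions and reviewer–paper scores produced by element (1), along with many further constraints described in the paper; the sketch above shows only the core assignment structure.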
About the journal:
Artificial Intelligence (AIJ) welcomes papers covering a broad spectrum of AI topics, including cognition, automated reasoning, computer vision, machine learning, and more. Papers should demonstrate advancements in AI and propose innovative approaches to AI problems. The journal also accepts papers describing AI applications, provided they focus on how new methods improve performance rather than reiterating conventional approaches. In addition to regular papers, AIJ accepts Research Notes, Research Field Reviews, Position Papers, Book Reviews, and summary papers on AI challenges and competitions.