{"title":"链接预测中评价指标的不一致性。","authors":"Yilin Bi, Xinshan Jiao, Yan-Li Lee, Tao Zhou","doi":"10.1093/pnasnexus/pgae498","DOIUrl":null,"url":null,"abstract":"<p><p>Link prediction is a paradigmatic and challenging problem in network science, which aims to predict missing links, future links, and temporal links based on known topology. Along with the increasing number of link prediction algorithms, a critical yet previously ignored risk is that the evaluation metrics for algorithm performance are usually chosen at will. This paper implements extensive experiments on hundreds of real networks and 26 well-known algorithms, revealing significant inconsistency among evaluation metrics, namely different metrics probably produce remarkably different rankings of algorithms. Therefore, we conclude that any single metric cannot comprehensively or credibly evaluate algorithm performance. In terms of information content, we suggest the usage of at least two metrics: one is the area under the receiver operating characteristic curve, and the other is one of the following three candidates, say the area under the precision-recall curve, the area under the precision curve, and the normalized discounted cumulative gain. When the data are imbalanced, say the number of negative samples significantly outweighs the number of positive samples, the area under the generalized Receiver Operating Characteristic curve should also be used. In addition, as we have proved the essential equivalence of threshold-dependent metrics, if in a link prediction task, some specific thresholds are meaningful, we can consider any one threshold-dependent metric with those thresholds. This work completes a missing part in the landscape of link prediction, and provides a starting point toward a well-accepted criterion or standard to select proper evaluation metrics for link prediction.</p>","PeriodicalId":74468,"journal":{"name":"PNAS nexus","volume":"3 11","pages":"pgae498"},"PeriodicalIF":2.2000,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11574622/pdf/","citationCount":"0","resultStr":"{\"title\":\"Inconsistency among evaluation metrics in link prediction.\",\"authors\":\"Yilin Bi, Xinshan Jiao, Yan-Li Lee, Tao Zhou\",\"doi\":\"10.1093/pnasnexus/pgae498\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Link prediction is a paradigmatic and challenging problem in network science, which aims to predict missing links, future links, and temporal links based on known topology. Along with the increasing number of link prediction algorithms, a critical yet previously ignored risk is that the evaluation metrics for algorithm performance are usually chosen at will. This paper implements extensive experiments on hundreds of real networks and 26 well-known algorithms, revealing significant inconsistency among evaluation metrics, namely different metrics probably produce remarkably different rankings of algorithms. Therefore, we conclude that any single metric cannot comprehensively or credibly evaluate algorithm performance. In terms of information content, we suggest the usage of at least two metrics: one is the area under the receiver operating characteristic curve, and the other is one of the following three candidates, say the area under the precision-recall curve, the area under the precision curve, and the normalized discounted cumulative gain. 
When the data are imbalanced, say the number of negative samples significantly outweighs the number of positive samples, the area under the generalized Receiver Operating Characteristic curve should also be used. In addition, as we have proved the essential equivalence of threshold-dependent metrics, if in a link prediction task, some specific thresholds are meaningful, we can consider any one threshold-dependent metric with those thresholds. This work completes a missing part in the landscape of link prediction, and provides a starting point toward a well-accepted criterion or standard to select proper evaluation metrics for link prediction.</p>\",\"PeriodicalId\":74468,\"journal\":{\"name\":\"PNAS nexus\",\"volume\":\"3 11\",\"pages\":\"pgae498\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2024-11-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11574622/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"PNAS nexus\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1093/pnasnexus/pgae498\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/11/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"MULTIDISCIPLINARY SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"PNAS nexus","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/pnasnexus/pgae498","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/11/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Inconsistency among evaluation metrics in link prediction.
Link prediction is a paradigmatic and challenging problem in network science that aims to predict missing links, future links, and temporal links based on known topology. As the number of link prediction algorithms grows, a critical yet previously ignored risk is that the evaluation metrics for algorithm performance are usually chosen at will. This paper reports extensive experiments on hundreds of real networks and 26 well-known algorithms, revealing significant inconsistency among evaluation metrics: different metrics can produce remarkably different rankings of the same algorithms. We therefore conclude that no single metric can comprehensively or credibly evaluate algorithm performance. In terms of information content, we suggest using at least two metrics: the area under the receiver operating characteristic curve, plus one of three candidates, namely the area under the precision-recall curve, the area under the precision curve, or the normalized discounted cumulative gain. When the data are imbalanced, i.e. the number of negative samples greatly exceeds the number of positive samples, the area under the generalized receiver operating characteristic curve should also be used. In addition, since we prove that threshold-dependent metrics are essentially equivalent, if some specific thresholds are meaningful in a link prediction task, any one threshold-dependent metric evaluated at those thresholds can be considered. This work completes a missing part of the landscape of link prediction and provides a starting point toward a well-accepted criterion or standard for selecting proper evaluation metrics for link prediction.
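To make the kind of inconsistency described above concrete, the following minimal Python/NumPy sketch (illustrative only, not the authors' code) computes three of the threshold-free metrics named in the abstract: the area under the ROC curve, the area under the precision-recall curve (approximated by average precision), and the normalized discounted cumulative gain. It scores two hypothetical algorithms, A and B, on the same toy set of 3 positive and 7 negative candidate links; all scores are invented for illustration.

```python
# A minimal illustrative sketch (not the paper's code): compute three threshold-free
# link prediction metrics from scores assigned to positive (truly missing) and
# negative (non-existent) candidate links. All scores below are invented.
import numpy as np

def auc_roc(pos_scores, neg_scores):
    # Probability that a random positive outscores a random negative (ties count 0.5).
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    return np.mean(pos > neg) + 0.5 * np.mean(pos == neg)

def auc_pr(pos_scores, neg_scores):
    # Area under the precision-recall curve, approximated by average precision.
    scores = np.concatenate([pos_scores, neg_scores])
    labels = np.concatenate([np.ones(len(pos_scores)), np.zeros(len(neg_scores))])
    labels = labels[np.argsort(-scores)]
    precision_at_k = np.cumsum(labels) / np.arange(1, len(labels) + 1)
    return np.sum(precision_at_k * labels) / labels.sum()

def ndcg(pos_scores, neg_scores):
    # Normalized discounted cumulative gain with binary relevance.
    scores = np.concatenate([pos_scores, neg_scores])
    labels = np.concatenate([np.ones(len(pos_scores)), np.zeros(len(neg_scores))])
    discounts = np.log2(np.arange(2, len(labels) + 2))
    dcg = np.sum(labels[np.argsort(-scores)] / discounts)
    idcg = np.sum(np.sort(labels)[::-1] / discounts)
    return dcg / idcg

# Two hypothetical algorithms scored on the same 3 positive and 7 negative links.
pos_a, neg_a = [0.9, 0.6, 0.2], [0.8, 0.5, 0.4, 0.3, 0.1, 0.1, 0.1]
pos_b, neg_b = [0.7, 0.65, 0.6], [0.9, 0.5, 0.4, 0.3, 0.1, 0.1, 0.1]

for name, (p, n) in {"A": (pos_a, neg_a), "B": (pos_b, neg_b)}.items():
    print(name, f"AUC={auc_roc(p, n):.3f}", f"AUPR={auc_pr(p, n):.3f}", f"NDCG={ndcg(p, n):.3f}")
```

With these invented scores, the AUC ranks algorithm B above A, while both the AUPR and the NDCG rank A above B, which is precisely the type of metric disagreement the abstract reports at scale on real networks.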