Comparative performance of PD-L1 scoring by pathologists and AI algorithms
Markus Plass, Gheorghe-Emilian Olteanu, Sanja Dacic, Izidor Kern, Martin Zacharias, Helmut Popper, Junya Fukuoka, Sosuke Ishijima, Michaela Kargl, Christoph Murauer, Lipika Kalson, Luka Brcic
Histopathology, published 2025-02-17. DOI: 10.1111/his.15432
Abstract
Aim: This study compares the performance of pathologists and artificial intelligence (AI) algorithms in scoring PD-L1 expression in non-small cell lung carcinoma (NSCLC). Immune-checkpoint inhibitors have revolutionized NSCLC treatment, with PD-L1 expression, measured as the tumour proportion score (TPS), serving as a critical predictive biomarker for therapeutic response.
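For context, the TPS is the percentage of viable tumour cells showing PD-L1 membrane staining. A minimal sketch of the score and of the 1% and 50% cutoffs used in this study (the function names and the worked numbers below are illustrative, not part of the study's software):

```python
def tumour_proportion_score(pd_l1_positive_tumour_cells: int,
                            viable_tumour_cells: int) -> float:
    """TPS = PD-L1-positive tumour cells / all viable tumour cells x 100."""
    if viable_tumour_cells == 0:
        raise ValueError("at least one viable tumour cell is required")
    return 100.0 * pd_l1_positive_tumour_cells / viable_tumour_cells

def tps_category(tps: float) -> str:
    """Bin a TPS value at the 1% and 50% cutoffs examined in this study."""
    if tps < 1.0:
        return "<1%"
    if tps < 50.0:
        return "1-49%"
    return ">=50%"

# Example: 120 positive cells out of 400 viable tumour cells -> TPS 30%, "1-49%".
print(tps_category(tumour_proportion_score(120, 400)))
```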
Methods and results: In our analysis, 51 SP263-stained NSCLC cases were scored by six pathologists using light microscopy and whole-slide images (WSI), alongside evaluations by two commercially available software tools: uPath software (Roche) and the PD-L1 Lung Cancer TME application (Visiopharm). The study examined intra- and interobserver agreement among pathologists at TPS cutoffs of 1% and 50%, revealing moderate interobserver agreement (Fleiss' kappa 0.558) for TPS <1% and almost perfect agreement (Fleiss' kappa 0.873) for TPS ≥50%. Intraobserver consistency was high, with Cohen's kappa ranging from 0.726 to 1.0. Comparisons between the AI algorithms and the median pathologist scores showed fair agreement for uPath (Fleiss' kappa 0.354) and substantial agreement for the Visiopharm application (Fleiss' kappa 0.672) at the 50% TPS cutoff.
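As a sketch of how such agreement statistics are computed, the snippet below dichotomizes per-case TPS values from several raters at a cutoff, then derives Fleiss' kappa (multi-rater agreement) with statsmodels and Cohen's kappa (two scoring rounds of one rater) with scikit-learn. The toy scores and the second-round labels are invented for illustration and are not the study data:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Toy TPS values (%): rows = cases, columns = raters (invented for illustration).
tps = np.array([
    [ 0,  0,  2,  0],
    [60, 55, 70, 65],
    [ 5,  1,  0,  3],
    [45, 50, 52, 48],
    [90, 85, 95, 88],
])

def kappa_at_cutoff(tps_matrix: np.ndarray, cutoff: float) -> float:
    """Fleiss' kappa for multi-rater agreement after dichotomizing at a TPS cutoff."""
    binary = (tps_matrix >= cutoff).astype(int)  # 1 = at/above the cutoff
    table, _ = aggregate_raters(binary)          # cases x categories count table
    return fleiss_kappa(table, method="fleiss")

print("Fleiss' kappa at 1%: ", kappa_at_cutoff(tps, 1.0))
print("Fleiss' kappa at 50%:", kappa_at_cutoff(tps, 50.0))

# Intraobserver agreement: one rater's two scoring rounds, dichotomized at 50%.
round_1 = (tps[:, 0] >= 50).astype(int)
round_2 = np.array([0, 1, 0, 1, 1])              # hypothetical re-read of the same cases
print("Cohen's kappa:", cohen_kappa_score(round_1, round_2))
```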
Conclusion: These results indicate that while there is strong interobserver concordance among pathologists at higher TPS levels, the performance of AI algorithms is less consistent. The study underscores the need for further refinement of AI tools to match the reliability of expert human evaluation, particularly in critical clinical decision-making contexts.
About the journal
Histopathology is an international journal intended to be of practical value to surgical and diagnostic histopathologists, and to investigators of human disease who employ histopathological methods. Our primary purpose is to publish advances in pathology, in particular those applicable to clinical practice and contributing to the better understanding of human disease.