Marit Bernhardt, Leonie Weinhold, Christine Sanders, Oliver Hommerding, Jan-Frederic Lau, Marieta Toma, Verena Tischler, Matthias Schmid, Tomasz Zienkiewicz, Ralf Hildenbrand, Peter Gerlach, Hui Zhou, Martin Braun, Gunnar Müller, Erich Sieber, Christian Marko, Glen Kristiansen
{"title":"将病理质量圈中的 Ki-67 评分作为评估观察者间变异性的工具进行点对点验证:我们比想象的更好吗?","authors":"Marit Bernhardt, Leonie Weinhold, Christine Sanders, Oliver Hommerding, Jan-Frederic Lau, Marieta Toma, Verena Tischler, Matthias Schmid, Tomasz Zienkiewicz, Ralf Hildenbrand, Peter Gerlach, Hui Zhou, Martin Braun, Gunnar Müller, Erich Sieber, Christian Marko, Glen Kristiansen","doi":"10.1111/apm.13451","DOIUrl":null,"url":null,"abstract":"<p>Ki-67, a nuclear protein expressed in all stages of cellular proliferation, is a valuable tool to assess tumor proliferation and has been linked to more aggressive tumor behavior. However, interlaboratory staining heterogeneity and inter-observer variability challenge its reproducibility. Round Robin tests are a suitable tool to standardize and harmonize immunohistochemical and molecular analyses in histopathology. The study investigates the interrater and interlaboratory reproducibility of Ki-67-scoring using both manual and automated approaches. Unstained TMA slides comprising diverse tumor types (breast cancer, neuroendocrine tumors, lymphomas, and head and neck squamous cell carcinoma) were distributed to six pathology laboratories, each employing their routine staining protocols. Manual and automated scoring methods were applied, and interrater and interlaboratory agreement assessed using intraclass correlation coefficients (ICC). The results highlight good-to-excellent reliability overall, with automated scoring demonstrating higher consistency (ICC 0.955) than manual scoring (ICC 0.871). Results were more variable when looking at the individual entities. Reliability remained good for lymphomas (ICC 0.878) and breast cancer (ICC 0.784) and was poor in well-differentiated neuroendocrine tumors (ICC 0.354). This study clearly advocates standardized practices and training to ensure consistency in Ki-67-assessment, and it demonstrates that this can be achieved in a peer-to-peer approach in local quality-circles.</p>","PeriodicalId":8167,"journal":{"name":"Apmis","volume":"132 10","pages":"718-727"},"PeriodicalIF":2.2000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/apm.13451","citationCount":"0","resultStr":"{\"title\":\"Peer-to-peer validation of Ki-67 scoring in a pathology quality circle as a tool to assess interobserver variability: are we better than we thought?\",\"authors\":\"Marit Bernhardt, Leonie Weinhold, Christine Sanders, Oliver Hommerding, Jan-Frederic Lau, Marieta Toma, Verena Tischler, Matthias Schmid, Tomasz Zienkiewicz, Ralf Hildenbrand, Peter Gerlach, Hui Zhou, Martin Braun, Gunnar Müller, Erich Sieber, Christian Marko, Glen Kristiansen\",\"doi\":\"10.1111/apm.13451\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Ki-67, a nuclear protein expressed in all stages of cellular proliferation, is a valuable tool to assess tumor proliferation and has been linked to more aggressive tumor behavior. However, interlaboratory staining heterogeneity and inter-observer variability challenge its reproducibility. Round Robin tests are a suitable tool to standardize and harmonize immunohistochemical and molecular analyses in histopathology. The study investigates the interrater and interlaboratory reproducibility of Ki-67-scoring using both manual and automated approaches. 
Unstained TMA slides comprising diverse tumor types (breast cancer, neuroendocrine tumors, lymphomas, and head and neck squamous cell carcinoma) were distributed to six pathology laboratories, each employing their routine staining protocols. Manual and automated scoring methods were applied, and interrater and interlaboratory agreement assessed using intraclass correlation coefficients (ICC). The results highlight good-to-excellent reliability overall, with automated scoring demonstrating higher consistency (ICC 0.955) than manual scoring (ICC 0.871). Results were more variable when looking at the individual entities. Reliability remained good for lymphomas (ICC 0.878) and breast cancer (ICC 0.784) and was poor in well-differentiated neuroendocrine tumors (ICC 0.354). This study clearly advocates standardized practices and training to ensure consistency in Ki-67-assessment, and it demonstrates that this can be achieved in a peer-to-peer approach in local quality-circles.</p>\",\"PeriodicalId\":8167,\"journal\":{\"name\":\"Apmis\",\"volume\":\"132 10\",\"pages\":\"718-727\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2024-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/apm.13451\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Apmis\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/apm.13451\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"IMMUNOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Apmis","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/apm.13451","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"IMMUNOLOGY","Score":null,"Total":0}
Peer-to-peer validation of Ki-67 scoring in a pathology quality circle as a tool to assess interobserver variability: are we better than we thought?
Ki-67, a nuclear protein expressed in all active phases of the cell cycle, is a valuable marker of tumor proliferation and has been linked to more aggressive tumor behavior. However, interlaboratory staining heterogeneity and interobserver variability challenge its reproducibility. Round Robin tests are a suitable tool to standardize and harmonize immunohistochemical and molecular analyses in histopathology. This study investigates the interrater and interlaboratory reproducibility of Ki-67 scoring using both manual and automated approaches. Unstained tissue microarray (TMA) slides comprising diverse tumor types (breast cancer, neuroendocrine tumors, lymphomas, and head and neck squamous cell carcinoma) were distributed to six pathology laboratories, each employing its routine staining protocol. Manual and automated scoring methods were applied, and interrater and interlaboratory agreement were assessed using intraclass correlation coefficients (ICC). The results show good-to-excellent reliability overall, with automated scoring demonstrating higher consistency (ICC 0.955) than manual scoring (ICC 0.871). Results were more variable across the individual tumor entities: reliability remained good for lymphomas (ICC 0.878) and breast cancer (ICC 0.784) but was poor for well-differentiated neuroendocrine tumors (ICC 0.354). This study clearly advocates standardized practices and training to ensure consistency in Ki-67 assessment, and it demonstrates that this can be achieved through a peer-to-peer approach in local quality circles.
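The abstract does not specify which ICC variant was used to quantify agreement, so the sketch below is only a minimal illustration of how such a coefficient is commonly computed: a two-way random-effects, absolute-agreement, single-rater ICC(2,1) in Python, with a hypothetical matrix of Ki-67 percentages (rows = TMA cores, columns = raters or laboratories). The function name and the example data are illustrative and do not come from the study.

```python
import numpy as np

def icc2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: array of shape (n_targets, k_raters), e.g. Ki-67 percentages
    assigned to the same TMA cores by several raters or laboratories.
    """
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)   # per-core means
    col_means = scores.mean(axis=0)   # per-rater means

    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)   # between cores
    msc = n * np.sum((col_means - grand_mean) ** 2) / (k - 1)   # between raters
    sse = np.sum((scores - row_means[:, None] - col_means[None, :] + grand_mean) ** 2)
    mse = sse / ((n - 1) * (k - 1))                              # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: 5 cores scored by 3 raters (Ki-67 in %)
scores = np.array([
    [12.0, 14.0, 11.0],
    [55.0, 60.0, 58.0],
    [ 3.0,  2.0,  4.0],
    [80.0, 75.0, 78.0],
    [25.0, 30.0, 27.0],
])
print(f"ICC(2,1) = {icc2_1(scores):.3f}")
```

Under commonly cited benchmarks (e.g., Koo & Li, 2016), ICC values below 0.5 indicate poor, 0.5–0.75 moderate, 0.75–0.9 good, and above 0.9 excellent reliability, which is the scale on which the reported values (0.955 automated, 0.871 manual, 0.354 for well-differentiated neuroendocrine tumors) can be read.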
Journal description:
APMIS, formerly Acta Pathologica, Microbiologica et Immunologica Scandinavica, has been published since 1924 by the Scandinavian Societies for Medical Microbiology and Pathology as a non-profit-making scientific journal.