{"title":"Non-diagnostic time in digital pathology: An empirical study over 10 years","authors":"Aleksandar Vodovnik","doi":"10.1016/j.jpi.2024.100393","DOIUrl":"10.1016/j.jpi.2024.100393","url":null,"abstract":"","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"15 ","pages":"Article 100393"},"PeriodicalIF":0.0,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2153353924000324/pdfft?md5=215132a8d517d7691de823ffcf6bf232&pid=1-s2.0-S2153353924000324-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141963948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Engineered feature embeddings meet deep learning: A novel strategy to improve bone marrow cell classification and model transparency","authors":"Jonathan Tarquino , Jhonathan Rodríguez , David Becerra , Lucia Roa-Peña , Eduardo Romero","doi":"10.1016/j.jpi.2024.100390","DOIUrl":"10.1016/j.jpi.2024.100390","url":null,"abstract":"<div><p>Cytomorphology evaluation of bone marrow cells is the initial step in diagnosing different hematological diseases. This assessment is still performed manually by trained specialists, who can become a bottleneck in the clinical process. Deep learning algorithms are a promising approach to automating this bone marrow cell evaluation. These artificial intelligence models have focused on limited cell subtypes, mainly associated with a particular disease, and are frequently presented as black boxes. The strategy introduced here presents an engineered feature representation, the region-attention embedding, which improves deep learning classification performance on a cytomorphology dataset with 21 bone marrow cell subtypes. This embedding is built by organizing cytology features within a square matrix, distributing them according to pre-segmented cell regions, i.e., cytoplasm, nucleus, and whole cell. This novel cell image representation, designed to preserve spatial/regional relations, is used as input to the network. Combining the region-attention embedding with deep learning networks (Xception and ResNet50) provides local relevance associated with image regions, adding interpretable information to the prediction. Additionally, this approach is evaluated on the public database with the largest number of cell subtypes (21) under a thorough evaluation scheme: three iterations of a 3-fold cross-validation performed on 80% of the images (<em>n</em> = 89,484), followed by testing on an unseen set composed of the remaining 20% of the images (<em>n</em> = 22,371). 
This evaluation demonstrates that the introduced strategy outperforms previously published approaches on an equivalent validation set, with an F1-score of 0.82, and presents competitive results on the unseen data partition, with an F1-score of 0.56.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"15 ","pages":"Article 100390"},"PeriodicalIF":0.0,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2153353924000294/pdfft?md5=87a5b2e97447248282a9f8d40bb281e3&pid=1-s2.0-S2153353924000294-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141629985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
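The region-attention embedding described in the abstract above — cytology features laid out in a square matrix by pre-segmented cell region — can be sketched as follows. The function name, feature ordering, and 16×16 matrix size are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def region_attention_embedding(cyto_feats, nucleus_feats, cell_feats, size=16):
    """Arrange per-region cytology features into a square matrix.

    Features are concatenated in a fixed region order so that each part of
    the matrix consistently holds features from one cell region, preserving
    spatial/regional relations for the downstream CNN.
    """
    feats = np.concatenate([cyto_feats, nucleus_feats, cell_feats]).astype(float)
    emb = np.zeros(size * size)
    emb[: len(feats)] = feats          # zero-pad the remainder of the square
    return emb.reshape(size, size)     # 2D "image" fed to Xception/ResNet50
```

The fixed placement is the point: a CNN can then attribute relevance to matrix regions, and those regions map back to cytoplasm, nucleus, or whole-cell features.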
{"title":"Validation of AI-assisted ThinPrep® Pap test screening using the GeniusTM Digital Diagnostics System","authors":"Richard L. Cantley , Xin Jing , Brian Smola , Wei Hao , Sarah Harrington , Liron Pantanowitz","doi":"10.1016/j.jpi.2024.100391","DOIUrl":"10.1016/j.jpi.2024.100391","url":null,"abstract":"<div><p>Advances in whole-slide imaging and artificial intelligence present opportunities for improvement in Pap test screening. To date, there have been limited studies published regarding how best to validate newer AI-based digital systems for screening Pap tests in clinical practice. In this study, we validated the Genius™ Digital Diagnostics System (Hologic) by comparing the performance to traditional manual light microscopic diagnosis of ThinPrep<strong>®</strong> Pap test slides. A total of 319 ThinPrep<strong>®</strong> Pap test cases were prospectively assessed by six cytologists and three cytopathologists by light microscopy and digital evaluation and the results compared to the original ground truth Pap test diagnosis. Concordance with the original diagnosis was significantly different by digital and manual light microscopy review when comparing across: (i) exact Bethesda System diagnostic categories (62.1% vs 55.8%, respectively, <em>p</em> = 0.014), (ii) condensed diagnostic categories (76.8% vs 71.5%, respectively, <em>p</em> = 0.027), and (iii) condensed diagnoses based on clinical management (71.5% vs 65.2%, respectively, <em>p</em> = 0.017). Time to evaluate cases was shorter for digital (M = 3.2 min, SD = 2.2) compared to manual (M = 5.9 min, SD = 3.1) review (t(352) = 19.44, <em>p</em> < 0.001, Cohen's d = 1.035, 95% CI [0.905, 1.164]). 
Not only did our validation study demonstrate that AI-based digital Pap test evaluation improved diagnostic accuracy and reduced screening time compared to light microscopy, but participants also reported a positive experience using this system.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"15 ","pages":"Article 100391"},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2153353924000300/pdfft?md5=f678b76ba4ddf0bb5fbfba56b65df94c&pid=1-s2.0-S2153353924000300-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141639228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
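The effect size reported in the abstract above (Cohen's d ≈ 1.035 for manual vs digital review times) can be reproduced approximately from the summary statistics. This is the generic Cohen's d with a pooled SD for equal-sized groups; the study's exact (possibly paired) formula is not assumed:

```python
import math

def cohens_d(mean1, sd1, mean2, sd2):
    """Standardized mean difference for two groups of equal size."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)  # pooled standard deviation
    return (mean1 - mean2) / pooled_sd

# Manual review (M = 5.9 min, SD = 3.1) vs digital review (M = 3.2 min, SD = 2.2)
d = cohens_d(5.9, 3.1, 3.2, 2.2)
```

The simple pooled-SD estimate lands close to the reported value, which is consistent with a large effect (d > 0.8) of digital review on screening time.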
{"title":"An explainable AI-based blood cell classification using optimized convolutional neural network","authors":"Oahidul Islam , Md Assaduzzaman , Md Zahid Hasan","doi":"10.1016/j.jpi.2024.100389","DOIUrl":"10.1016/j.jpi.2024.100389","url":null,"abstract":"<div><p>White blood cells (WBCs) are a vital component of the immune system. The efficient and precise classification of WBCs is crucial for medical professionals to diagnose diseases accurately. This study presents an enhanced convolutional neural network (CNN) for detecting blood cells, supported by various image pre-processing techniques. Pre-processing steps such as padding, thresholding, erosion, dilation, and masking are utilized to minimize noise and enhance features. Additionally, performance is further improved by experimenting with various architectural structures and hyperparameters to optimize the proposed model. A comparative evaluation is conducted against three transfer learning models: Inception V3, MobileNetV2, and DenseNet201. The results indicate that the proposed model outperforms existing models, achieving a testing accuracy of 99.12%, precision of 99%, and F1-score of 99%. In addition, we utilized SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations) techniques to improve the interpretability of the proposed model, providing valuable insights into how the model makes decisions. Furthermore, the proposed model has been explained using Grad-CAM and Grad-CAM++, which are class-discriminative localization approaches, to improve trust and transparency. Grad-CAM++ performed slightly better than Grad-CAM in localizing the predicted area. 
Finally, the most efficient model has been integrated into an end-to-end (E2E) system, accessible through both web and Android platforms, for medical professionals to classify blood cells.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"15 ","pages":"Article 100389"},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2153353924000282/pdfft?md5=357d6d2314681f04709e94998615c5a1&pid=1-s2.0-S2153353924000282-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141708134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
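The pre-processing chain named in the abstract above (thresholding, erosion, dilation, masking, padding) can be sketched with SciPy's morphology operations. This is a stand-in for whatever implementation the authors used; threshold value, structuring element, and pad width are illustrative defaults:

```python
import numpy as np
from scipy import ndimage

def preprocess_cell_image(img, thresh=0.5, pad=2):
    """Denoise and isolate a cell in a grayscale image (values in [0, 1])."""
    mask = img > thresh                      # thresholding: foreground vs background
    mask = ndimage.binary_erosion(mask)      # erosion: drop small speckle noise
    mask = ndimage.binary_dilation(mask)     # dilation: restore the eroded boundary
    masked = np.where(mask, img, 0.0)        # masking: zero out background pixels
    return np.pad(masked, pad)               # padding: uniform border for the CNN
```

Erosion followed by dilation (a morphological opening) removes isolated noise pixels while roughly preserving the cell's shape.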
{"title":"Towards interactive AI-authoring with prototypical few-shot classifiers in histopathology","authors":"Petr Kuritcyn , Rosalie Kletzander , Sophia Eisenberg , Thomas Wittenberg , Volker Bruns , Katja Evert , Felix Keil , Paul K. Ziegler , Katrin Bankov , Peter Wild , Markus Eckstein , Arndt Hartmann , Carol I. Geppert , Michaela Benz","doi":"10.1016/j.jpi.2024.100388","DOIUrl":"10.1016/j.jpi.2024.100388","url":null,"abstract":"<div><p>A vast multitude of tasks in histopathology could potentially benefit from the support of artificial intelligence (AI). Many examples have been shown in the literature, and the first commercial products with FDA or CE-IVDR clearance are available. However, two key challenges remain: (1) a scarcity of thoroughly annotated images and the laboriousness of producing them, and (2) the creation of robust models that can cope with the data heterogeneity in the field (domain generalization). In this work, we investigate how the combination of prototypical few-shot classification models and data augmentation can address both of these challenges. Based on annotated data sets that include multiple centers, multiple scanners, and two tumor entities, we examine the robustness and the adaptability of few-shot classifiers in multiple scenarios. We demonstrate that data from one scanner and one site are sufficient to train robust few-shot classification models by applying domain-specific data augmentation. The models achieved classification performance of around 90% on a multiscanner and multicenter database, which is on par with the accuracy achieved on the primary single-center, single-scanner data. Various convolutional neural network (CNN) architectures can be used for feature extraction in the few-shot model. A comparison of nine state-of-the-art architectures showed that EfficientNet B0 provides the best trade-off between accuracy and inference time. 
The classification of prototypical few-shot models relies directly on class prototypes derived from example images of each class. Therefore, we investigated the influence of prototypes originating from images from different scanners and evaluated their performance on the multiscanner database as well. Again, our few-shot model showed stable performance, with an average absolute deviation in accuracy of 1.8 percentage points compared to the primary prototypes. Finally, we examined the adaptability to a new tumor entity: classification of tissue sections containing urothelial carcinoma into normal, tumor, and necrotic regions. Only three annotations per subclass (e.g., muscle and adipose tissue are subclasses of normal tissue) were provided to adapt the few-shot model, which obtained an overall accuracy of 93.6%. These results demonstrate that prototypical few-shot classification is an ideal technology for realizing an interactive AI-authoring system, as it requires only a few annotations and can be adapted to new tasks without retraining the underlying feature extraction CNN, which would in turn require hyper-parameter selection based on data science expertise. Similarly, it can be regarded as a guided annotation system. 
To this end, we realized a workflow and user interface that targets non-technical users.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"15 ","pages":"Article 100388"},"PeriodicalIF":0.0,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2153353924000270/pdfft?md5=05adcd36f07ac4f905fe1929289c6160&pid=1-s2.0-S2153353924000270-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141415124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
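Prototypical few-shot classification, as described in the record above, reduces to two steps: average the embeddings of the few annotated examples per class to obtain prototypes, then assign each new patch to the nearest prototype. A minimal NumPy sketch with Euclidean distance (the paper's distance metric and feature extractor are not assumed here):

```python
import numpy as np

def build_prototypes(embeddings, labels):
    """Class prototype = mean embedding of that class's annotated examples."""
    classes = np.unique(labels)
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(query, classes, protos):
    """Assign a query embedding to the class with the nearest prototype."""
    dists = np.linalg.norm(protos - query, axis=1)
    return classes[np.argmin(dists)]
```

Because adapting to a new task only means computing new prototypes (no CNN retraining), this matches the interactive-authoring use case: a pathologist supplies a handful of annotations and the classifier updates immediately.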
{"title":"Masked pre-training of transformers for histology image analysis","authors":"Shuai Jiang , Liesbeth Hondelink , Arief A. Suriawinata , Saeed Hassanpour","doi":"10.1016/j.jpi.2024.100386","DOIUrl":"https://doi.org/10.1016/j.jpi.2024.100386","url":null,"abstract":"<div><p>In digital pathology, whole-slide images (WSIs) are widely used for applications such as cancer diagnosis and prognosis prediction. Vision transformer (ViT) models have recently emerged as a promising method for encoding large regions of WSIs while preserving spatial relationships among patches. However, due to the large number of model parameters and limited labeled data, applying transformer models to WSIs remains challenging. In this study, we propose a pretext task to train the transformer model in a self-supervised manner. Our model, MaskHIT, uses the transformer output to reconstruct masked patches, measured by a contrastive loss. We pre-trained the MaskHIT model using over 7000 WSIs from TCGA and extensively evaluated its performance in multiple experiments, covering survival prediction, cancer subtype classification, and grade prediction tasks. Our experiments demonstrate that the pre-training procedure enables context-aware understanding of WSIs, facilitates the learning of representative histological features based on patch positions and visual patterns, and is essential for the ViT model to achieve optimal results on WSI-level tasks. The pre-trained MaskHIT surpasses various multiple instance learning approaches by 3% and 2% on survival prediction and cancer subtype classification tasks, respectively, and also outperforms recent state-of-the-art transformer-based methods. 
Finally, a comparison of the attention maps generated by the MaskHIT model with pathologists' annotations indicates that the model can accurately identify clinically relevant histological structures on the whole slide for each task.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"15 ","pages":"Article 100386"},"PeriodicalIF":0.0,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2153353924000257/pdfft?md5=3dfddd9f11d8384fd0c39d65dbfab6b4&pid=1-s2.0-S2153353924000257-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141434521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
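The pretext task described above — mask a subset of patch embeddings, then score the transformer's reconstructions against the true patches with a contrastive loss — can be sketched as below. The zero mask token, 25% ratio, and temperature are illustrative defaults, not MaskHIT's actual hyper-parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_patches(patch_emb, mask_ratio=0.25):
    """Zero out a random subset of patch embeddings (a stand-in [MASK] token)."""
    n = patch_emb.shape[0]
    idx = rng.choice(n, size=max(1, int(n * mask_ratio)), replace=False)
    corrupted = patch_emb.copy()
    corrupted[idx] = 0.0
    return corrupted, idx

def contrastive_loss(pred, target, temperature=0.1):
    """InfoNCE-style loss: each reconstruction should match its own target
    patch among all target patches in the batch (the negatives)."""
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    target = target / np.linalg.norm(target, axis=1, keepdims=True)
    logits = pred @ target.T / temperature
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                 # correct match on the diagonal
```

During pre-training the transformer would consume `corrupted` (plus positional information) and be optimized so its outputs at the masked positions score highest against the original patches.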
{"title":"Smartphone-based machine learning model for real-time assessment of medical kidney biopsy","authors":"Odianosen J. Eigbire-Molen , Clarissa A. Cassol , Daniel J. Kenan , Johnathan O.H. Napier , Lyle J. Burdine , Shana M. Coley , Shree G. Sharma","doi":"10.1016/j.jpi.2024.100385","DOIUrl":"https://doi.org/10.1016/j.jpi.2024.100385","url":null,"abstract":"<div><h3>Background</h3><p>Kidney biopsy is the gold standard for diagnosing medical renal diseases, but the accuracy of the diagnosis greatly depends on the quality of the biopsy specimen, particularly the amount of renal cortex obtained. Inadequate biopsies, characterized by insufficient cortex or predominant medulla, can lead to inconclusive or incorrect diagnoses and repeat biopsies. Unfortunately, there has been a concerning increase in the rate of inadequate kidney biopsies, and not all medical centers have access to trained professionals who can assess biopsy adequacy in real time. In response to this challenge, we aimed to develop a machine learning model capable of assessing the cortex percentage of each biopsy pass using smartphone images of the kidney biopsy tissue at the time of biopsy.</p></div><div><h3>Methods</h3><p>A total of 747 kidney biopsy cores and corresponding smartphone macro images were collected from five unused deceased donor kidneys. Each core was imaged, formalin-fixed, sectioned, and stained with Periodic acid–Schiff (PAS) to determine cortex percentage. The fresh unfixed core images were captured using the macro camera on an iPhone 13 Pro. Two experienced renal pathologists independently reviewed the PAS-stained sections to determine the cortex percentage. For the purpose of this study, biopsies with less than 30% cortex were labeled as inadequate, while those with 30% or more cortex were classified as adequate. The dataset was divided into training (<em>n</em>=643), validation (<em>n</em>=30), and test (<em>n</em>=74) sets. 
Preprocessing steps involved converting High-Efficiency Image Container (HEIC) iPhone images to JPEG, normalization, and renal tissue segmentation using a U-Net deep learning model. Subsequently, a classification deep learning model was trained on the renal tissue region of interest and the corresponding class label.</p></div><div><h3>Results</h3><p>The deep learning model achieved an accuracy of 85% on the training data. On the independent test dataset, the model exhibited an accuracy of 81%. For inadequate samples in the test dataset, the model showed a sensitivity of 71%, suggesting its capability to identify cases with inadequate cortical representation. The area under the receiver-operating characteristic curve (AUC-ROC) on the test dataset was 0.80.</p></div><div><h3>Conclusion</h3><p>We successfully developed and tested a machine learning model for classifying smartphone images of kidney biopsies as either adequate or inadequate, based on the amount of cortex determined by expert renal pathologists. The model's promising results suggest its potential as a smartphone application to assist real-time assessment of kidney biopsy tissue, particularly in settings with limited access to trained personnel. 
Further refinements and validations are warranted to optimize the model's performance.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"15 ","pages":"Article 100385"},"PeriodicalIF":0.0,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2153353924000245/pdfft?md5=aa0cdf6fbf647b60d197599f7a7fc32d&pid=1-s2.0-S2153353924000245-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141541797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
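The labeling rule from the Methods above is a simple threshold on the pathologist-determined cortex percentage; the model then predicts this binary label from the smartphone photo. The rule itself:

```python
def biopsy_adequacy(cortex_percent, threshold=30.0):
    """Label a biopsy core by its cortex percentage.

    Study cutoff: cores with less than 30% cortex are inadequate,
    those with 30% or more are adequate.
    """
    return "adequate" if cortex_percent >= threshold else "inadequate"
```

Any classifier trained against these labels inherits the 30% cutoff, so the reported sensitivity (71% on inadequate cores) is relative to this ground-truth rule.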
{"title":"Joining forces for pathology diagnostics with AI assistance: The EMPAIA initiative","authors":"Norman Zerbe , Lars Ole Schwen , Christian Geißler , Katja Wiesemann , Tom Bisson , Peter Boor , Rita Carvalho , Michael Franz , Christoph Jansen , Tim-Rasmus Kiehl , Björn Lindequist , Nora Charlotte Pohlan , Sarah Schmell , Klaus Strohmenger , Falk Zakrzewski , Markus Plass , Michael Takla , Tobias Küster , André Homeyer , Peter Hufnagl","doi":"10.1016/j.jpi.2024.100387","DOIUrl":"https://doi.org/10.1016/j.jpi.2024.100387","url":null,"abstract":"<div><p>Over the past decade, artificial intelligence (AI) methods in pathology have advanced substantially. However, integration into routine clinical practice has been slow due to numerous challenges, including technical and regulatory hurdles in translating research results into clinical diagnostic products and the lack of standardized interfaces.</p><p>The open and vendor-neutral EMPAIA initiative addresses these challenges. Here, we provide an overview of EMPAIA's achievements and lessons learned. EMPAIA integrates various stakeholders of the pathology AI ecosystem, i.e., pathologists, computer scientists, and industry. In close collaboration, we developed technical interoperability standards, recommendations for AI testing and product development, and explainability methods. We implemented the modular and open-source EMPAIA Platform and successfully integrated 14 AI-based image analysis apps from eight different vendors, demonstrating how different apps can use a single standardized interface. We prioritized requirements and evaluated the use of AI in real clinical settings with 14 different pathology laboratories in Europe and Asia. In addition to technical developments, we created a forum for all stakeholders to share information and experiences on digital pathology and AI. 
Commercial, clinical, and academic stakeholders can now adopt EMPAIA's common open-source interfaces, providing a unique opportunity for large-scale standardization and streamlining of processes.</p><p>Further efforts are needed to effectively and broadly establish AI assistance in routine laboratory use. To this end, a sustainable infrastructure, the non-profit association EMPAIA International, has been established to continue standardization and support broad implementation and advocacy for an AI-assisted digital pathology future.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"15 ","pages":"Article 100387"},"PeriodicalIF":0.0,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2153353924000269/pdfft?md5=93cff7c5dd94e55a015f5beb1d21f7eb&pid=1-s2.0-S2153353924000269-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141422757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel Slide-seq based image processing software to identify gene expression at the single cell level","authors":"Th.I. Götz , X. Cong , S. Rauber , M. Angeli , E.W. Lang , A. Ramming , C. Schmidkonz","doi":"10.1016/j.jpi.2024.100384","DOIUrl":"https://doi.org/10.1016/j.jpi.2024.100384","url":null,"abstract":"<div><p>Analysis of gene expression at the single-cell level could help predict the effectiveness of therapies in the field of chronic inflammatory diseases such as arthritis. Here, we demonstrate an adapted approach for processing images from the Slide-seq method. Using a puck, which consists of about 50,000 DNA barcode beads, the RNA sequence of a cell is read. The pucks are repeatedly brought into contact with liquids and then recorded with a conventional epifluorescence microscope. The image analysis consists of stitching the partial images of a sequence recording, registering images across different sequences, and finally reading out the bases. The new method enables the use of an inexpensive epifluorescence microscope instead of a confocal microscope.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"15 ","pages":"Article 100384"},"PeriodicalIF":0.0,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2153353924000233/pdfft?md5=4839c565f8920eeea61c5ef01d5bb248&pid=1-s2.0-S2153353924000233-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141444222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
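Registering partial images across sequencing rounds, as in the record above, is typically done by estimating the translation between overlapping frames. A phase-correlation sketch with NumPy FFTs follows (integer shifts only; the software's actual registration method is not specified in the abstract):

```python
import numpy as np

def register_translation(ref, moving):
    """Estimate the integer (dy, dx) shift mapping `ref` onto `moving`
    via phase correlation of their 2D Fourier transforms."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moving)
    cross /= np.abs(cross) + 1e-12            # keep only the phase difference
    corr = np.fft.ifft2(cross).real           # delta peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    return (dy - h if dy > h // 2 else dy,    # wrap into signed range
            dx - w if dx > w // 2 else dx)
```

Normalizing away the magnitude spectrum makes the correlation peak sharp and robust to the illumination differences expected between fluorescence rounds.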
{"title":"Eye tracking in digital pathology: A comprehensive literature review","authors":"Alana Lopes , Aaron D. Ward , Matthew Cecchini","doi":"10.1016/j.jpi.2024.100383","DOIUrl":"10.1016/j.jpi.2024.100383","url":null,"abstract":"<div><p>Eye tracking has been used for decades in an attempt to understand the cognitive processes of individuals. From memory access to problem-solving to decision-making, such insight has the potential to improve workflows and the education of students to become experts in relevant fields. Until recently, the traditional use of microscopes in pathology made eye tracking exceptionally difficult. However, the digital revolution of pathology from conventional microscopes to digital whole slide images allows new research to be conducted and information to be learned with regard to pathologists' visual search patterns and learning experiences. This promises to make pathology education more efficient and engaging, ultimately creating stronger and more proficient generations of pathologists to come. The goal of this review on eye tracking in pathology is to characterize and compare the visual search patterns of pathologists. The PubMed and Web of Science databases were searched using ‘pathology’ AND ‘eye tracking’ synonyms. A total of 22 relevant full-text articles published up to and including 2023 were identified and included in this review. Thematic analysis was conducted to organize each study into one or more of the 10 themes identified to characterize the visual search patterns of pathologists: (1) effect of experience, (2) fixations, (3) zooming, (4) panning, (5) saccades, (6) pupil diameter, (7) interpretation time, (8) strategies, (9) machine learning, and (10) education. Expert pathologists were found to have higher diagnostic accuracy, fewer fixations, and shorter interpretation times than pathologists with less experience. 
Further, the literature on eye tracking in pathology indicates that there are several visual strategies for diagnostic interpretation of digital pathology images, but no evidence of a superior strategy exists. The educational implications of eye tracking in pathology have also been explored, but the effect of teaching novices to search like an expert remains unclear. In this article, the main challenges and prospects of eye tracking in pathology are briefly discussed, along with their implications for the field.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"15 ","pages":"Article 100383"},"PeriodicalIF":0.0,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2153353924000221/pdfft?md5=01458aa9d7539a3f8a155a98d18ad8ba&pid=1-s2.0-S2153353924000221-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141024415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}