{"title":"基于表面触觉成像的合成数据增强可解释视觉转换器用于结直肠癌诊断","authors":"Siddhartha Kapuria , Naruhiko Ikoma , Sandeep Chinchali , Farshid Alambeigi","doi":"10.1016/j.engappai.2025.110633","DOIUrl":null,"url":null,"abstract":"<div><div>In this work, we present a synthetic data augmented explainable Vision Transformer (ViT) framework designed for the informed and intuitive early diagnosis of colorectal cancer (CRC) polyps. The framework uses textural images — generated by our recently developed vision-based tactile sensor (called HySenSe) and augmented by synthetically generated images from a diffusion model pipeline, to output class-based probabilities of potential CRC polyp types. Additionally, it provides local relevancy-based heatmaps to assist clinicians by highlighting key areas of interest in the tactile images representing CRC polyp textures. We benchmark each aspect of this framework through: (i) Inception Scores for the synthetic images generated by the diffusion pipeline, (ii) Performance evaluation and sensitivity analyses on the effects of synthetic data addition on model generalizability compared with other state-of-the-art architectures, (iii) Dimensionality reduction techniques to confirm the suitability of synthetically generated images, and (iv) Comparison of two independent approaches visualizing explainability.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"151 ","pages":"Article 110633"},"PeriodicalIF":8.0000,"publicationDate":"2025-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Synthetic data-augmented explainable Vision Transformer for colorectal cancer diagnosis via surface tactile imaging\",\"authors\":\"Siddhartha Kapuria , Naruhiko Ikoma , Sandeep Chinchali , Farshid Alambeigi\",\"doi\":\"10.1016/j.engappai.2025.110633\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In this work, we present a synthetic data augmented explainable Vision Transformer (ViT) framework designed for the informed and intuitive early diagnosis of colorectal cancer (CRC) polyps. The framework uses textural images — generated by our recently developed vision-based tactile sensor (called HySenSe) and augmented by synthetically generated images from a diffusion model pipeline, to output class-based probabilities of potential CRC polyp types. Additionally, it provides local relevancy-based heatmaps to assist clinicians by highlighting key areas of interest in the tactile images representing CRC polyp textures. 
We benchmark each aspect of this framework through: (i) Inception Scores for the synthetic images generated by the diffusion pipeline, (ii) Performance evaluation and sensitivity analyses on the effects of synthetic data addition on model generalizability compared with other state-of-the-art architectures, (iii) Dimensionality reduction techniques to confirm the suitability of synthetically generated images, and (iv) Comparison of two independent approaches visualizing explainability.</div></div>\",\"PeriodicalId\":50523,\"journal\":{\"name\":\"Engineering Applications of Artificial Intelligence\",\"volume\":\"151 \",\"pages\":\"Article 110633\"},\"PeriodicalIF\":8.0000,\"publicationDate\":\"2025-03-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Engineering Applications of Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0952197625006335\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197625006335","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Synthetic data-augmented explainable Vision Transformer for colorectal cancer diagnosis via surface tactile imaging
In this work, we present a synthetic data-augmented explainable Vision Transformer (ViT) framework designed for the informed and intuitive early diagnosis of colorectal cancer (CRC) polyps. The framework uses textural images generated by our recently developed vision-based tactile sensor (called HySenSe), augmented with synthetically generated images from a diffusion model pipeline, to output class-based probabilities of potential CRC polyp types. Additionally, it provides local relevancy-based heatmaps that assist clinicians by highlighting key areas of interest in the tactile images representing CRC polyp textures. We benchmark each aspect of this framework through: (i) Inception Scores for the synthetic images generated by the diffusion pipeline, (ii) performance evaluation and sensitivity analyses of the effect of synthetic data addition on model generalizability, compared against other state-of-the-art architectures, (iii) dimensionality reduction techniques to confirm the suitability of the synthetically generated images, and (iv) a comparison of two independent approaches to visualizing explainability.
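To make the first benchmark item concrete, the sketch below shows one common way to estimate an Inception Score for a batch of synthetic images using a pretrained Inception-v3 classifier. This is a minimal illustration under our own assumptions (a PyTorch/torchvision environment, the function name, preprocessing, and batching choices are ours), not the paper's actual implementation.

# Minimal Inception Score sketch (illustrative only; names and defaults are assumptions).
import numpy as np
import torch
import torch.nn.functional as F
from torchvision.models import inception_v3, Inception_V3_Weights

def inception_score(images, n_splits=10, batch_size=32, device="cpu"):
    """Estimate the Inception Score of `images`.

    `images`: float tensor of shape (N, 3, 299, 299), already preprocessed with
    the ImageNet normalization expected by the pretrained network.
    Returns (mean, std) of the score across `n_splits` splits.
    """
    weights = Inception_V3_Weights.IMAGENET1K_V1
    model = inception_v3(weights=weights).to(device).eval()

    # Class-conditional probabilities p(y|x) from the pretrained classifier.
    probs = []
    with torch.no_grad():
        for i in range(0, images.shape[0], batch_size):
            batch = images[i:i + batch_size].to(device)
            probs.append(F.softmax(model(batch), dim=1).cpu().numpy())
    probs = np.concatenate(probs, axis=0)

    # IS = exp( E_x[ KL( p(y|x) || p(y) ) ] ), averaged over independent splits.
    scores = []
    for part in np.array_split(probs, n_splits):
        p_y = part.mean(axis=0, keepdims=True)  # marginal class distribution p(y)
        kl = part * (np.log(part + 1e-12) - np.log(p_y + 1e-12))
        scores.append(np.exp(kl.sum(axis=1).mean()))
    return float(np.mean(scores)), float(np.std(scores))

A higher score indicates that the classifier finds the generated images both individually confident and collectively diverse, which is the role the abstract assigns to this metric when benchmarking the diffusion pipeline's outputs.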
About the journal:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.