C-Algl Net: Pathological Images Generate Diagnostic Results
Zongkai Lian, Haiqiong Yang, Fan Wu, Mingxin Li, Shancheng Jiang
2020 IEEE 17th International Symposium on Biomedical Imaging Workshops (ISBI Workshops), April 2020. DOI: 10.1109/ISBIWorkshops50223.2020.9153419

Abstract: The lack of a clear correspondence between the features of lesion areas and their pathological characteristics, together with the scarcity of high-quality histopathological image sets, poses a great challenge to building interpretable computer-aided diagnostic systems. We therefore propose a new deep learning-based model, named C-ALGL (CNN-AttendLSTM-GenerateLSTM), which generates visualized image results together with diagnostic descriptions from input histopathological images in a single pass. We use an improved recurrent structure that incorporates an attention mechanism in the LSTM interlayer and alters the LSTM parameter delivery pathways: the attention mechanism produces the visualization results, and a fully connected layer attached at the end produces the diagnostic text. Extensive experiments on the PATHOLOGY-11 skin pathology image dataset show that the C-ALGL model outperforms the benchmark models on this task.
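For readers who want a concrete picture of the kind of pipeline the abstract describes, the sketch below shows a generic CNN + attend-LSTM + generate-LSTM captioning decoder with soft attention in PyTorch. It is only an illustration under assumed choices (ResNet-18 backbone, additive attention, layer sizes, two stacked LSTM cells, recent torchvision API); it is not the authors' C-ALGL implementation, whose exact attention placement and parameter delivery pathways are detailed in the paper.

# Minimal sketch of a CNN + Attend-LSTM + Generate-LSTM captioning pipeline,
# loosely following the structure described in the abstract. All layer sizes,
# module names, and the exact wiring are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class CNNEncoder(nn.Module):
    """Extracts a grid of spatial features from a histopathological image."""

    def __init__(self, feat_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=None)  # assumed backbone choice
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # keep the spatial map
        self.proj = nn.Conv2d(512, feat_dim, kernel_size=1)

    def forward(self, images):                           # (B, 3, H, W)
        fmap = self.proj(self.cnn(images))                # (B, D, h, w)
        B, D, h, w = fmap.shape
        return fmap.view(B, D, h * w).permute(0, 2, 1)    # (B, h*w, D) spatial regions


class SoftAttention(nn.Module):
    """Additive attention over spatial regions; the weights can be reshaped
    into a heatmap to visualize which lesion areas drive each generated word."""

    def __init__(self, feat_dim=512, hidden_dim=512, attn_dim=256):
        super().__init__()
        self.feat_fc = nn.Linear(feat_dim, attn_dim)
        self.hid_fc = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, feats, hidden):                     # feats: (B, R, D), hidden: (B, H)
        e = self.score(torch.tanh(self.feat_fc(feats) + self.hid_fc(hidden).unsqueeze(1)))
        alpha = F.softmax(e.squeeze(-1), dim=1)            # (B, R) attention weights
        context = (alpha.unsqueeze(-1) * feats).sum(dim=1)  # (B, D) attended context
        return context, alpha


class TwoLSTMDecoder(nn.Module):
    """An 'attend' LSTM drives the attention; a 'generate' LSTM plus a fully
    connected output layer emits the diagnostic text, one token per step."""

    def __init__(self, vocab_size, feat_dim=512, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attend_lstm = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        self.attention = SoftAttention(feat_dim, hidden_dim)
        self.generate_lstm = nn.LSTMCell(feat_dim + hidden_dim, hidden_dim)
        self.fc_out = nn.Linear(hidden_dim, vocab_size)
        self.hidden_dim = hidden_dim

    def forward(self, feats, captions):                    # captions: (B, T) token ids
        B, T = captions.shape
        h1 = c1 = h2 = c2 = feats.new_zeros(B, self.hidden_dim)
        mean_feat = feats.mean(dim=1)
        logits, alphas = [], []
        for t in range(T):
            x = torch.cat([self.embed(captions[:, t]), mean_feat], dim=1)
            h1, c1 = self.attend_lstm(x, (h1, c1))
            context, alpha = self.attention(feats, h1)
            h2, c2 = self.generate_lstm(torch.cat([context, h1], dim=1), (h2, c2))
            logits.append(self.fc_out(h2))                  # next-token scores
            alphas.append(alpha)                            # per-step visualization weights
        return torch.stack(logits, dim=1), torch.stack(alphas, dim=1)


if __name__ == "__main__":
    enc, dec = CNNEncoder(), TwoLSTMDecoder(vocab_size=1000)
    imgs = torch.randn(2, 3, 224, 224)
    caps = torch.randint(0, 1000, (2, 12))
    scores, attn_maps = dec(enc(imgs), caps)
    print(scores.shape, attn_maps.shape)                   # (2, 12, 1000), (2, 12, 49)

In such a design the per-step attention weights (attn_maps above) supply the interpretable visual output, while the fully connected layer on the second LSTM supplies the diagnostic description, which matches the two outputs the abstract attributes to C-ALGL.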