Li Liu, Yuanhua Wang, Dongdong Wu, Yongping Zhai, L. Tan, Jingjing Xiao
International Symposium on Image Computing and Digital Medicine, 13 October 2018. DOI: 10.1145/3285996.3286013
Multitask Learning for Pathomorphology Recognition of Squamous Intraepithelial Lesion in Thinprep Cytologic Test
This paper presents a multitask learning network for pathomorphology recognition of squamous intraepithelial lesions in the ThinPrep Cytologic Test. Detecting pathological cells is challenging because cell appearance varies widely and the morphological changes in pathological cells are subtle. In addition, the high resolution of scanned cell images places further demands on detection efficiency. We therefore propose a multi-task learning network that aims to balance performance and computational efficiency. First, we transfer knowledge from a pre-trained VGG16 network to extract low-level features, which alleviates the problems caused by limited training data. Then, potential regions of interest are generated by our proposed task-oriented anchor network. Finally, a fully convolutional network is applied to accurately estimate the positions of the cells and classify their corresponding labels. To demonstrate the effectiveness of the proposed method, we constructed a dataset that was cross-verified by two pathologists. In our tests, we compare our method to state-of-the-art detection algorithms, i.e. YOLO [1] and Faster R-CNN [2], both re-trained on our dataset. The results show that our method achieves the best detection accuracy with high computational efficiency, taking only half the time of Faster R-CNN.
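The abstract's middle stage, generating candidate regions of interest from an anchor network, follows the standard region-proposal pattern: tile anchor boxes of several scales and aspect ratios over a feature-map grid. The sketch below shows only that generic anchor-grid step; the paper's actual "task-oriented" design (e.g. scales tuned to cell sizes) is not specified in the abstract, so the scales, ratios, and stride here are illustrative assumptions.

```python
def generate_anchors(feat_h, feat_w, stride, scales, ratios):
    """Return (x1, y1, x2, y2) anchor boxes in input-image coordinates.

    feat_h, feat_w: feature-map height and width in cells.
    stride: how many input pixels one feature-map cell covers.
    scales: box side lengths (pixels) at aspect ratio 1.0.
    ratios: width/height aspect ratios; area is preserved per scale.
    """
    anchors = []
    for i in range(feat_h):
        for j in range(feat_w):
            # Center of this feature-map cell, mapped back to the image.
            cx = (j + 0.5) * stride
            cy = (i + 0.5) * stride
            for s in scales:
                for r in ratios:
                    # Box of area s*s with width/height ratio r.
                    w = s * r ** 0.5
                    h = s / r ** 0.5
                    anchors.append((cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2))
    return anchors

# Example: a 4x4 feature map with stride 16, two scales, three ratios
# yields 4 * 4 * 2 * 3 = 96 candidate boxes for the downstream classifier.
boxes = generate_anchors(4, 4, 16, scales=[32, 64], ratios=[0.5, 1.0, 2.0])
print(len(boxes))  # 96
```

A second-stage network (the fully convolutional head in the paper's pipeline) would then score each candidate box and regress its exact position.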