Tongue image segmentation and tongue color classification based on deep learning

JCR quartile: Q3 (Medicine)
LIU Wei, CHEN Jinming, LIU Bo, HU Wei, WU Xingjin, ZHOU Hui
DOI: 10.1016/j.dcmed.2022.10.002
Journal: Digital Chinese Medicine
Published: 2022-09-01
Cited by: 3

Abstract

Objective

To propose two novel deep-learning-based methods for computer-aided tongue diagnosis, covering tongue image segmentation and tongue color classification, and to improve their diagnostic accuracy.

Methods

LabelMe was used to label the tongue mask, and the Snake model was used to optimize the labeling results. A new dataset was constructed for tongue image segmentation. Tongue color was labeled to build a classification dataset for network training. In this research, the Inception + Atrous Spatial Pyramid Pooling (ASPP) + UNet (IAUNet) method was proposed for tongue image segmentation, based on the existing UNet, Inception, and atrous convolution. Moreover, the Tongue Color Classification Net (TCCNet) was constructed with reference to ResNet, Inception, and Triplet-Loss. Several important evaluation metrics were selected to evaluate and compare the novel and existing methods for tongue segmentation and tongue color classification. For tongue segmentation, IAUNet was compared with existing mainstream methods such as UNet and DeepLabV3+. For tongue color classification, TCCNet was compared with VGG16 and GoogLeNet.
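The abstract gives no implementation details, so the following is only a hedged sketch of the atrous (dilated) convolution that the ASPP module in IAUNet builds on: the kernel taps are spaced `dilation` samples apart, enlarging the receptive field without adding parameters. Function name and toy data are hypothetical, and the sketch is 1-D for clarity (the network uses 2-D convolutions).

```python
def dilated_conv1d(x, kernel, dilation=1):
    """1-D atrous (dilated) convolution with valid padding: the k taps
    are spaced `dilation` samples apart, so the effective receptive
    field grows with the dilation rate at no extra parameter cost."""
    k = len(kernel)
    span = (k - 1) * dilation + 1            # effective receptive field
    return [sum(kernel[j] * x[i + j * dilation] for j in range(k))
            for i in range(len(x) - span + 1)]

x = [float(v) for v in range(8)]             # toy signal [0.0, 1.0, ..., 7.0]
box = [1.0, 1.0, 1.0]
print(dilated_conv1d(x, box, dilation=1))    # 3 adjacent taps
print(dilated_conv1d(x, box, dilation=2))    # same 3 taps spanning 5 samples
```

ASPP runs several such convolutions in parallel at different dilation rates and concatenates the results, capturing context at multiple scales from one feature map.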

Results

IAUNet can accurately segment the tongue from original images. The results showed that the Mean Intersection over Union (MIoU) of IAUNet reached 96.30%, and its Mean Pixel Accuracy (MPA), mean Average Precision (mAP), F1-Score, G-Score, and Area Under the Curve (AUC) reached 97.86%, 99.18%, 96.71%, 96.82%, and 99.71%, respectively, suggesting that IAUNet produced better segmentation than other methods, with fewer parameters. Triplet-Loss was applied in the proposed TCCNet to separate different embedded colors. The experiment yielded ideal results, with the F1-Score and mAP of TCCNet reaching 88.86% and 93.49%, respectively.
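The abstract does not define MIoU and MPA; below is a minimal sketch using their standard class-averaged definitions (the paper's exact averaging may differ, and the function name and toy masks are hypothetical).

```python
def segmentation_metrics(pred, target, num_classes):
    """Mean IoU and mean pixel accuracy over classes from flat label lists.

    Per class c: IoU = TP / (TP + FP + FN), pixel accuracy = TP / (TP + FN).
    Classes absent from both prediction and target are skipped.
    """
    ious, accs = [], []
    for c in range(num_classes):
        tp = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        fp = sum(1 for p, t in zip(pred, target) if p == c and t != c)
        fn = sum(1 for p, t in zip(pred, target) if p != c and t == c)
        if tp + fp + fn == 0:
            continue                          # class absent everywhere
        ious.append(tp / (tp + fp + fn))
        accs.append(tp / (tp + fn) if tp + fn else 0.0)
    return sum(ious) / len(ious), sum(accs) / len(accs)

pred   = [0, 0, 1, 1, 1, 0]   # toy flat mask: 1 = tongue, 0 = background
target = [0, 0, 1, 1, 0, 0]
miou, mpa = segmentation_metrics(pred, target, num_classes=2)
```

On this toy pair, class 0 has IoU 3/4 and class 1 has IoU 2/3, so MIoU is their mean; tongue segmentation is effectively this binary case applied per pixel.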

Conclusion

IAUNet, based on deep learning, segments the tongue better than traditional methods. It not only produces ideal tongue segmentation but also outperforms the traditional networks PSPNet, SegNet, UNet, and DeepLabV3+. For tongue color classification, the proposed network, TCCNet, achieved better F1-Score and mAP values than other neural networks such as VGG16 and GoogLeNet.
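The abstract states that TCCNet applies Triplet-Loss to separate color embeddings but does not spell out the formulation. A hedged sketch of the standard triplet loss, L = max(d(a, p) − d(a, n) + margin, 0), which pulls same-color embeddings together and pushes different-color embeddings at least `margin` apart (all names and toy values are hypothetical):

```python
import math

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss on embedding vectors: zero once the
    positive is at least `margin` closer to the anchor (in Euclidean
    distance) than the negative is."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return max(dist(anchor, positive) - dist(anchor, negative) + margin, 0.0)

# Toy 2-D embeddings standing in for tongue-color classes
a, p, n = [0.0, 0.0], [0.1, 0.0], [1.0, 0.0]
loss = triplet_loss(a, p, n, margin=0.5)   # 0.1 - 1.0 + 0.5 < 0, so loss is 0.0
```

Here the positive is already well separated from the negative, so the loss is zero; training only receives gradient from triplets that still violate the margin.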

Source journal: Digital Chinese Medicine (Medicine: Complementary and Alternative Medicine)
CiteScore: 1.80
Self-citation rate: 0.00%
Articles per year: 126
Review time: 63 days