Enhanced EfficientNet Network for Classifying Laparoscopy Videos using Transfer Learning Technique

Divya Acharya, Guda Ramachandra Kaladhara Sarma, Kameshwar Raovenkatajammalamadaka
{"title":"基于迁移学习技术的腹腔镜视频分类增强型高效网络","authors":"Divya Acharya, Guda Ramachandra Kaladhara Sarma, Kameshwar Raovenkatajammalamadaka","doi":"10.1109/IJCNN55064.2022.9891989","DOIUrl":null,"url":null,"abstract":"Recent days have seen a lot of interest in surgical data science (SDS) methods and imaging technologies. As a result of these developments, surgeons may execute less invasive procedures. Using pathology and no pathology situations to classify laparoscopic video pictures of surgical activities, in this research work authors conducted their investigation using a transfer learning technique named enhanced ENet (eENet) network based on enhanced EfficientNet network. Two base versions of the EfficientNet model named ENetB0 and ENetB7 along with the two proposed versions of the EfficientNet network as enhanced EfficientNetB0 (eENetB0) and enhanced EfficientnetB7 (eENetB7) are implemented in the proposed framework using publicly available GLENDA [1] dataset. The proposed eENetB0 and eENetB7 models have classified the features extracted using the transfer learning technique into binary classification. For 70–30 and 10-fold Cross-Validation (10-fold CV), the data splitting eENetB0 model has achieved maximum classification accuracy as 88.43% and 97.59%, and the eENetB7 model has achieved 97.72% and 98.78% accuracy. We also compared the performance of our proposed enhanced version of EfficientNet (eENetB0 and eENetB7) with the base version of the models (ENetB0 and ENetB7) it shows that among these four models eENetB7 performed well. For GUI-based visualization purposes, we also created a platform named IAS.ai that detects the surgical video clips having blood and dry scenarios and uses explainable AI for unboxing the deep learning model's performance. IAS.ai is a real-time application of our approach. For further validation, we compared our framework's performance with other leading approaches cited in the literature [2]–[4]. We can see how well the proposed eENet model does compare to existing models, as well as the current best practices.","PeriodicalId":106974,"journal":{"name":"2022 International Joint Conference on Neural Networks (IJCNN)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Enhanced EfficientNet Network for Classifying Laparoscopy Videos using Transfer Learning Technique\",\"authors\":\"Divya Acharya, Guda Ramachandra Kaladhara Sarma, Kameshwar Raovenkatajammalamadaka\",\"doi\":\"10.1109/IJCNN55064.2022.9891989\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent days have seen a lot of interest in surgical data science (SDS) methods and imaging technologies. As a result of these developments, surgeons may execute less invasive procedures. Using pathology and no pathology situations to classify laparoscopic video pictures of surgical activities, in this research work authors conducted their investigation using a transfer learning technique named enhanced ENet (eENet) network based on enhanced EfficientNet network. Two base versions of the EfficientNet model named ENetB0 and ENetB7 along with the two proposed versions of the EfficientNet network as enhanced EfficientNetB0 (eENetB0) and enhanced EfficientnetB7 (eENetB7) are implemented in the proposed framework using publicly available GLENDA [1] dataset. 
The proposed eENetB0 and eENetB7 models have classified the features extracted using the transfer learning technique into binary classification. For 70–30 and 10-fold Cross-Validation (10-fold CV), the data splitting eENetB0 model has achieved maximum classification accuracy as 88.43% and 97.59%, and the eENetB7 model has achieved 97.72% and 98.78% accuracy. We also compared the performance of our proposed enhanced version of EfficientNet (eENetB0 and eENetB7) with the base version of the models (ENetB0 and ENetB7) it shows that among these four models eENetB7 performed well. For GUI-based visualization purposes, we also created a platform named IAS.ai that detects the surgical video clips having blood and dry scenarios and uses explainable AI for unboxing the deep learning model's performance. IAS.ai is a real-time application of our approach. For further validation, we compared our framework's performance with other leading approaches cited in the literature [2]–[4]. We can see how well the proposed eENet model does compare to existing models, as well as the current best practices.\",\"PeriodicalId\":106974,\"journal\":{\"name\":\"2022 International Joint Conference on Neural Networks (IJCNN)\",\"volume\":\"57 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 International Joint Conference on Neural Networks (IJCNN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN55064.2022.9891989\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN55064.2022.9891989","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Surgical data science (SDS) methods and imaging technologies have attracted considerable interest in recent years, and these developments allow surgeons to perform less invasive procedures. In this work, laparoscopic video frames of surgical activity are classified into pathology and no-pathology cases using a transfer learning technique built on an enhanced EfficientNet network, named enhanced ENet (eENet). Two base versions of the EfficientNet model, ENetB0 and ENetB7, along with the two proposed enhanced versions, enhanced EfficientNetB0 (eENetB0) and enhanced EfficientNetB7 (eENetB7), are implemented in the proposed framework on the publicly available GLENDA [1] dataset. The proposed eENetB0 and eENetB7 models perform binary classification on the features extracted with the transfer learning technique. Under a 70–30 split and 10-fold cross-validation (10-fold CV), the eENetB0 model achieved maximum classification accuracies of 88.43% and 97.59%, and the eENetB7 model achieved 97.72% and 98.78%, respectively. Comparing the proposed enhanced versions of EfficientNet (eENetB0 and eENetB7) with the base versions (ENetB0 and ENetB7) shows that, among these four models, eENetB7 performed best. For GUI-based visualization, a platform named IAS.ai was also created; it detects surgical video clips containing blood and dry scenarios and uses explainable AI to unbox the deep learning model's performance. IAS.ai is a real-time application of the proposed approach. For further validation, the framework's performance was compared with other leading approaches cited in the literature [2]–[4], showing how the proposed eENet model fares against existing models and current best practices.
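
The abstract does not include implementation details, but the core mechanism it describes, reusing an ImageNet-pretrained EfficientNet backbone through transfer learning with a small binary head for pathology vs. no-pathology frames, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the function name build_eenet_classifier, the 224x224 input size, the dropout rate, and the compile settings are all hypothetical, and the sketch uses the standard Keras EfficientNetB0 rather than the paper's enhanced (eENet) variant.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_eenet_classifier(input_shape=(224, 224, 3)):
    # ImageNet-pretrained EfficientNetB0 used as a frozen feature extractor
    # (transfer learning); only the new classification head is trained.
    backbone = tf.keras.applications.EfficientNetB0(
        include_top=False, weights="imagenet", input_shape=input_shape)
    backbone.trainable = False

    inputs = layers.Input(shape=input_shape)
    x = backbone(inputs, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)                           # illustrative regularization
    outputs = layers.Dense(1, activation="sigmoid")(x)   # pathology vs. no pathology
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_eenet_classifier()
model.summary()

Swapping EfficientNetB0 for EfficientNetB7 (with a correspondingly larger input size) would mirror the eENetB7 configuration at the backbone level.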
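The reported accuracies are obtained under two evaluation protocols, a 70–30 hold-out split and 10-fold cross-validation. A minimal sketch of both protocols is given below, assuming frame-level feature vectors X and binary labels y; the placeholder arrays, the 1280-dimensional feature size, and the scikit-learn utilities and seeds are illustrative assumptions, not the paper's setup.

import numpy as np
from sklearn.model_selection import train_test_split, StratifiedKFold

# Placeholder data standing in for pooled backbone features and frame labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1280))   # assumed feature dimension
y = rng.integers(0, 2, size=200)   # 1 = pathology, 0 = no pathology

# Protocol 1: 70-30 hold-out split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)

# Protocol 2: 10-fold cross-validation.
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
    # A classifier would be fit on X[train_idx], y[train_idx] and
    # scored on X[test_idx], y[test_idx] here; fold accuracies are averaged.
    pass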