Chest X-Ray Image Annotation based on Spatial Relationship Feature Extraction

JCR: Q2 (Computer Science)
Mohd Nizam Saad, Mohamad Farhan Mohamad Mohsin, Hamzaini Abdul Hamid, Zurina Muda
DOI: 10.33166/aetic.2023.05.007
Journal: Annals of Emerging Technologies in Computing
Published: 2023-10-05 (Journal Article)
Citations: 0

Abstract

Digital imaging has become an essential element of every medical institution. Medical images such as chest X-rays (CXR) must therefore undergo improved feature extraction and annotation before they are stored in image databases, so that they can be retrieved effectively. To date, many methods have been introduced that annotate medical images using spatial relationships extracted from them. However, annotation performance across these methods is inconsistent and has not translated into promising image retrieval. Each method still struggles with at least two major problems. First, the resulting annotation model is weak because the method ignores object shape and relies on a gross estimate of it. Second, the model only works for simple object placements. As a result, it is difficult to use the extracted spatial relationship features to annotate images accurately. Hence, this study proposes a new model that annotates nodule location within the lung zones of CXR images using extracted spatial relationship features, with the aim of improving image retrieval. To achieve this, a six-phase methodology for CXR image annotation using the extracted spatial relationship features is introduced. The methodology covers the full annotation cycle, from image pre-processing through to determination of the spatial relationship features for the lung zones in the CXR. Applying it also produced a new semi-automatic annotation system, named CHEXRIARS, which serves as a tool for annotating the extracted spatial relationship features in CXR images. CHEXRIARS was evaluated with a retrieval test using two common measures, precision and recall (PNR). Three other annotation methods (object slope, object projection, and comparison of region boundaries) were included in the same test. Overall, the interpolated PNR curve for CHEXRIARS has the best shape, as it comes closest to the value of 1 on both the X-axis and the Y-axis. Its area under the curve is likewise the highest of the four methods, at 0.856. The retrieval test results indicate that the proposed annotation model performs strongly and improves image retrieval.
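The evaluation described above, an interpolated precision-recall (PNR) curve and its area under the curve, can be sketched in a few lines. The ranking and relevance labels below are illustrative made-up data, not the paper's results, and the 11-point interpolation used here is one common convention for drawing such curves.

```python
# Sketch of an interpolated precision-recall curve and its AUC.
# Example data only; not taken from the CHEXRIARS experiments.

def precision_recall_points(relevant, ranking):
    """Precision and recall after each rank of a retrieved list."""
    points, hits = [], 0
    for i, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
        points.append((hits / i, hits / len(relevant)))  # (precision, recall)
    return points

def interpolated_curve(points, levels=11):
    """11-point interpolation: at each recall level r, take the
    maximum precision achieved at any recall >= r."""
    curve = []
    for k in range(levels):
        r = k / (levels - 1)
        p = max((prec for prec, rec in points if rec >= r), default=0.0)
        curve.append((r, p))
    return curve

def area_under_curve(curve):
    """Trapezoidal area under the (recall, precision) curve."""
    return sum((r1 - r0) * (p0 + p1) / 2.0
               for (r0, p0), (r1, p1) in zip(curve, curve[1:]))

# Hypothetical query: documents 1..10 retrieved, {1, 2, 5, 7} relevant.
relevant = {1, 2, 5, 7}
ranking = [1, 3, 2, 4, 5, 6, 7, 8, 9, 10]
curve = interpolated_curve(precision_recall_points(relevant, ranking))
print(f"AUC = {area_under_curve(curve):.3f}")
```

A curve hugging the top-right corner (precision and recall both near 1) yields an AUC close to 1, which is the sense in which the abstract calls the best-shaped curve "the closest curve approaching the value of 1 on the X-axis and Y-axis".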
Source journal: Annals of Emerging Technologies in Computing (Computer Science, all)
CiteScore: 3.50
Self-citation rate: 0.00%
Articles per year: 26