Chest X-Ray Image Annotation based on Spatial Relationship Feature Extraction

Mohd Nizam Saad, Mohamad Farhan Mohamad Mohsin, Hamzaini Abdul Hamid, Zurina Muda

Annals of Emerging Technologies in Computing, 5 October 2023. DOI: 10.33166/aetic.2023.05.007
Digital imaging has become an essential element of every medical institution. Medical images such as chest X-rays (CXR) must therefore undergo feature extraction and annotation before they are stored in image databases, so that they can later be retrieved effectively. To date, many methods have been introduced that annotate medical images using extracted spatial relationships. However, annotation performance across these methods is inconsistent and has not translated into strong retrieval results. Each method still struggles with at least two major problems. First, the annotation model is weak because it does not consider the actual object shape, relying instead on a gross estimate of it. Second, the model works only for simple object placements. As a result, it is difficult to determine the extracted spatial relationship features accurately enough to annotate images. Hence, this study proposes a new model that annotates nodule location within the lung zones of CXR images using extracted spatial relationship features, with the aim of improving image retrieval. To achieve this, a six-phase methodology for CXR image annotation using the extracted spatial relationship features is introduced. The methodology covers the full image annotation cycle, from image pre-processing to determination of the spatial relationship features for the lung zones in the CXR. Applying the methodology also produced a new semi-automatic annotation system, CHEXRIARS, which serves as a tool for annotating the extracted spatial relationship features in CXR images. CHEXRIARS was evaluated with a retrieval test using two common measures, precision and recall (PNR).
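The abstract does not spell out how a nodule is mapped to a lung zone. As a purely hypothetical illustration (not the authors' CHEXRIARS implementation), one simple way to derive such a spatial relationship label is to place the nodule centroid into one of six zones (left/right by upper/middle/lower) relative to the lung-field bounding box:

```python
# Illustrative sketch only: assign a nodule centroid to one of six lung
# zones given the bounding box of the lung fields. The zone scheme and
# function are assumptions for illustration, not the paper's method.

def lung_zone(cx, cy, lung_box):
    """Return a zone label such as 'right upper' for a nodule centroid.

    lung_box is (x_min, y_min, x_max, y_max) of the combined lung fields,
    with y increasing downward (image coordinates). On a frontal CXR the
    patient's right lung appears on the left side of the image.
    """
    x_min, y_min, x_max, y_max = lung_box
    # Horizontal side: split at the midline of the lung bounding box.
    side = "right" if cx < (x_min + x_max) / 2 else "left"
    # Vertical zone: divide the lung height into three equal bands.
    third = (y_max - y_min) / 3
    if cy < y_min + third:
        band = "upper"
    elif cy < y_min + 2 * third:
        band = "middle"
    else:
        band = "lower"
    return f"{side} {band}"
```

A label like "right upper" produced this way could then serve as the spatial relationship annotation stored alongside the image for retrieval.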
Besides CHEXRIARS, three other annotation methods, namely object slope, object projection, and comparison of region boundaries, were included in the retrieval performance test. Overall, the interpolated PNR curve for CHEXRIARS has the best shape, as it is the curve that comes closest to the value of 1 on both the X-axis and the Y-axis. The area under the curve for CHEXRIARS, at 0.856, is also the highest among the four annotation methods. These results indicate that the proposed annotation model performs strongly and improves image retrieval.
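The evaluation above reports an interpolated precision-recall (PNR) curve and its area under the curve. A minimal sketch of that style of evaluation, using an invented ranked retrieval result (the data below is illustrative only, not the paper's):

```python
# Hedged sketch of an 11-point interpolated precision-recall curve and its
# trapezoidal area under the curve, as commonly used in retrieval tests.
# The ranked list and relevance judgments are invented for illustration.

def interpolated_pr(relevant, ranked, total_relevant, levels=11):
    """Interpolated precision at evenly spaced recall levels in [0, 1]."""
    precisions, recalls = [], []
    hits = 0
    for i, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / i)       # precision at this rank
            recalls.append(hits / total_relevant)
    curve = []
    for k in range(levels):
        r = k / (levels - 1)
        # Interpolated precision: the max precision at any recall >= r.
        p = max((p_ for p_, r_ in zip(precisions, recalls) if r_ >= r),
                default=0.0)
        curve.append(p)
    return curve

def auc(curve):
    """Trapezoidal area under the curve over recall in [0, 1]."""
    n = len(curve) - 1
    return sum((curve[i] + curve[i + 1]) / 2 for i in range(n)) / n

ranked = ["a", "x", "b", "y", "c"]   # retrieval order (invented)
relevant = {"a", "b", "c"}           # ground-truth relevant images
curve = interpolated_pr(relevant, ranked, len(relevant))
score = auc(curve)
```

A curve hugging precision 1 across all recall levels yields an AUC near 1, which is the sense in which the CHEXRIARS curve, with AUC 0.856, is described as having the best shape.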