{"title":"通过融合 RGB 和深度图像,利用堆叠 CoAtNets 检测天体","authors":"Chinnala Balakrishna, Shepuri Srinivasulu","doi":"10.30574/ijsra.2024.12.2.1234","DOIUrl":null,"url":null,"abstract":"Space situational awareness (SSA) system requires detection of space objects that are varied in sizes, shapes, and types. The space images are difficult because of various factors such as illumination and noise and as a result make the recognition task complex. Image fusion is an important area in image processing for a variety of applications including RGB-D sensor fusion, remote sensing, medical diagnostics, and infrared and visible image fusion. In recent times, various image fusion algorithms have been developed and they showed a superior performance to explore more information that is not available in single images. In this paper I compared various methods of RGB and Depth image fusion for space object classification task. The experiments were carried out, and the performance was evaluated using fusion performance metrics. It was found that the guided filter context enhancement (GFCE) outperformed other image fusion methods in terms of average gradient, spatial frequency, and entropy. Additionally, due to its ability to balance between good performance and inference speed, GFCE was selected for RGB and Depth image fusion stage before feature extraction and classification stage. The outcome of fusion method is merged images that were used to train a deep assembly of CoAtNets to classify space objects into ten categories. The deep ensemble learning methods including bagging, boosting, and stacking were trained and evaluated for classification purposes. It was found that combination of fusion and stacking was able to improve classification accuracy.","PeriodicalId":14366,"journal":{"name":"International Journal of Science and Research Archive","volume":"5 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Astronomical bodies detection with stacking of CoAtNets by fusion of RGB and depth Images\",\"authors\":\"Chinnala Balakrishna, Shepuri Srinivasulu\",\"doi\":\"10.30574/ijsra.2024.12.2.1234\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Space situational awareness (SSA) system requires detection of space objects that are varied in sizes, shapes, and types. The space images are difficult because of various factors such as illumination and noise and as a result make the recognition task complex. Image fusion is an important area in image processing for a variety of applications including RGB-D sensor fusion, remote sensing, medical diagnostics, and infrared and visible image fusion. In recent times, various image fusion algorithms have been developed and they showed a superior performance to explore more information that is not available in single images. In this paper I compared various methods of RGB and Depth image fusion for space object classification task. The experiments were carried out, and the performance was evaluated using fusion performance metrics. It was found that the guided filter context enhancement (GFCE) outperformed other image fusion methods in terms of average gradient, spatial frequency, and entropy. Additionally, due to its ability to balance between good performance and inference speed, GFCE was selected for RGB and Depth image fusion stage before feature extraction and classification stage. 
The outcome of fusion method is merged images that were used to train a deep assembly of CoAtNets to classify space objects into ten categories. The deep ensemble learning methods including bagging, boosting, and stacking were trained and evaluated for classification purposes. It was found that combination of fusion and stacking was able to improve classification accuracy.\",\"PeriodicalId\":14366,\"journal\":{\"name\":\"International Journal of Science and Research Archive\",\"volume\":\"5 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Science and Research Archive\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.30574/ijsra.2024.12.2.1234\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Science and Research Archive","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.30574/ijsra.2024.12.2.1234","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Astronomical bodies detection with stacking of CoAtNets by fusion of RGB and depth Images
A space situational awareness (SSA) system requires the detection of space objects that vary in size, shape, and type. Space images are difficult to work with because of factors such as illumination and noise, which make the recognition task complex. Image fusion is an important area of image processing with a variety of applications, including RGB-D sensor fusion, remote sensing, medical diagnostics, and infrared-visible image fusion. In recent years, many image fusion algorithms have been developed, and they have shown superior performance in extracting information that is not available in single images. In this paper, we compared several methods of RGB and depth image fusion for the space object classification task. Experiments were carried out, and performance was evaluated using fusion quality metrics. Guided filter context enhancement (GFCE) was found to outperform the other image fusion methods in terms of average gradient, spatial frequency, and entropy. Because it balances fusion quality against inference speed, GFCE was selected for the RGB and depth image fusion stage that precedes feature extraction and classification. The fused images were used to train a deep ensemble of CoAtNets that classifies space objects into ten categories. Deep ensemble learning methods, including bagging, boosting, and stacking, were trained and evaluated for classification. The combination of fusion and stacking improved classification accuracy.
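
The fusion methods above are compared with standard fusion quality metrics. As a rough illustration only, the sketch below computes the three metrics named in the abstract (average gradient, spatial frequency, and entropy) for a single-channel image with NumPy; the formulas follow the common definitions in the fusion literature, and the input array is placeholder data rather than an actual fused space image.

```python
import numpy as np

def average_gradient(img: np.ndarray) -> float:
    """Mean magnitude of local intensity gradients (higher suggests more detail)."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences, trimmed to a common shape
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def spatial_frequency(img: np.ndarray) -> float:
    """SF = sqrt(RF^2 + CF^2), combining row and column frequencies."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Placeholder 8-bit image standing in for a fused RGB-depth frame.
fused = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
print(average_gradient(fused), spatial_frequency(fused), entropy(fused))
```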
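For the classification stage, stacking combines the predictions of several trained base models through a meta-learner. The following is a minimal sketch of that idea only, assuming class-probability outputs from three CoAtNet base models are already available; the random arrays, labels, and the logistic-regression meta-learner are illustrative placeholders, not the paper's actual configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical class-probability outputs from three base models on a held-out split,
# each of shape (n_samples, n_classes) for the ten space-object categories.
n_val, n_classes = 500, 10
rng = np.random.default_rng(0)
base_probs = [rng.dirichlet(np.ones(n_classes), size=n_val) for _ in range(3)]
y_val = rng.integers(0, n_classes, size=n_val)   # placeholder validation labels

# Stacking: concatenate base-model probabilities into meta-features and fit a
# simple meta-learner that learns how to weight and combine the base predictions.
meta_X = np.hstack(base_probs)                   # shape (n_val, 3 * n_classes)
meta_learner = LogisticRegression(max_iter=1000)
meta_learner.fit(meta_X, y_val)

# At test time, the same base models score a new fused image and the
# meta-learner produces the final class prediction.
test_probs = np.hstack([rng.dirichlet(np.ones(n_classes), size=1) for _ in range(3)])
print(meta_learner.predict(test_probs))
```

The design choice stacking exploits is that a meta-learner can correct systematic disagreements between base models, which is one plausible reason the fusion-plus-stacking combination reported in the abstract improves accuracy over any single model.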