Authors: A. Vakil, E. Blasch, Robert Ewing, Jia Li
DOI: 10.1109/ISCMI56532.2022.10068480
Venue: 2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)
Published: 2022-11-26
MVI-DCGAN Insights into Heterogenous EO and Passive RF Fusion
As technology trends toward automation, deep neural network (DNN) based methods become increasingly desirable from a technological, economic, and societal standpoint. However, owing to the way these black-box technologies operate, it can be difficult to troubleshoot potential errors, especially when dealing with data that the human mind cannot intuitively understand. For this reason, explainable artificial intelligence (XAI) is integral to obtaining interpretability and understanding of these systems' techniques. The paper explores some of the known uses of XAI in Generative Adversarial Networks (GANs), i.e., in processing electro-optical (EO) and passive radio-frequency (passive RF, or P-RF) data to achieve heterogeneous sensor fusion. GANs are capable of generating realistic images, music, text, and other forms of data, and the use of deep convolutional generative adversarial networks (DCGANs) to process such information provides "richer" corrective feedback from which the model can train. Using the DCGAN approach, one can provide visualizations from different types of neural networks and use them as a training source for the multiple visualizations input (MVI) DCGAN. The MVI-DCGAN uses these visualizations to track the vehicle target and to differentiate between other overlay visualization data and the generated overlay input visualizations. The paper demonstrates multiple sources of visualization input from different neural networks to train the MVI-DCGAN more robustly and to direct the discriminator toward the P-RF aspects of the visualizations.
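The core MVI idea in the abstract, feeding visualizations from several networks into one DCGAN discriminator, can be sketched as stacking per-network visualization maps into a multi-channel input and scoring it. This is only an illustrative toy, not the authors' architecture: the map sizes, the single random convolution filter, and the score function are all assumptions made for the sketch.

```python
import numpy as np

def stack_visualizations(vis_maps):
    """Stack per-network visualization maps (each H x W) into one
    multi-channel input, as in the MVI concept.
    Assumption: all maps share the same spatial size."""
    return np.stack(vis_maps, axis=0)  # shape (C, H, W)

def toy_discriminator_score(x, rng=None):
    """Hypothetical stand-in for the MVI-DCGAN discriminator: one
    random 3x3 filter per channel, valid convolution, global mean,
    then a sigmoid to get a probability that the input is "real".
    Plain loops are used for clarity, not speed."""
    rng = np.random.default_rng(0) if rng is None else rng
    c, h, w = x.shape
    k = rng.standard_normal((c, 3, 3)) * 0.1
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(x[:, i:i + 3, j:j + 3] * k)
    logit = out.mean()
    return 1.0 / (1.0 + np.exp(-logit))

# Example: one EO-derived map and one passive-RF-derived map
# (random stand-ins for real network visualizations).
eo_vis = np.random.default_rng(1).random((32, 32))
prf_vis = np.random.default_rng(2).random((32, 32))
x = stack_visualizations([eo_vis, prf_vis])
score = toy_discriminator_score(x)
```

In a real MVI-DCGAN the discriminator would be a trained convolutional network and the channels would come from actual EO and P-RF visualizations, but the data flow, several heterogeneous visualization sources merged into a single discriminator input, is the same.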