{"title":"Detecting Subject-Weapon Visual Relationships","authors":"Thomas Truong, S. Yanushkevich","doi":"10.1109/SSCI47803.2020.9308574","DOIUrl":null,"url":null,"abstract":"Computer vision-based weapon detection method- ologies and applications in safety and security have fallen behind when compared to state-of-the-art computer vision applications problems in other areas. In this paper we propose a novel visual relationship detection model trained on the Open Images V6 dataset to detect the visual relationships of “holds” and “wears” between people and objects. We also introduce an application of the proposed model to detect if weapons are being held. Weapons are an unseen object class to the network. The best proposed model achieves an accuracy of 90.01% ±2.05 % on the test set of the Open Images V6 dataset for classifying the “holds” and “wears” visual relationships.","PeriodicalId":413489,"journal":{"name":"2020 IEEE Symposium Series on Computational Intelligence (SSCI)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Symposium Series on Computational Intelligence (SSCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSCI47803.2020.9308574","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3
Abstract
Computer vision-based weapon detection methodologies and applications in safety and security have fallen behind state-of-the-art computer vision applications in other areas. In this paper, we propose a novel visual relationship detection model trained on the Open Images V6 dataset to detect the visual relationships “holds” and “wears” between people and objects. We also introduce an application of the proposed model to detect whether weapons are being held; weapons are an object class unseen by the network. The best proposed model achieves an accuracy of 90.01% ± 2.05% on the test set of the Open Images V6 dataset for classifying the “holds” and “wears” visual relationships.
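The abstract does not describe the model architecture, so the following is only a minimal sketch of the general idea: a pairwise classifier that takes pre-extracted features and bounding boxes for a detected person (subject) and object and predicts “holds” or “wears”. Everything here (PyTorch, the class name RelationshipClassifier, feature dimensions, layer sizes) is an illustrative assumption, not the authors' implementation.

```python
# Illustrative sketch only: a generic subject-object relationship classifier.
# Assumes appearance features and normalized boxes are already extracted by an
# upstream object detector; dimensions and layers are arbitrary choices.
import torch
import torch.nn as nn

RELATIONSHIPS = ["holds", "wears"]  # the two relationships classified in the paper


class RelationshipClassifier(nn.Module):
    """Predicts the relationship between a (person, object) detection pair."""

    def __init__(self, feat_dim: int = 256, hidden: int = 128):
        super().__init__()
        # Subject features + object features + two boxes of 4 coordinates each.
        in_dim = 2 * feat_dim + 8
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, len(RELATIONSHIPS)),
        )

    def forward(self, subj_feat, obj_feat, subj_box, obj_box):
        # Concatenate appearance and spatial cues for the pair, then classify.
        x = torch.cat([subj_feat, obj_feat, subj_box, obj_box], dim=-1)
        return self.net(x)  # logits over ["holds", "wears"]


if __name__ == "__main__":
    model = RelationshipClassifier()
    # Dummy batch of 4 person-object pairs with random features and boxes.
    subj_feat = torch.randn(4, 256)
    obj_feat = torch.randn(4, 256)
    subj_box = torch.rand(4, 4)
    obj_box = torch.rand(4, 4)
    logits = model(subj_feat, obj_feat, subj_box, obj_box)
    print([RELATIONSHIPS[i] for i in logits.argmax(dim=-1)])
```

Because the relationship labels are attached to the person-object pair rather than to the object category, a classifier of this kind can, in principle, flag a “holds” relationship even when the held object (e.g., a weapon) belongs to a class never seen during training, which is the application the paper targets.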