Enabling Security Checking of Automotive ECUs with Formal CSP Models
J. Heneghan, S. Shaikh, J. Bryans, Madeline Cheah, P. Wooderson
2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), doi:10.1109/DSN-W.2019.00025
Abstract: This paper presents an approach, based on the process algebra CSP, that aims to support systematic security testing of ECU components. An example use case concerning over-the-air software updates demonstrates the potential of our approach. Initial results confirm that application code implemented in a typical automotive development environment can be translated into a machine-readable format for the FDR refinement checker, which can then formally verify security functions and identify existing security flaws. Although this is still early-stage work, the potential contribution towards automatically model-checking ECU components and, by composing several CSP models, larger systems is encouraging.

Autonomous Maneuver Coordination Via Vehicular Communication
Wenbo Xu, Alexander Willecke, M. Wegner, L. Wolf, R. Kapitza
2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), doi:10.1109/DSN-W.2019.00022
Abstract: Autonomous vehicles can make local decisions based on the perception of their environment. However, individual decisions do not always yield the most efficient solution from a global point of view, because coordination is lacking. To improve traffic efficiency, multiple autonomous vehicles can be orchestrated as a spontaneous group. This is achieved by a recently proposed Maneuver Coordination service that utilizes vehicular communication. Vehicles equipped with the Maneuver Coordination service share their planned driving trajectories with nearby vehicles, detect potential conflicts and negotiate a new plan of trajectories. If the negotiation succeeds, the vehicles arrive at a cooperative joint plan that is more efficient than their individual plans. In this work, we extend the Maneuver Coordination service by designing a coordination protocol for the negotiation. Our protocol ensures agreement and prevents conflicting maneuver changes. The negotiation procedure also takes potential network failures into account and keeps the impact of such failures relatively low. We tested our approach in a V2X simulation framework. The evaluation results for a lane-join scenario show that the coordinated plan obtained after negotiation accumulates 50% less time loss than the default right-of-way rules.

System Misuse Detection Via Informed Behavior Clustering and Modeling
Linara Adilova, Livin Natious, Siming Chen, Olivier Thonnard, Michael Kamp
2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), doi:10.1109/DSN-W.2019.00011
Abstract: One of the main tasks of cybersecurity is recognizing malicious interactions with an arbitrary system. Logging information about each interaction can currently be collected in almost unrestricted amounts, but identifying attacks still demands a great deal of time and effort from security experts. We propose an approach for identifying fraudulent activity by modeling normal behavior in interactions with a system via machine learning methods, in particular LSTM neural networks. To enrich the modeling with system-specific knowledge, we propose an interactive visual interface that allows security experts to identify semantically meaningful clusters of interactions. These clusters incorporate domain knowledge and lead to more precise behavior modeling via informed machine learning. We evaluate the proposed approach on a dataset containing logs of interactions with the administrative interface of a login and security server. Our empirical results indicate that the informed modeling captures normal behavior, which can then be used to detect abnormal behavior.

{"title":"[Copyright notice]","authors":"","doi":"10.1109/dsn-w.2019.00003","DOIUrl":"https://doi.org/10.1109/dsn-w.2019.00003","url":null,"abstract":"","PeriodicalId":285649,"journal":{"name":"2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130584449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predictive Runtime Simulation for Building Trust in Cooperative Autonomous Systems
Emilia Cioroaica, D. Schneider, Hanna AlZughbi, Jan Reich, R. Adler, T. Braun
2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), doi:10.1109/DSN-W.2019.00024
Abstract: Future autonomous systems will also be cooperative systems. They will interact with each other, with traffic infrastructure, with cloud services and with other systems. In such an open ecosystem, trust is of fundamental importance, because cooperation between systems is key to many innovative applications and services. Without an adequate notion of trust, as well as means to maintain and use it, the full potential of autonomous systems cannot be unlocked. In this paper, we discuss what constitutes trust in autonomous cooperative systems and sketch a corresponding multifaceted notion of trust. We then discuss a predictive runtime simulation approach as a building block for trust and elaborate on means to secure this approach.

Adversarial Video Captioning
Suman Kalyan Adari, Washington Garcia, Kevin R. B. Butler
2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), doi:10.1109/DSN-W.2019.00012
Abstract: In recent years, developments in the field of computer vision have allowed deep learning-based techniques to surpass human-level performance. However, these advances have also culminated in the advent of adversarial machine learning techniques capable of launching targeted image-captioning attacks that easily fool deep learning models. Although attacks in the image domain are well studied, little work has been done in the video domain. In this paper, we show that it is possible to extend prior attacks in the image domain to the video captioning task without heavily affecting the video's playback quality. We demonstrate our attack against a state-of-the-art video captioning model by extending a prior image captioning attack known as Show and Fool. To the best of our knowledge, this is the first successful method for targeted attacks against a video captioning model; it injects 'subliminal' perturbations into the video stream and forces the model to output a chosen caption with up to 0.981 cosine similarity, achieving near-perfect similarity to the chosen target captions.

{"title":"Welcome from the SSIV 2019 Organizers","authors":"","doi":"10.1109/dsn-w.2019.00008","DOIUrl":"https://doi.org/10.1109/dsn-w.2019.00008","url":null,"abstract":"","PeriodicalId":285649,"journal":{"name":"2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126354309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mixed Strategy Game Model Against Data Poisoning Attacks","authors":"Yi-Tsen Ou, Reza Samavi","doi":"10.1109/DSN-W.2019.00015","DOIUrl":"https://doi.org/10.1109/DSN-W.2019.00015","url":null,"abstract":"In this paper we use game theory to model poisoning attack scenarios. We prove the non-existence of pure strategy Nash Equilibrium in the attacker and defender game. We then propose a mixed extension of our game model and an algorithm to approximate the Nash Equilibrium strategy for the defender. We then demonstrate the effectiveness of the mixed defence strategy generated by the algorithm, in an experiment.","PeriodicalId":285649,"journal":{"name":"2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127584444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"N-Version Machine Learning Models for Safety Critical Systems","authors":"F. Machida","doi":"10.1109/DSN-W.2019.00017","DOIUrl":"https://doi.org/10.1109/DSN-W.2019.00017","url":null,"abstract":"Quality control of machine learning systems is a fundamental challenge in industries to provide intelligent services or products using machine learning. While recent advances in machine learning algorithms substantially improve the performance of intelligent tasks such as object recognition, their outputs are essentially stochastic and very sensitive to input data. Such an output uncertainty is a big obstacle to ensure the quality of safety critical applications like autonomous vehicle and hence architectural design to mitigate the impact of error output becomes a great importance. In this paper, we propose N-version machine learning architecture that aims to improve system reliability against probabilistic outputs of individual machine learning modules. The key idea of this architecture is exploiting two kinds of diversities; input diversity and model diversity. Our study first formally defines these diversity metrics and analytically shows the improved reliability by N-version machine learning architecture. Since we treat a machine learning module as a black-box, the proposed architecture and the reliability property are generally applicable to any machine learning algorithms and applications.","PeriodicalId":285649,"journal":{"name":"2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128810141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Intuition from Empirical Properties to Simplify Adversarial Training Defense","authors":"Guanxiong Liu, Issa M. Khalil, Abdallah Khreishah","doi":"10.1109/DSN-W.2019.00020","DOIUrl":"https://doi.org/10.1109/DSN-W.2019.00020","url":null,"abstract":"Due to the surprisingly good representation power of complex distributions, neural network (NN) classifiers are widely used in many tasks which include natural language processing, computer vision and cyber security. In recent works, people noticed the existence of adversarial examples. These adversarial examples break the NN classifiers' underlying assumption that the environment is attack free and can easily mislead fully trained NN classifier without noticeable changes. Among defensive methods, adversarial training is a popular choice. However, original adversarial training with single-step adversarial examples (Single-Adv) can not defend against iterative adversarial examples. Although adversarial training with iterative adversarial examples (Iter-Adv) can defend against iterative adversarial examples, it consumes too much computational power and hence is not scalable. In this paper, we analyze Iter-Adv techniques and identify two of their empirical properties. Based on these properties, we propose modifications which enhance Single-Adv to perform competitively as Iter-Adv. Through preliminary evaluation, we show that the proposed method enhances the test accuracy of state-of-the-art (SOTA) Single-Adv defensive method against iterative adversarial examples by up to 16.93% while reducing its training cost by 28.75%.","PeriodicalId":285649,"journal":{"name":"2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115083479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}