{"title":"AI and Reliability Trends in Safety-Critical Autonomous Systems on Ground and Air","authors":"J. Athavale, Andrea Baldovin, Ralf Graefe, M. Paulitsch, Rafael Rosales","doi":"10.1109/DSN-W50199.2020.00024","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00024","url":null,"abstract":"Safety-critical autonomous systems are becoming more powerful and more integrated to enable higher-level functionality. Modern multi-core SOCs are often the computing backbone in such systems for which safety and associated certification tasks are one of the key challenges, which can become more costly and difficult to achieve. Hence, modeling and assessment of these systems can be a formidable task. In addition, Artificial Intelligence (AI) is already being deployed in safety critical autonomous systems and Machine Learning (ML) enables the achievement of tasks in a cost-effective way.Compliance to Soft Error Rate (SER) requirements is an important element to be successful in these markets. When considering SER performance for functional safety, we need to focus on accurately modeling vulnerability factors for transient analysis based on AI and Deep Learning workloads. We also need to consider the reliability implications due to long mission times leading to high utilization factors for autonomous transport. The reliability risks due to these new use cases also need to be comprehended for modeling and mitigation and would directly impact the safety analysis for these systems. 
Finally, the need for telemetry for reliability, including capabilities for anomaly detection and prognostics techniques to minimize field failures is of paramount importance.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"12 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116399149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reward Tuning for self-adaptive Policy in MDP based Distributed Decision-Making to ensure a Safe Mission Planning","authors":"M. Hamadouche, C. Dezan, K. Branco","doi":"10.1109/DSN-W50199.2020.00025","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00025","url":null,"abstract":"Markov Decision Process (MDP) becomes a standard model for sequential decision making under uncertainty. This planning gives the appropriate sequence of actions to perform the goal of the mission in an efficient way. Often a single agent makes decisions and performs a single action. However, in several fields such as robotics several actions can be executed simultaneously. Moreover, with the increase of the complexity of missions, the decomposition of an MDP into several sub-MDPs becomes necessary. The decomposition involves parallel decisions between different agents, but the execution of concurrent actions can lead to conflicts. In addition, problems due to the system and to sensor failures may appear during the mission; these can lead to negative consequences (e.g. crash of a UAV caused by the drop in battery charge). In this article, we present a new method to prevent behavior conflicts that can appear within distributed decision-making and to emphasize the action selection if needed to ensure the safety and the various requirements of the system. This method takes into consideration the different constraints due to antagonist actions and wile additionally considering some thresholds on transition functions to promote specific actions that guarantee the safety of the system. Then it automatically computes the rewards of the different MDPs related to the mission in order to establish a safe planning. We validate this method on a case study of UAV mission such as a tracking mission. 
From the list of the constraints identified for the mission, the rewards of the MDPs are recomputed in order to avoid all potential conflicts and violation of constraints related to the safety of the system, thereby ensuring a safe specification of the mission.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134417244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
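The reward-tuning idea in the abstract above can be illustrated with a minimal sketch. This is not the authors' method; the two-state battery MDP, its transition probabilities, and the penalty value are all illustrative assumptions. It shows how re-tuning one reward entry makes the safe action dominate in the hazardous state after value iteration.

```python
import numpy as np

# Toy MDP (illustrative, not from the paper).
# States: 0 = battery ok, 1 = battery low. Actions: 0 = track, 1 = return-home.
P = np.zeros((2, 2, 2))          # P[s, a, s'] transition probabilities
P[0, 0] = [0.9, 0.1]             # tracking slowly drains the battery
P[0, 1] = [1.0, 0.0]             # returning home keeps the battery ok
P[1, 0] = [0.0, 1.0]
P[1, 1] = [0.0, 1.0]

R = np.array([[1.0, 0.0],        # base rewards: tracking pays off
              [1.0, 0.0]])

# Reward tuning: penalise tracking in the low-battery state so the safety
# constraint ("never keep tracking when battery is low") shapes the policy.
R[1, 0] = -10.0

def value_iteration(P, R, gamma=0.9, eps=1e-8):
    """Standard value iteration; returns optimal values and greedy policy."""
    V = np.zeros(P.shape[0])
    while True:
        Q = R + gamma * (P @ V)  # Q[s, a] = R[s, a] + gamma * sum_s' P[s,a,s'] V[s']
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < eps:
            return V_new, Q.argmax(axis=1)
        V = V_new

V, policy = value_iteration(P, R)
print(policy)  # the tuned reward makes return-home (action 1) optimal when battery is low
```

With the penalty in place, the greedy policy keeps tracking in state 0 but switches to return-home in state 1; without the `R[1, 0]` change both states prefer tracking.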
{"title":"Flexible Deployment and Enforcement of Flight and Privacy Restrictions for Drone Applications","authors":"Nasos Grigoropoulos, S. Lalis","doi":"10.1109/DSN-W50199.2020.00029","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00029","url":null,"abstract":"As drones gradually become a key component of next-generation cyber-physical systems, it is important to manage them in a flexible and efficient way. At the same time, it is crucial to enforce certain restrictions, which may not only concern no-fly zones but may also limit the usage of specific sensors, especially in urban areas. To this end, we propose an open system that enables the flexible deployment and controlled execution of drone applications. On the one hand, applications come in the form of independently executable software bundles that can be deployed on whichever drones are available and satisfy the corresponding resource and flight requirements. On the other hand, suitable mechanisms are used to monitor the execution of the applications at runtime in order to check conformance to the restrictions posed by the authorities, as well as to handle related violations in an automated way. In this paper, we present the key elements of the proposed approach and describe a proof-of-concept implementation that supports most of the envisioned functionality. 
We also provide a validation of our system prototype using both a software-in-the-loop setup and a real drone in the open.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"46 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129357183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
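The runtime conformance checking described in the abstract above can be sketched in a few lines. This is a hedged illustration, not the paper's system: the zone coordinates, the circular-zone geometry, and the `monitor_step` response names are all assumptions introduced here for clarity.

```python
# Minimal sketch of runtime restriction enforcement (illustrative only):
# each monitoring tick checks the drone's reported position against
# no-fly zones and triggers an automated response on violation.

# (centre_x, centre_y, radius) in local metres -- illustrative values
NO_FLY_ZONES = [((10.0, 10.0), 5.0)]

def violates(pos, zones=NO_FLY_ZONES):
    """True if the position lies inside any restricted zone."""
    return any((pos[0] - cx) ** 2 + (pos[1] - cy) ** 2 <= r * r
               for (cx, cy), r in zones)

def monitor_step(pos):
    """One monitoring tick: return the enforcement layer's action."""
    return "hold_and_reroute" if violates(pos) else "continue"

print(monitor_step((12.0, 11.0)))  # inside the zone: automated handling kicks in
print(monitor_step((0.0, 0.0)))    # clear of all zones: application continues
```

A real enforcement layer would also gate sensor usage (e.g. disabling a camera over privacy-restricted areas), but the shape of the check is the same: predicate over telemetry, automated response on violation.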
{"title":"AI Safety Landscape From short-term specific system engineering to long-term artificial general intelligence","authors":"J. Hernández-Orallo","doi":"10.1109/DSN-W50199.2020.00023","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00023","url":null,"abstract":"AI Safety is an emerging area that integrates very different perspectives from mainstream AI, critical system engineering, dependable autonomous systems, artificial general intelligence, and many other areas concerned and occupied with building AI systems that are safe. Because of this diversity, there is an important level of disagreement in the terminology, the ontologies and the priorities of the field. The Consortium on the Landscape of AI Safety (CLAIS) is an international initiative to create a worldwide, consensus-based and generally-accepted knowledge base (online, interactive and constantly evolving) of structured subareas in AI Safety, including terminology, technologies, research gaps and opportunities, resources, people and groups working in the area, and connection with other subareas and disciplines. In this note we summarise early discussions around the initiative, the associated workshops, its current state and activities, including the body of knowledge, and how to contribute. 
On a more technical side, I will cover a few spots in the landscape, from very specific and short-term safety engineering issues appearing in specialised systems, to more long-term hazards emerging from more general and powerful intelligent systems.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"34 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114113225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Approaching certification of complex systems","authors":"Nicholas Mc Guire, Imanol Allende","doi":"10.1109/DSN-W50199.2020.00022","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00022","url":null,"abstract":"Safety being a system property and not an element property means that novel systems need to be treated as ”oneof”. Only after we gained adequate experience in context of a few (probably dozen) such complex system will common ”baseline” argument emerge. Trying to build ”out-of-context” elements certainly is either not feasible at all or would, if feasible, not simplify anything, since all possible states would need to be considered. In the case of, for example, the Linux kernel, the sheer amount of such states would completely overstrain such an approach. Applying route 3S assessment of non-compliant development while managing the extensive tailoring of measures, techniques and processes, seems to us to be the most promising path towards for safe complex systems.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115539112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On The Generation of Unrestricted Adversarial Examples","authors":"Mehrgan Khoshpasand, A. Ghorbani","doi":"10.1109/DSN-W50199.2020.00012","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00012","url":null,"abstract":"Adversarial examples are inputs designed by an adversary with the goal of fooling the machine learning models. Most of the research about adversarial examples have focused on perturbing the natural inputs with the assumption that the true label remains unchanged. Even in this limited setting and despite extensive studies in recent years, there is no defence against adversarial examples for complex tasks (e.g., ImageNet). However, for simpler tasks like handwritten digit classification, a robust model seems to be within reach. Unlike perturbation-based adversarial examples, the adversary is not limited to small norm-based perturbations in unrestricted adversarial examples. Hence, defending against unrestricted adversarial examples is a more challenging task.In this paper, we show that previous methods for generating unrestricted adversarial examples ignored a large part of the adversarial subspace. In particular, we demonstrate the bias of previous methods towards generating samples that are far inside the decision boundaries of an auxiliary classifier. We also show the similarity of the decision boundaries of an auxiliary classifier and baseline CNNs. By putting these two evidence together, we explain why adversarial examples generated by the previous approaches lack the desired transferability. Additionally, we present an efficient technique to create adversarial examples using generative adversarial networks to address this issue. We demonstrate that even the state-of-the-art MNIST classifiers are vulnerable to the adversarial examples generated with this technique. Additionally, we show that examples generated with our method are transferable. 
Accordingly, we hope that new proposed defences use this attack to evaluate the robustness of their models against unrestricted attacks.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127717799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
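The notion of samples sitting "far inside the decision boundaries of an auxiliary classifier" in the abstract above can be made concrete with a toy sketch. This is not the paper's GAN-based technique: the linear classifier, the random candidates standing in for generator output, and the margin threshold are all assumptions, chosen only to show what filtering by distance to a decision boundary looks like.

```python
import numpy as np

# Toy auxiliary classifier: sign(w . x + b) on 2-D inputs (illustrative).
w = np.array([1.0, -1.0])
b = 0.0

def boundary_distance(x):
    """Euclidean distance from x to the hyperplane w . x + b = 0."""
    return abs(w @ x + b) / np.linalg.norm(w)

# Stand-in for generator samples; a real attack would draw these from a GAN.
rng = np.random.default_rng(0)
candidates = rng.normal(size=(1000, 2))

# Keep only candidates close to the boundary -- the under-sampled region the
# paper argues earlier unrestricted-attack generators tend to miss.
near = [x for x in candidates if boundary_distance(x) < 0.1]
print(len(near), "of", len(candidates), "candidates lie near the boundary")
```

The point of the sketch is the filter, not the classifier: samples far inside a class region are confidently classified by many models alike, so only the near-boundary subspace probes where models genuinely disagree.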
{"title":"The Quantitative Risk Norm - A Proposed Tailoring of HARA for ADS","authors":"Fredrik Warg, Martin A. Skoglund, Anders Thorsén, Rolf Johansson, M. Brännström, Magnus Gyllenhammar, Martin Sanfridson","doi":"10.1109/DSN-W50199.2020.00026","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00026","url":null,"abstract":"One of the major challenges of automated driving systems (ADS) is showing that they drive safely. Key to ensuring safety is eliciting a complete set of top-level safety requirements (safety goals). This is typically done with an activity called hazard analysis and risk assessment (HARA). In this paper we argue that the HARA of ISO 26262:2018 is not directly suitable for an ADS, both because the number of relevant operational situations may be vast, and because the ability of the ADS to make decisions in order to reduce risks will affect the analysis of exposure and hazards. Instead we propose a tailoring using a quantitative risk norm (QRN) with consequence classes, where each class has a limit for the frequency within which the consequences may occur. Incident types are then defined and assigned to the consequence classes; the requirements prescribing the limits of these incident types are used as safety goals to fulfil in the implementation. 
The main benefits of the QRN approach are the ability to show completeness of safety goals, and make sure that the safety strategy is not limited by safety goals which are not formulated in a way suitable for an ADS.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127961751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
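The QRN structure described in the abstract above (consequence classes with frequency limits, incident types assigned to classes) can be sketched as a small data model. All class names, incident types, and frequency budgets below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a quantitative risk norm (illustrative numbers only):
# consequence class -> maximum tolerable frequency (events per operating hour).
QRN = {"negligible": 1e-3, "minor": 1e-5, "severe": 1e-7, "fatal": 1e-9}

# Incident types assigned to consequence classes (hypothetical examples).
INCIDENT_CLASS = {
    "hard_braking": "negligible",
    "low_speed_collision": "minor",
    "collision_with_vru": "fatal",
}

def qrn_violations(estimated_freq):
    """Return the incident types whose estimated frequency exceeds the
    budget of their assigned consequence class."""
    return [inc for inc, f in estimated_freq.items()
            if f > QRN[INCIDENT_CLASS[inc]]]

# Checking a set of estimated incident frequencies against the norm:
violations = qrn_violations({
    "hard_braking": 5e-4,
    "low_speed_collision": 2e-5,   # exceeds the 1e-5 'minor' budget
    "collision_with_vru": 1e-10,
})
print(violations)
```

Each entry of `INCIDENT_CLASS` plays the role of a safety goal: the implementation must keep that incident type's frequency within its class budget, and completeness is argued over the incident-type partition rather than over an enumeration of operational situations.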
{"title":"Online Verification through Model Checking of Medical Critical Intelligent Systems","authors":"J. Martins, R. Barbosa, Nuno Lourenço, Jacques Robin, H. Madeira","doi":"10.1109/DSN-W50199.2020.00015","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00015","url":null,"abstract":"Software systems based on Artificial Intelligence (AI) and Machine Learning (ML) are being widely adopted in various scenarios, from online shopping to medical applications. When developing these systems, one needs to take into account that they should be verifiable to make sure that they are in accordance with their requirements. In this work we propose a framework to perform online verification of ML models, through the use of model checking. In order to validate the proposal, we apply it to the medical domain to help qualify medical risk. The results reveal that we can efficiently use the framework to determine if a patient is close to the multidimensional decision boundary of a risk score model. This is particularly relevant since patients in these circumstances are the ones more likely to be misclassified. As such, our framework can be used to help medical teams make better informed decisions.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130935262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Message from the Workshops Chairs - DSN 2020","authors":"Domenico Cotroneo, C. Rotaru","doi":"10.1109/DSN-W50199.2020.00005","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00005","url":null,"abstract":"","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130682702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of a NOEL-V RISC-V SoC Targeting Space Applications","authors":"J. Andersson","doi":"10.1109/DSN-W50199.2020.00020","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00020","url":null,"abstract":"This extended abstract describes the development of a RISC-V-based System-on-Chip design targeting space applications.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128576869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}