2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W): Latest Publications

AI and Reliability Trends in Safety-Critical Autonomous Systems on Ground and Air
J. Athavale, Andrea Baldovin, Ralf Graefe, M. Paulitsch, Rafael Rosales
{"title":"AI and Reliability Trends in Safety-Critical Autonomous Systems on Ground and Air","authors":"J. Athavale, Andrea Baldovin, Ralf Graefe, M. Paulitsch, Rafael Rosales","doi":"10.1109/DSN-W50199.2020.00024","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00024","url":null,"abstract":"Safety-critical autonomous systems are becoming more powerful and more integrated to enable higher-level functionality. Modern multi-core SOCs are often the computing backbone in such systems for which safety and associated certification tasks are one of the key challenges, which can become more costly and difficult to achieve. Hence, modeling and assessment of these systems can be a formidable task. In addition, Artificial Intelligence (AI) is already being deployed in safety critical autonomous systems and Machine Learning (ML) enables the achievement of tasks in a cost-effective way.Compliance to Soft Error Rate (SER) requirements is an important element to be successful in these markets. When considering SER performance for functional safety, we need to focus on accurately modeling vulnerability factors for transient analysis based on AI and Deep Learning workloads. We also need to consider the reliability implications due to long mission times leading to high utilization factors for autonomous transport. The reliability risks due to these new use cases also need to be comprehended for modeling and mitigation and would directly impact the safety analysis for these systems. Finally, the need for telemetry for reliability, including capabilities for anomaly detection and prognostics techniques to minimize field failures is of paramount importance.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"12 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116399149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
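The role of the vulnerability and utilization factors mentioned in the abstract above can be illustrated with a back-of-the-envelope soft-error-rate estimate. This is a generic FIT-derating sketch, not the authors' model; all numbers below are invented placeholders.

```python
# Back-of-the-envelope sketch of how utilization and vulnerability factors
# enter a soft-error-rate estimate. All numbers are invented placeholders,
# not figures from the paper.

import math

RAW_FIT = 500.0          # raw soft-error rate of the device, failures per 1e9 hours
AVF = 0.2                # architectural vulnerability factor for the workload
UTILIZATION = 0.9        # fraction of time the logic is exercised (long missions)
MISSION_HOURS = 50_000   # cumulative operating time assumed for an autonomous vehicle

effective_fit = RAW_FIT * AVF * UTILIZATION
failure_rate_per_hour = effective_fit / 1e9
p_at_least_one = 1.0 - math.exp(-failure_rate_per_hour * MISSION_HOURS)

print(f"effective FIT: {effective_fit:.1f}")
print(f"P(at least one soft error over the mission): {p_at_least_one:.2%}")
```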
Reward Tuning for self-adaptive Policy in MDP based Distributed Decision-Making to ensure a Safe Mission Planning
M. Hamadouche, C. Dezan, K. Branco
{"title":"Reward Tuning for self-adaptive Policy in MDP based Distributed Decision-Making to ensure a Safe Mission Planning","authors":"M. Hamadouche, C. Dezan, K. Branco","doi":"10.1109/DSN-W50199.2020.00025","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00025","url":null,"abstract":"Markov Decision Process (MDP) becomes a standard model for sequential decision making under uncertainty. This planning gives the appropriate sequence of actions to perform the goal of the mission in an efficient way. Often a single agent makes decisions and performs a single action. However, in several fields such as robotics several actions can be executed simultaneously. Moreover, with the increase of the complexity of missions, the decomposition of an MDP into several sub-MDPs becomes necessary. The decomposition involves parallel decisions between different agents, but the execution of concurrent actions can lead to conflicts. In addition, problems due to the system and to sensor failures may appear during the mission; these can lead to negative consequences (e.g. crash of a UAV caused by the drop in battery charge). In this article, we present a new method to prevent behavior conflicts that can appear within distributed decision-making and to emphasize the action selection if needed to ensure the safety and the various requirements of the system. This method takes into consideration the different constraints due to antagonist actions and wile additionally considering some thresholds on transition functions to promote specific actions that guarantee the safety of the system. Then it automatically computes the rewards of the different MDPs related to the mission in order to establish a safe planning. We validate this method on a case study of UAV mission such as a tracking mission. From the list of the constraints identified for the mission, the rewards of the MDPs are recomputed in order to avoid all potential conflicts and violation of constraints related to the safety of the system, thereby ensuring a safe specification of the mission.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134417244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
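The reward-tuning idea summarised above, recomputing MDP rewards so that actions violating a safety constraint are never chosen by the optimal policy, can be illustrated with a minimal value-iteration sketch. The toy UAV states, actions, transition probabilities and penalty value below are hypothetical and only illustrate the principle; they are not taken from the paper.

```python
# Minimal value-iteration sketch: a safety constraint is enforced by
# overriding ("tuning") the reward of the violating state/action pair with
# a large penalty, so the optimal policy avoids it.
# Toy model: a UAV that can keep tracking a target or return home.

GAMMA = 0.9
STATES = ["track", "low_battery", "landed"]
ACTIONS = ["follow_target", "return_home"]

# P[s][a] = list of (next_state, probability); R[s][a] = nominal reward
P = {
    "track":       {"follow_target": [("track", 0.8), ("low_battery", 0.2)],
                    "return_home":   [("landed", 1.0)]},
    "low_battery": {"follow_target": [("low_battery", 1.0)],
                    "return_home":   [("landed", 1.0)]},
    "landed":      {"follow_target": [("landed", 1.0)],
                    "return_home":   [("landed", 1.0)]},
}
R = {
    "track":       {"follow_target": 5.0, "return_home": 0.0},
    "low_battery": {"follow_target": 5.0, "return_home": 1.0},
    "landed":      {"follow_target": 0.0, "return_home": 0.0},
}

# Hypothetical safety constraint: never keep tracking on low battery.
# "Reward tuning" here is simply a large penalty on the violating pair.
for s, a in {("low_battery", "follow_target")}:
    R[s][a] = -100.0

def value_iteration(eps=1e-6):
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            best = max(R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])
                       for a in ACTIONS)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    # greedy policy with respect to the converged value function
    return {s: max(ACTIONS,
                   key=lambda a: R[s][a] +
                   GAMMA * sum(p * V[s2] for s2, p in P[s][a]))
            for s in STATES}

policy = value_iteration()
print(policy)  # 'return_home' is selected in 'low_battery' despite the tracking reward
```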
Flexible Deployment and Enforcement of Flight and Privacy Restrictions for Drone Applications
Nasos Grigoropoulos, S. Lalis
{"title":"Flexible Deployment and Enforcement of Flight and Privacy Restrictions for Drone Applications","authors":"Nasos Grigoropoulos, S. Lalis","doi":"10.1109/DSN-W50199.2020.00029","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00029","url":null,"abstract":"As drones gradually become a key component of next-generation cyber-physical systems, it is important to manage them in a flexible and efficient way. At the same time, it is crucial to enforce certain restrictions, which may not only concern no-fly zones but may also limit the usage of specific sensors, especially in urban areas. To this end, we propose an open system that enables the flexible deployment and controlled execution of drone applications. On the one hand, applications come in the form of independently executable software bundles that can be deployed on whichever drones are available and satisfy the corresponding resource and flight requirements. On the other hand, suitable mechanisms are used to monitor the execution of the applications at runtime in order to check conformance to the restrictions posed by the authorities, as well as to handle related violations in an automated way. In this paper, we present the key elements of the proposed approach and describe a proof-of-concept implementation that supports most of the envisioned functionality. We also provide a validation of our system prototype using both a software-in-the-loop setup and a real drone in the open.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"46 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129357183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
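A minimal sketch of the kind of runtime restriction check described above (no-fly zones plus sensor-usage limits) follows. The zone geometry, coordinates and policy format are invented for illustration and are not the paper's actual enforcement mechanism.

```python
# Toy sketch of runtime restriction checking for a drone application:
# a circular no-fly zone plus a camera ban inside an urban area.
# Zones, coordinates and the policy format are invented for illustration.

import math

NO_FLY_ZONES = [        # (centre_x, centre_y, radius) in metres, local frame
    (1200.0, 800.0, 300.0),
]
CAMERA_BAN_ZONES = [
    (500.0, 500.0, 400.0),
]

def inside(zone, x, y):
    cx, cy, r = zone
    return math.hypot(x - cx, y - cy) <= r

def check_restrictions(x, y, camera_on):
    """Return the list of restrictions violated at the current position."""
    violations = []
    if any(inside(z, x, y) for z in NO_FLY_ZONES):
        violations.append("inside no-fly zone")
    if camera_on and any(inside(z, x, y) for z in CAMERA_BAN_ZONES):
        violations.append("camera active in restricted area")
    return violations

print(check_restrictions(x=520.0, y=480.0, camera_on=True))
# -> ['camera active in restricted area']
```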
AI Safety Landscape From short-term specific system engineering to long-term artificial general intelligence
J. Hernández-Orallo
{"title":"AI Safety Landscape From short-term specific system engineering to long-term artificial general intelligence","authors":"J. Hernández-Orallo","doi":"10.1109/DSN-W50199.2020.00023","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00023","url":null,"abstract":"AI Safety is an emerging area that integrates very different perspectives from mainstream AI, critical system engineering, dependable autonomous systems, artificial general intelligence, and many other areas concerned and occupied with building AI systems that are safe. Because of this diversity, there is an important level of disagreement in the terminology, the ontologies and the priorities of the field. The Consortium on the Landscape of AI Safety (CLAIS) is an international initiative to create a worldwide, consensus-based and generally-accepted knowledge base (online, interactive and constantly evolving) of structured subareas in AI Safety, including terminology, technologies, research gaps and opportunities, resources, people and groups working in the area, and connection with other subareas and disciplines. In this note we summarise early discussions around the initiative, the associated workshops, its current state and activities, including the body of knowledge, and how to contribute. On a more technical side, I will cover a few spots in the landscape, from very specific and short-term safety engineering issues appearing in specialised systems, to more long-term hazards emerging from more general and powerful intelligent systems.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"34 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114113225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Approaching certification of complex systems
Nicholas Mc Guire, Imanol Allende
{"title":"Approaching certification of complex systems","authors":"Nicholas Mc Guire, Imanol Allende","doi":"10.1109/DSN-W50199.2020.00022","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00022","url":null,"abstract":"Safety being a system property and not an element property means that novel systems need to be treated as ”oneof”. Only after we gained adequate experience in context of a few (probably dozen) such complex system will common ”baseline” argument emerge. Trying to build ”out-of-context” elements certainly is either not feasible at all or would, if feasible, not simplify anything, since all possible states would need to be considered. In the case of, for example, the Linux kernel, the sheer amount of such states would completely overstrain such an approach. Applying route 3S assessment of non-compliant development while managing the extensive tailoring of measures, techniques and processes, seems to us to be the most promising path towards for safe complex systems.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115539112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
On The Generation of Unrestricted Adversarial Examples
Mehrgan Khoshpasand, A. Ghorbani
{"title":"On The Generation of Unrestricted Adversarial Examples","authors":"Mehrgan Khoshpasand, A. Ghorbani","doi":"10.1109/DSN-W50199.2020.00012","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00012","url":null,"abstract":"Adversarial examples are inputs designed by an adversary with the goal of fooling the machine learning models. Most of the research about adversarial examples have focused on perturbing the natural inputs with the assumption that the true label remains unchanged. Even in this limited setting and despite extensive studies in recent years, there is no defence against adversarial examples for complex tasks (e.g., ImageNet). However, for simpler tasks like handwritten digit classification, a robust model seems to be within reach. Unlike perturbation-based adversarial examples, the adversary is not limited to small norm-based perturbations in unrestricted adversarial examples. Hence, defending against unrestricted adversarial examples is a more challenging task.In this paper, we show that previous methods for generating unrestricted adversarial examples ignored a large part of the adversarial subspace. In particular, we demonstrate the bias of previous methods towards generating samples that are far inside the decision boundaries of an auxiliary classifier. We also show the similarity of the decision boundaries of an auxiliary classifier and baseline CNNs. By putting these two evidence together, we explain why adversarial examples generated by the previous approaches lack the desired transferability. Additionally, we present an efficient technique to create adversarial examples using generative adversarial networks to address this issue. We demonstrate that even the state-of-the-art MNIST classifiers are vulnerable to the adversarial examples generated with this technique. Additionally, we show that examples generated with our method are transferable. Accordingly, we hope that new proposed defences use this attack to evaluate the robustness of their models against unrestricted attacks.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127717799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
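One common way to search a GAN's latent space for unrestricted adversarial examples, in the spirit of the technique described above, is sketched below. The tiny untrained networks stand in for a trained generator and a trained MNIST classifier; this is an illustrative sketch under those assumptions, not the authors' implementation.

```python
# Sketch: search a generator's latent space for a sample that a classifier
# assigns to a chosen target class. The untrained networks below are
# stand-ins for a trained GAN generator G and a trained classifier f.

import torch
import torch.nn as nn

LATENT_DIM, N_CLASSES = 64, 10

G = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                  nn.Linear(256, 28 * 28), nn.Tanh())   # generator stub
f = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(),
                  nn.Linear(128, N_CLASSES))             # classifier stub

def latent_attack(target_class, steps=200, lr=0.05):
    """Gradient ascent in latent space on the target-class log-probability."""
    z = torch.randn(1, LATENT_DIM, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        logits = f(G(z))
        # negative log-softmax of the target class; minimising it pushes
        # the generated sample towards that class
        loss = logits.logsumexp(dim=1)[0] - logits[0, target_class]
        opt.zero_grad()
        loss.backward()
        opt.step()
    x_adv = G(z).detach()
    return x_adv, f(x_adv).argmax(dim=1).item()

x_adv, predicted = latent_attack(target_class=3)
print("classifier prediction for generated sample:", predicted)
```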
The Quantitative Risk Norm - A Proposed Tailoring of HARA for ADS
Fredrik Warg, Martin A. Skoglund, Anders Thorsén, Rolf Johansson, M. Brännström, Magnus Gyllenhammar, Martin Sanfridson
{"title":"The Quantitative Risk Norm - A Proposed Tailoring of HARA for ADS","authors":"Fredrik Warg, Martin A. Skoglund, Anders Thorsén, Rolf Johansson, M. Brännström, Magnus Gyllenhammar, Martin Sanfridson","doi":"10.1109/DSN-W50199.2020.00026","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00026","url":null,"abstract":"One of the major challenges of automated driving systems (ADS) is showing that they drive safely. Key to ensuring safety is eliciting a complete set of top-level safety requirements (safety goals). This is typically done with an activity called hazard analysis and risk assessment (HARA). In this paper we argue that the HARA of ISO 26262:2018 is not directly suitable for an ADS, both because the number of relevant operational situations may be vast, and because the ability of the ADS to make decisions in order to reduce risks will affect the analysis of exposure and hazards. Instead we propose a tailoring using a quantitative risk norm (QRN) with consequence classes, where each class has a limit for the frequency within which the consequences may occur. Incident types are then defined and assigned to the consequence classes; the requirements prescribing the limits of these incident types are used as safety goals to fulfil in the implementation. The main benefits of the QRN approach are the ability to show completeness of safety goals, and make sure that the safety strategy is not limited by safety goals which are not formulated in a way suitable for an ADS.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127961751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
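The consequence-class bookkeeping described above can be made concrete with a small sketch: each class carries a frequency budget, incident types are assigned to classes, and the summed incident frequencies are checked against the budgets. The class names and all numbers below are invented and are not taken from the paper or from ISO 26262.

```python
# Illustrative sketch of a quantitative risk norm (QRN): consequence classes
# with frequency budgets, and incident types assigned to them.
# Class names and numbers are invented placeholders.

from dataclasses import dataclass

@dataclass(frozen=True)
class ConsequenceClass:
    name: str
    max_events_per_hour: float   # frequency budget for the whole class

@dataclass(frozen=True)
class IncidentType:
    description: str
    consequence_class: str
    estimated_events_per_hour: float

CLASSES = {
    "C1": ConsequenceClass("minor injury", 1e-5),
    "C2": ConsequenceClass("severe injury", 1e-7),
    "C3": ConsequenceClass("fatality", 1e-9),
}

INCIDENTS = [
    IncidentType("low-speed collision with object", "C1", 4e-6),
    IncidentType("rear-end collision at speed", "C2", 5e-8),
    IncidentType("collision with vulnerable road user", "C3", 6e-10),
]

def check_budgets():
    """Sum incident frequencies per class and compare with the class limit."""
    totals = {}
    for inc in INCIDENTS:
        totals[inc.consequence_class] = (
            totals.get(inc.consequence_class, 0.0)
            + inc.estimated_events_per_hour)
    for cls_id, total in totals.items():
        limit = CLASSES[cls_id].max_events_per_hour
        status = "OK" if total <= limit else "BUDGET EXCEEDED"
        print(f"{cls_id} ({CLASSES[cls_id].name}): {total:.1e} / {limit:.1e} {status}")

check_budgets()
```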
Online Verification through Model Checking of Medical Critical Intelligent Systems
J. Martins, R. Barbosa, Nuno Lourenço, Jacques Robin, H. Madeira
{"title":"Online Verification through Model Checking of Medical Critical Intelligent Systems","authors":"J. Martins, R. Barbosa, Nuno Lourenço, Jacques Robin, H. Madeira","doi":"10.1109/DSN-W50199.2020.00015","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00015","url":null,"abstract":"Software systems based on Artificial Intelligence (AI) and Machine Learning (ML) are being widely adopted in various scenarios, from online shopping to medical applications. When developing these systems, one needs to take into account that they should be verifiable to make sure that they are in accordance with their requirements. In this work we propose a framework to perform online verification of ML models, through the use of model checking. In order to validate the proposal, we apply it to the medical domain to help qualify medical risk. The results reveal that we can efficiently use the framework to determine if a patient is close to the multidimensional decision boundary of a risk score model. This is particularly relevant since patients in these circumstances are the ones more likely to be misclassified. As such, our framework can be used to help medical teams make better informed decisions.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130935262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
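The boundary-proximity question posed above (is a patient close to the decision boundary of a risk score model?) can be illustrated with a brute-force sketch over measurement uncertainty. The score, weights, threshold and uncertainty ranges are invented; the paper applies model checking rather than the naive enumeration shown here.

```python
# Minimal sketch of the boundary-proximity idea: given a simple additive
# risk score with a decision threshold, check whether plausible measurement
# uncertainty could flip the patient's classification.
# Weights, threshold and uncertainty ranges are invented placeholders.

from itertools import product

WEIGHTS = {"age": 0.04, "systolic_bp": 0.02, "heart_rate": 0.03}
THRESHOLD = 8.4
UNCERTAINTY = {"age": 0.0, "systolic_bp": 5.0, "heart_rate": 3.0}  # +/- ranges

def risk_score(patient):
    return sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)

def near_boundary(patient):
    """True if some combination of worst-case measurement errors changes
    which side of the threshold the score falls on."""
    base_high = risk_score(patient) >= THRESHOLD
    for signs in product((-1, 0, 1), repeat=len(UNCERTAINTY)):
        perturbed = {k: patient[k] + s * UNCERTAINTY[k]
                     for k, s in zip(UNCERTAINTY, signs)}
        if (risk_score(perturbed) >= THRESHOLD) != base_high:
            return True
    return False

patient = {"age": 70, "systolic_bp": 150, "heart_rate": 88}
print("score:", round(risk_score(patient), 2),
      "near boundary:", near_boundary(patient))
```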
Message from the Workshops Chairs - DSN 2020
Domenico Cotroneo, C. Rotaru
{"title":"Message from the Workshops Chairs - DSN 2020","authors":"Domenico Cotroneo, C. Rotaru","doi":"10.1109/DSN-W50199.2020.00005","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00005","url":null,"abstract":"","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130682702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Development of a NOEL-V RISC-V SoC Targeting Space Applications
J. Andersson
{"title":"Development of a NOEL-V RISC-V SoC Targeting Space Applications","authors":"J. Andersson","doi":"10.1109/DSN-W50199.2020.00020","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00020","url":null,"abstract":"This extended abstract describes the development of a RISC-V-based System-on-Chip design targeting space applications.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128576869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12