On safety, assurance, and reliability: a software engineering perspective (keynote)

M. Chechik
{"title":"On safety, assurance, and reliability: a software engineering perspective (keynote)","authors":"M. Chechik","doi":"10.1145/3540250.3569443","DOIUrl":null,"url":null,"abstract":"From financial services platforms to social networks to vehicle control, software has come to mediate many activities of daily life. Governing bodies and standards organizations have responded to this trend by creating regulations and standards to address issues such as safety, security and privacy. In this environment, the compliance of software development to standards and regulations has emerged as a key requirement. Compliance claims and arguments are often captured in assurance cases, with linked evidence of compliance. Evidence can come from test cases, verification proofs, human judgement, or a combination of these. That is, we try to build (safety-critical) systems carefully according to well justified methods and articulate these justifications in an assurance case that is ultimately judged by a human. Building safety arguments for traditional software systems is difficult — they are lengthy and expensive to maintain, especially as software undergoes change. Safety is also notoriously noncompositional — each subsystem might be safe but together they may create unsafe behaviors. It is also easy to miss cases, which in the simplest case would mean developing an argument for when a condition is true but missing arguing for a false condition. Furthermore, many ML-based systems are becoming safety-critical. For example, recent Tesla self-driving cars misclassified emergency vehicles and caused multiple crashes. ML-based systems typically do not have precisely specified and machine-verifiable requirements. While some safety requirements can be stated clearly: “the system should detect all pedestrians at a crossing”, these requirements are for the entire system, making them too high-level for safety analysis of individual components. Thus, systems with ML components (MLCs) add a significant layer of complexity for safety assurance. I argue that safety assurance should be an integral part of building safe and reliable software systems, but this process needs support from advanced software engineering and software analysis. In this talk, I outline a few approaches for development of principled, tool-supported methodologies for creating and managing assurance arguments. I then describe some of the recent work on specifying and verifying reliability requirements for machine-learned components in safety-critical domains.","PeriodicalId":68155,"journal":{"name":"软件产业与工程","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"软件产业与工程","FirstCategoryId":"1089","ListUrlMain":"https://doi.org/10.1145/3540250.3569443","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

From financial services platforms to social networks to vehicle control, software has come to mediate many activities of daily life. Governing bodies and standards organizations have responded to this trend by creating regulations and standards to address issues such as safety, security, and privacy. In this environment, the compliance of software development with standards and regulations has emerged as a key requirement. Compliance claims and arguments are often captured in assurance cases, with linked evidence of compliance. Evidence can come from test cases, verification proofs, human judgement, or a combination of these. That is, we try to build (safety-critical) systems carefully according to well-justified methods and articulate those justifications in an assurance case that is ultimately judged by a human. Building safety arguments for traditional software systems is difficult — they are lengthy and expensive to maintain, especially as software undergoes change. Safety is also notoriously noncompositional — each subsystem might be safe on its own, yet together the subsystems may create unsafe behaviors. It is also easy to miss cases; in the simplest instance, this means developing an argument for when a condition is true but failing to argue the case when it is false. Furthermore, many ML-based systems are becoming safety-critical. For example, Tesla self-driving cars have recently misclassified emergency vehicles, causing multiple crashes. ML-based systems typically do not have precisely specified, machine-verifiable requirements. While some safety requirements can be stated clearly (e.g., "the system should detect all pedestrians at a crossing"), such requirements apply to the entire system, making them too high-level for safety analysis of individual components. Thus, systems with ML components (MLCs) add a significant layer of complexity to safety assurance. I argue that safety assurance should be an integral part of building safe and reliable software systems, but this process needs support from advanced software engineering and software analysis. In this talk, I outline a few approaches for developing principled, tool-supported methodologies for creating and managing assurance arguments. I then describe some recent work on specifying and verifying reliability requirements for machine-learned components in safety-critical domains.
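Assurance cases of the kind described above are typically structured as a tree of claims, with arguments decomposing each claim and evidence linked at the leaves. As a rough illustration only (not taken from the talk), the minimal sketch below models such a claim tree and flags undeveloped claims, including the "missed false-condition" pitfall the abstract mentions; all class names, claims, and evidence labels are hypothetical.

```python
# Hypothetical sketch of an assurance-case fragment: a claim tree in the
# spirit of GSN (Goal Structuring Notation), with evidence linked to leaf
# claims. All names and data here are illustrative, not from the talk.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    evidence: list = field(default_factory=list)   # e.g. test reports, proofs, reviews
    subclaims: list = field(default_factory=list)  # decomposition of the argument

    def undeveloped(self):
        """Return leaf claims with no evidence -- gaps an assessor must flag."""
        if not self.subclaims:
            return [] if self.evidence else [self]
        gaps = []
        for c in self.subclaims:
            gaps.extend(c.undeveloped())
        return gaps

# The "missed case" pitfall from the abstract: arguing only the true branch
# of a condition and forgetting the false branch.
root = Claim("Braking subsystem is acceptably safe", subclaims=[
    Claim("Safe when an obstacle is detected", evidence=["test suite T1", "proof P1"]),
    Claim("Safe when an obstacle is NOT detected"),  # no evidence yet -- a gap
])
for gap in root.undeveloped():
    print("Undeveloped claim:", gap.text)
```

A tool walking such a structure can mechanically surface unsupported claims, though (as the abstract notes) the overall argument is still ultimately judged by a human.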
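For the component-level reliability requirements mentioned at the end of the abstract, one plausible formalization (an assumption for illustration, not the talk's method) is statistical: require that an MLC's measured success rate clears a threshold with a stated confidence. The sketch below checks a hypothetical pedestrian-detector recall requirement using a one-sided Wilson lower confidence bound; the threshold, confidence level, and counts are invented.

```python
# Hypothetical sketch: checking a component-level reliability requirement
# for an ML component, e.g. "the pedestrian detector achieves recall >= 0.99
# on crossing scenes, with 95% confidence". Numbers are illustrative only.
import math

def wilson_lower_bound(successes: int, trials: int, z: float = 1.645) -> float:
    """One-sided 95% Wilson lower confidence bound on a success rate."""
    if trials == 0:
        return 0.0
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = p + z**2 / (2 * trials)
    margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (centre - margin) / denom

# Suppose 4980 of 5000 labelled pedestrians were detected in held-out scenes.
lb = wilson_lower_bound(4980, 5000)
required_recall = 0.99
print(f"recall lower bound = {lb:.4f}: "
      f"{'requirement met' if lb >= required_recall else 'requirement NOT met'}")
```

The point of such a decomposition is that a probabilistic, component-level spec can be tested and re-verified as the MLC is retrained, whereas the system-level requirement ("detect all pedestrians") cannot be checked against one component in isolation.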