{"title":"自主系统中机器学习安全保障的敏捷开发(AgileAMLAS)","authors":"Victoria J. Hodge, Matt Osborne","doi":"10.1016/j.array.2025.100482","DOIUrl":null,"url":null,"abstract":"<div><div>Recent advances in ML have enabled the development of autonomous cyber–physical systems for a broad range of applications. Using ML, these autonomous systems are able to learn, adapt, and operate with no human intervention. However, this autonomous operation poses a problem when proving that they are acceptably safe. Designers and engineers have traditionally used ‘Waterfall’ or V-model development lifecycles to develop safe systems, but ML engineering requires iteration and adaptation. Iterative development necessitates enhanced lifecycles, augmented methodologies, and the need to systematically integrate rigorous safety assurance with ML development and operation activities. In this paper, we introduce a novel lifecycle, and comprehensive methodology for safely developing, operating, and assuring autonomous systems which use ML. The lifecycle combines Agile software engineering, ML engineering, and a safety engineering framework using iterative and incremental development. This paper provides systematic step-by-step guidelines for developing and deploying ML for autonomous systems using DevOps and MLOps, and for generating compelling safety cases. We have developed and refined our methodology on a recent set of projects undertaken to develop autonomous robots across a variety of domains.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"27 ","pages":"Article 100482"},"PeriodicalIF":4.5000,"publicationDate":"2025-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Agile Development for Safety Assurance of Machine Learning in Autonomous Systems (AgileAMLAS)\",\"authors\":\"Victoria J. Hodge, Matt Osborne\",\"doi\":\"10.1016/j.array.2025.100482\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Recent advances in ML have enabled the development of autonomous cyber–physical systems for a broad range of applications. Using ML, these autonomous systems are able to learn, adapt, and operate with no human intervention. However, this autonomous operation poses a problem when proving that they are acceptably safe. Designers and engineers have traditionally used ‘Waterfall’ or V-model development lifecycles to develop safe systems, but ML engineering requires iteration and adaptation. Iterative development necessitates enhanced lifecycles, augmented methodologies, and the need to systematically integrate rigorous safety assurance with ML development and operation activities. In this paper, we introduce a novel lifecycle, and comprehensive methodology for safely developing, operating, and assuring autonomous systems which use ML. The lifecycle combines Agile software engineering, ML engineering, and a safety engineering framework using iterative and incremental development. This paper provides systematic step-by-step guidelines for developing and deploying ML for autonomous systems using DevOps and MLOps, and for generating compelling safety cases. 
We have developed and refined our methodology on a recent set of projects undertaken to develop autonomous robots across a variety of domains.</div></div>\",\"PeriodicalId\":8417,\"journal\":{\"name\":\"Array\",\"volume\":\"27 \",\"pages\":\"Article 100482\"},\"PeriodicalIF\":4.5000,\"publicationDate\":\"2025-08-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Array\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2590005625001092\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Array","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2590005625001092","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Agile Development for Safety Assurance of Machine Learning in Autonomous Systems (AgileAMLAS)
Recent advances in ML have enabled the development of autonomous cyber–physical systems for a broad range of applications. Using ML, these autonomous systems are able to learn, adapt, and operate with no human intervention. However, this autonomous operation makes it difficult to prove that such systems are acceptably safe. Designers and engineers have traditionally used ‘Waterfall’ or V-model development lifecycles to develop safe systems, but ML engineering requires iteration and adaptation. Iterative development necessitates enhanced lifecycles, augmented methodologies, and the systematic integration of rigorous safety assurance with ML development and operation activities. In this paper, we introduce a novel lifecycle and comprehensive methodology for safely developing, operating, and assuring autonomous systems that use ML. The lifecycle combines Agile software engineering, ML engineering, and a safety engineering framework using iterative and incremental development. This paper provides systematic step-by-step guidelines for developing and deploying ML for autonomous systems using DevOps and MLOps, and for generating compelling safety cases. We have developed and refined our methodology on a recent set of projects undertaken to develop autonomous robots across a variety of domains.