Changjoo Lee , Simon Schätzle , Stefan Andreas Lang , Timo Oksanen
Smart Agricultural Technology, Volume 12, Article 101102. Published 2025-06-10. DOI: 10.1016/j.atech.2025.101102
https://www.sciencedirect.com/science/article/pii/S2772375525003351
Journal Impact Factor: 5.7; JCR Q1 (Agricultural Engineering).
Monitoring runtime input data distribution for the safety of the intended functionality in perception systems
Safe and reliable environmental perception is essential for the highly automated or even autonomous operation of agricultural machines. However, developing a functionally safe and reliable AI-powered perception system is challenging, especially in safety-critical applications, due to the nature of AI technologies. This article is motivated by the need to constrain an AI-powered perception system to work within a predefined safe envelope, ensuring that the acceptable behaviour of the AI technology is maintained. The acceptable behaviour of AI technology is assessed based on the distribution of its training data; however, verifying the model’s performance becomes challenging when it encounters unseen, out-of-distribution input data. This article proposes an image quality safety model (IQSM) that estimates the confidence in the safety of the intended functionality for a runtime input image within a perception system, even when faced with unseen, out-of-distribution runtime input images. If the confidence level falls below the “minimum performance threshold” required for safe operation, the IQSM flags the intended functionality as unsafe for performing highly automated operations. On a test set of 1,592 images comprising clear, dirty, foggy, raindrop-covered, and over-exposed images, the IQSM classified images as safe or unsafe with accuracies ranging from 97.6% to 98.9%. This demonstrates its ability to effectively detect acceptable runtime input images and ensure the acceptable behaviour of an intended function in real-world scenarios. The IQSM can prevent malfunctions in perception systems, such as failing to detect obstacles due to adverse weather conditions. It facilitates the integration of fail-safe architectures across various applications, including highly automated agricultural machinery, thereby contributing to the safety and reliability of the intended functionality.
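The threshold-gating idea in the abstract — estimate a confidence score for each runtime input image and disable the intended functionality when it falls below a minimum performance threshold — can be sketched as follows. This is a minimal illustration only: the confidence function below uses simple brightness/contrast heuristics as a stand-in, whereas the paper's IQSM is a learned model assessed against the training-data distribution; the threshold value, function names, and scoring formula here are all hypothetical.

```python
import numpy as np

# Hypothetical threshold; the paper's "minimum performance threshold"
# would be derived from safety requirements, not chosen ad hoc.
MIN_PERFORMANCE_THRESHOLD = 0.7

def iqsm_confidence(image: np.ndarray) -> float:
    """Toy stand-in for the IQSM confidence estimate in [0, 1].

    Uses brightness/contrast heuristics only: over-exposure pushes mean
    brightness toward 1, while fog or lens dirt reduces contrast. The
    real IQSM is a learned estimator, not this formula.
    """
    gray = image.mean(axis=-1) if image.ndim == 3 else image
    brightness = gray.mean() / 255.0   # 0.5 ~ well exposed
    contrast = gray.std() / 128.0      # low value ~ foggy/dirty/washed out
    exposure_score = 1.0 - abs(brightness - 0.5) * 2.0
    contrast_score = min(contrast, 1.0)
    score = 0.5 * exposure_score + 0.5 * contrast_score
    return max(0.0, min(1.0, score))

def is_safe(image: np.ndarray) -> bool:
    """Gate the intended functionality on the runtime input image."""
    return iqsm_confidence(image) >= MIN_PERFORMANCE_THRESHOLD
```

In a fail-safe architecture, `is_safe` returning `False` would trigger a transition to a degraded or safe state (e.g. stopping the machine) rather than letting the perception model run on an out-of-distribution image.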