{"title":"从短期的特定系统工程到长期的人工通用智能","authors":"J. Hernández-Orallo","doi":"10.1109/DSN-W50199.2020.00023","DOIUrl":null,"url":null,"abstract":"AI Safety is an emerging area that integrates very different perspectives from mainstream AI, critical system engineering, dependable autonomous systems, artificial general intelligence, and many other areas concerned and occupied with building AI systems that are safe. Because of this diversity, there is an important level of disagreement in the terminology, the ontologies and the priorities of the field. The Consortium on the Landscape of AI Safety (CLAIS) is an international initiative to create a worldwide, consensus-based and generally-accepted knowledge base (online, interactive and constantly evolving) of structured subareas in AI Safety, including terminology, technologies, research gaps and opportunities, resources, people and groups working in the area, and connection with other subareas and disciplines. In this note we summarise early discussions around the initiative, the associated workshops, its current state and activities, including the body of knowledge, and how to contribute. On a more technical side, I will cover a few spots in the landscape, from very specific and short-term safety engineering issues appearing in specialised systems, to more long-term hazards emerging from more general and powerful intelligent systems.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"34 9","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"AI Safety Landscape From short-term specific system engineering to long-term artificial general intelligence\",\"authors\":\"J. 
Hernández-Orallo\",\"doi\":\"10.1109/DSN-W50199.2020.00023\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"AI Safety is an emerging area that integrates very different perspectives from mainstream AI, critical system engineering, dependable autonomous systems, artificial general intelligence, and many other areas concerned and occupied with building AI systems that are safe. Because of this diversity, there is an important level of disagreement in the terminology, the ontologies and the priorities of the field. The Consortium on the Landscape of AI Safety (CLAIS) is an international initiative to create a worldwide, consensus-based and generally-accepted knowledge base (online, interactive and constantly evolving) of structured subareas in AI Safety, including terminology, technologies, research gaps and opportunities, resources, people and groups working in the area, and connection with other subareas and disciplines. In this note we summarise early discussions around the initiative, the associated workshops, its current state and activities, including the body of knowledge, and how to contribute. 
On a more technical side, I will cover a few spots in the landscape, from very specific and short-term safety engineering issues appearing in specialised systems, to more long-term hazards emerging from more general and powerful intelligent systems.\",\"PeriodicalId\":427687,\"journal\":{\"name\":\"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)\",\"volume\":\"34 9\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DSN-W50199.2020.00023\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DSN-W50199.2020.00023","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
AI Safety Landscape: From short-term specific system engineering to long-term artificial general intelligence
AI Safety is an emerging area that integrates very different perspectives from mainstream AI, critical system engineering, dependable autonomous systems, artificial general intelligence, and many other fields concerned with building AI systems that are safe. Because of this diversity, there is significant disagreement over the terminology, the ontologies and the priorities of the field. The Consortium on the Landscape of AI Safety (CLAIS) is an international initiative to create a worldwide, consensus-based and generally accepted knowledge base (online, interactive and constantly evolving) of structured subareas in AI Safety, covering terminology, technologies, research gaps and opportunities, resources, the people and groups working in the area, and connections with other subareas and disciplines. In this note we summarise the early discussions around the initiative, the associated workshops, its current state and activities, including the body of knowledge, and how to contribute. On the more technical side, I will cover a few spots in the landscape, from very specific, short-term safety engineering issues that appear in specialised systems, to longer-term hazards that emerge from more general and powerful intelligent systems.