Summary of the 1st Interpretability and Robustness in Neural Software Engineering (InteNSE 2023)

Reyhaneh Jabbarvand, Saeid Tizpaz-Niari, Earl T. Barr, Satish Chandra
{"title":"第一届神经软件工程中的可解释性和鲁棒性(InteNSE 2023)概要","authors":"Reyhaneh Jabbarvand, Saeid Tizpaz-Niari, Earl T. Barr, Satish Chandra","doi":"10.1145/3635439.3635446","DOIUrl":null,"url":null,"abstract":"InteNSE is an interdisciplinary workshop for research at the intersection of Machine Learning (ML) and Software Engineering (SE) and would be a pioneer in emphasizing the implicit properties of neural software engineering and analysis. Due to recent computational advancements, ML has become an inseparable part of the SE research community. ML can indeed improve and revolutionize many SE tasks. However, most research in the AI and SE communities consider ML as a closed box, i.e., only considering the final performance of the developed models as an evaluation metric. Ignoring the implicit properties of neural models, such as interpretability and robustness, one cannot validate the model's actual performance, generalizability, and whether it is learning what it is supposed to do. Specifically, in the domain of SE, where the result of ML4SE tools is code synthesis, bug finding, or repair, interpretability and robustness are crucial to ensure the reliability of the products.","PeriodicalId":432885,"journal":{"name":"ACM SIGSOFT Software Engineering Notes","volume":"46 1","pages":"30 - 33"},"PeriodicalIF":0.0000,"publicationDate":"2023-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Summary of the 1st Interpretability and Robustness in Neural Software Engineering (InteNSE 2023)\",\"authors\":\"Reyhaneh Jabbarvand, Saeid Tizpaz-Niari, Earl T. Barr, Satish Chandra\",\"doi\":\"10.1145/3635439.3635446\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"InteNSE is an interdisciplinary workshop for research at the intersection of Machine Learning (ML) and Software Engineering (SE) and would be a pioneer in emphasizing the implicit properties of neural software engineering and analysis. Due to recent computational advancements, ML has become an inseparable part of the SE research community. ML can indeed improve and revolutionize many SE tasks. However, most research in the AI and SE communities consider ML as a closed box, i.e., only considering the final performance of the developed models as an evaluation metric. Ignoring the implicit properties of neural models, such as interpretability and robustness, one cannot validate the model's actual performance, generalizability, and whether it is learning what it is supposed to do. 
Specifically, in the domain of SE, where the result of ML4SE tools is code synthesis, bug finding, or repair, interpretability and robustness are crucial to ensure the reliability of the products.\",\"PeriodicalId\":432885,\"journal\":{\"name\":\"ACM SIGSOFT Software Engineering Notes\",\"volume\":\"46 1\",\"pages\":\"30 - 33\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-12-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM SIGSOFT Software Engineering Notes\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3635439.3635446\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM SIGSOFT Software Engineering Notes","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3635439.3635446","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
InteNSE is an interdisciplinary workshop for research at the intersection of Machine Learning (ML) and Software Engineering (SE), and a pioneer in emphasizing the implicit properties of neural software engineering and analysis. Due to recent computational advancements, ML has become an inseparable part of the SE research community, and it can indeed improve and revolutionize many SE tasks. However, most research in the AI and SE communities treats ML models as closed boxes, i.e., it considers only the final performance of the developed models as an evaluation metric. By ignoring implicit properties of neural models, such as interpretability and robustness, one cannot validate a model's actual performance, its generalizability, or whether it is learning what it is supposed to learn. This matters especially in SE, where the output of ML4SE tools is synthesized code, a reported bug, or a repair, and where interpretability and robustness are crucial to ensure the reliability of the resulting products.
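To make the robustness concern concrete, the following is a minimal sketch (illustrative only, not an artifact of the workshop) of one common check for ML4SE models: apply a semantic-preserving transformation to an input program and verify that the model's output does not change. The predict callable and the naive token-level renamer are hypothetical stand-ins; a production tool would rewrite the program's AST rather than its text.

    import re

    def rename_variable(source: str, old: str, new: str) -> str:
        # Naive token-level rename; a real tool would rewrite the AST.
        return re.sub(rf"\b{re.escape(old)}\b", new, source)

    def is_robust(predict, source: str, old: str, new: str) -> bool:
        # The model is robust on this input if a semantic-preserving
        # rename leaves its prediction unchanged.
        return predict(source) == predict(rename_variable(source, old, new))

    # Example: a deliberately brittle "model" that keys on a variable name,
    # so the check catches it.
    snippet = "def area(r):\n    pi = 3.14159\n    return pi * r * r"
    brittle = lambda src: "math" if "pi" in src else "other"
    print(is_robust(brittle, snippet, "pi", "k"))  # False: the rename flips the label

A model that passes such checks across many transformations (renaming, statement reordering, dead-code insertion) gives some evidence that it has learned program semantics rather than surface cues, which is precisely the kind of implicit property the workshop argues should complement final-performance metrics.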