Sébastien Bertrand, Silvia Ciappelloni, Pierre-Alexandre Favier, J. André
Proceedings of the 27th International Conference on Evaluation and Assessment in Software Engineering. Published 14 June 2023. DOI: 10.1145/3593434.3593488
Replication and Extension of Schnappinger’s Study on Human-level Ordinal Maintainability Prediction Based on Static Code Metrics
As part of a research project on software maintainability assessment, conducted in collaboration with a development team, we wanted to explore disagreements between developers and the confounding effect of class size. To this end, this study replicated and extended a recent study by Schnappinger et al., using the public part of its dataset together with metrics extracted by the graph-based tool Javanalyser. The entire processing pipeline was automated, from metrics extraction to the training of machine learning models. The study was extended by predicting continuous maintainability in order to take these disagreements into account. All experimental runs were then duplicated to evaluate the overall influence of class size. In the end, the original study was successfully replicated, and good performance was also achieved on continuous maintainability prediction. However, class size alone was not sufficient for fine-grained maintainability prediction. This study shows the need to explore the nature of what code metrics actually measure, and is a first step toward the construction of a maintainability model.
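The duplicated-experiment design described in the abstract, training one model on all static metrics and a second model on class size alone to isolate the confounding effect of size, can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual pipeline: the metric names, the toy data, and the 1-nearest-neighbour model are all assumptions made for the example.

```python
# Sketch of the size-confounder experiment: predict an ordinal
# maintainability label once from all static metrics, and once from
# class size (LOC) alone. Toy data and model choice are illustrative.
from math import dist

# Toy dataset: (loc, cyclomatic_complexity, coupling) per Java class,
# mapped to an ordinal label (0 = easy to maintain .. 3 = hard).
TRAIN = [
    ((120, 4, 2), 0),
    ((300, 9, 5), 1),
    ((800, 25, 11), 2),
    ((2000, 60, 20), 3),
]

def predict_1nn(features, train):
    """Return the label of the nearest training example (Euclidean)."""
    return min(train, key=lambda ex: dist(ex[0], features))[1]

def size_only(features):
    """Project a feature vector down to class size (LOC) alone."""
    return (features[0],)

query = (750, 22, 10)  # unseen class

# Run 1: all metrics available to the model.
full_pred = predict_1nn(query, TRAIN)

# Run 2: same experiment duplicated with size as the only predictor.
size_train = [(size_only(f), y) for f, y in TRAIN]
size_pred = predict_1nn(size_only(query), size_train)

print(full_pred, size_pred)
```

Comparing the two predictions over a whole test set (rather than this single query) is what lets one ask whether the richer metrics add information beyond size, which is the question the study's duplicated runs address.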