Assessing textual source code comparison: split or unified?
Alejandra Cossio Chavalier, Juan Pablo Sandoval Alcocer, Alexandre Bergel
Companion Proceedings of the 4th International Conference on Art, Science, and Engineering of Programming, March 2020. DOI: 10.1145/3397537.3398471
Abstract
Evaluating source code differences is an important task in software engineering. Unified and split are two popular textual representations supported by source code management clients. Whether these representations differ in how well they support source code commit assessment is still unknown, despite their ubiquity in software production environments. This paper reports a controlled experiment testing the causality between the textual representation of source code differences and performance in terms of commit evaluation. Our experiment shows no significant difference between the two representations. We therefore conclude that both unified and split equally support source code commit assessment for the tasks we considered.
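For readers unfamiliar with the two representations compared in the paper, the following minimal Python sketch (not part of the study's tooling; the file names and code fragments are illustrative) uses the standard difflib module to produce a unified diff and a side-by-side (split) comparison of the same change.

```python
import difflib

# Two versions of a small code fragment (hypothetical example data).
before = """def total(items):
    result = 0
    for i in items:
        result = result + i
    return result
""".splitlines(keepends=True)

after = """def total(items):
    return sum(items)
""".splitlines(keepends=True)

# Unified representation: removed and added lines interleaved in one column,
# marked with '-' and '+' prefixes.
unified = difflib.unified_diff(before, after,
                               fromfile="a/total.py", tofile="b/total.py")
print("".join(unified))

# Split (side-by-side) representation: old and new versions shown in two
# parallel columns; difflib renders this as an HTML table.
split_table = difflib.HtmlDiff().make_table(before, after,
                                            fromdesc="before", todesc="after")
print(split_table[:200], "...")
```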