{"title":"软件开发人员行为的扩展研究经验","authors":"L. Pollock","doi":"10.1145/2897022.2897838","DOIUrl":null,"url":null,"abstract":"Most of our current understanding of how programmers perform various software maintenance and evolution tasks is based on controlled studies or interviews, which are inherently limited in size, scope, and realism. Replicating controlled studies in the field can both explore the findings of these studies in wider contexts and study new factors that have not been previously encountered in the laboratory setting. While replicating controlled studies in the field seems like an obvious next step in scientific progress, it is a step that has rarely been attempted, in part due to its complexity, which requires not only the industrial knowhow to implement a robust, scalable system, but the academic knowledge of how to design rigorous studies. In this talk, I will describe a few examples of successfully scaled studies, contrast them with less successful cases (including our own), and provide lessons learned. I will share the importance of collecting targeted information instead of generic logs, the insight that automated data collection paired with followup surveys is a powerful tool, and the nuances around what researchers can and cannot expect working developers to tolerate for the sake of research.","PeriodicalId":330342,"journal":{"name":"2016 IEEE/ACM 3rd International Workshop on Software Engineering Research and Industrial Practice (SER&IP)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Experiences in Scaling Field Studies of Software Developer Behavior\",\"authors\":\"L. Pollock\",\"doi\":\"10.1145/2897022.2897838\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Most of our current understanding of how programmers perform various software maintenance and evolution tasks is based on controlled studies or interviews, which are inherently limited in size, scope, and realism. Replicating controlled studies in the field can both explore the findings of these studies in wider contexts and study new factors that have not been previously encountered in the laboratory setting. While replicating controlled studies in the field seems like an obvious next step in scientific progress, it is a step that has rarely been attempted, in part due to its complexity, which requires not only the industrial knowhow to implement a robust, scalable system, but the academic knowledge of how to design rigorous studies. In this talk, I will describe a few examples of successfully scaled studies, contrast them with less successful cases (including our own), and provide lessons learned. 
I will share the importance of collecting targeted information instead of generic logs, the insight that automated data collection paired with followup surveys is a powerful tool, and the nuances around what researchers can and cannot expect working developers to tolerate for the sake of research.\",\"PeriodicalId\":330342,\"journal\":{\"name\":\"2016 IEEE/ACM 3rd International Workshop on Software Engineering Research and Industrial Practice (SER&IP)\",\"volume\":\"32 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 IEEE/ACM 3rd International Workshop on Software Engineering Research and Industrial Practice (SER&IP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2897022.2897838\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE/ACM 3rd International Workshop on Software Engineering Research and Industrial Practice (SER&IP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2897022.2897838","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Experiences in Scaling Field Studies of Software Developer Behavior
Most of our current understanding of how programmers perform various software maintenance and evolution tasks is based on controlled studies or interviews, which are inherently limited in size, scope, and realism. Replicating controlled studies in the field can both test the findings of those studies in wider contexts and surface new factors that never arise in the laboratory setting. While replicating controlled studies in the field seems like an obvious next step in scientific progress, it is a step that has rarely been attempted, in part because of its complexity: it requires not only the industrial know-how to implement a robust, scalable system, but also the academic knowledge of how to design rigorous studies. In this talk, I will describe a few examples of successfully scaled studies, contrast them with less successful cases (including our own), and distill lessons learned. I will discuss the importance of collecting targeted information instead of generic logs, the insight that automated data collection paired with follow-up surveys is a powerful combination, and the nuances of what researchers can and cannot expect working developers to tolerate for the sake of research.
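To make the abstract's methodological points concrete, here is a minimal, hypothetical sketch of what "targeted information instead of generic logs" paired with lightweight follow-up surveys might look like in practice. All names here (record_event, maybe_prompt_survey, EVENTS_OF_INTEREST) are illustrative assumptions, not an instrument from the talk:

```python
# Hypothetical sketch: targeted event collection with occasional surveys.
# The event kinds, file path, and sampling rate are assumptions for
# illustration; they are not described in the abstract.
import json
import random
import time

# Collect only the actions the study actually asks about,
# rather than dumping every generic log line.
EVENTS_OF_INTEREST = {"search_query", "file_navigation", "survey_response"}


def record_event(kind: str, **fields) -> None:
    """Append one structured, targeted event as a JSON line."""
    if kind not in EVENTS_OF_INTEREST:
        return  # skip everything outside the study's research questions
    event = {"kind": kind, "timestamp": time.time(), **fields}
    with open("study_events.jsonl", "a") as f:
        f.write(json.dumps(event) + "\n")


def maybe_prompt_survey(rate: float = 0.02) -> None:
    """Rarely pair automated collection with a one-question survey,
    keeping the interruption cheap enough for working developers."""
    if random.random() < rate:
        answer = input("Did that search find what you needed? (y/n) ")
        record_event("survey_response",
                     question="search_success",
                     answer=answer.strip())


# Example: instrumenting one code-search action in a hypothetical plugin.
record_event("search_query", query_length=12, results_shown=8)
maybe_prompt_survey()
```

The design choice the sketch illustrates is the trade-off the abstract raises: structured, purpose-built events keep analysis tractable at scale, while a low sampling rate on survey prompts respects the limits of what working developers will tolerate.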