Understanding why we cannot model how long a code review will take: an industrial case study
Lawrence Chen, Peter C. Rigby, Nachiappan Nagappan
DOI: 10.1145/3540250.3558945 · Published: 2022-11-07 · Citations: 3
Abstract
Code review is an effective practice for finding defects, but because it is manually intensive it can slow down the continuous integration of changes. Our goal was to understand the factors that influence the time a change, i.e., a diff at Meta, spends in review. A developer survey showed that diff reviews start to feel slow after a diff has been waiting for around 24 hours. We built a review-time predictor model to identify factors that may be causing reviews to take longer, which we could use to predict the best time to nudge reviewers or to identify diff-related factors we may need to address. The strongest feature of our time-in-review model was the day of the week, because diffs submitted near the weekend may have to wait until Monday for review. After removing time spent on weekends, the remaining features, including the size of the diff and the number of meetings the reviewers have, did not provide substantial predictive power, so the model could not predict how long a code review would take. We contributed to the effort to reduce stale diffs by suggesting that diffs be nudged near the start of the workday and that diffs published near the weekend be nudged sooner on Friday so they do not wait the entire weekend. We use a nudging threshold rather than a model because we showed that TimeInReview cannot be accurately modelled. The NudgeBot has been rolled out to over 30k developers at Meta.
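A minimal sketch of the two practical ideas in the abstract, not the authors' implementation: measuring time in review with weekend hours excluded (since day of week dominated the model), and applying a fixed ~24-hour nudging threshold instead of a learned predictor. The function names and the exact threshold-check logic are assumptions for illustration.

```python
# Hypothetical sketch (not Meta's NudgeBot code): measure a diff's time in
# review while skipping Saturdays and Sundays, then nudge reviewers once it
# exceeds a fixed threshold, as the paper argues a threshold beats a model.

from datetime import datetime, timedelta

# From the developer survey: reviews start to feel slow after ~24 hours.
NUDGE_THRESHOLD_HOURS = 24

def business_hours_in_review(published: datetime, now: datetime) -> float:
    """Hours a diff has spent in review, excluding Saturday and Sunday."""
    total = timedelta()
    cursor = published
    while cursor < now:
        # End of the current calendar day (next midnight), capped at `now`.
        next_midnight = (cursor + timedelta(days=1)).replace(
            hour=0, minute=0, second=0, microsecond=0)
        segment_end = min(next_midnight, now)
        if cursor.weekday() < 5:  # Monday=0 .. Friday=4
            total += segment_end - cursor
        cursor = segment_end
    return total.total_seconds() / 3600

def should_nudge(published: datetime, now: datetime) -> bool:
    """Threshold rule rather than a learned model of TimeInReview."""
    return business_hours_in_review(published, now) >= NUDGE_THRESHOLD_HOURS

# Example: a diff published Friday 16:00 accrues 8 weekday hours on Friday
# and 17 more by Monday 17:00 (25 total), so it crosses the threshold only
# Monday afternoon; this is why the paper suggests nudging sooner on Friday.
diff_published = datetime(2022, 11, 4, 16, 0)    # Friday
check_time = datetime(2022, 11, 7, 17, 0)        # Monday
print(should_nudge(diff_published, check_time))  # True (25h >= 24h)
```

The worked example also shows the weekend effect the paper identified: without the weekend exclusion, the same diff would appear to have waited 73 hours, inflating any model trained on raw wall-clock TimeInReview.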