{"title":"不要欺骗自己的五种方法","authors":"T. Harris","doi":"10.1145/3494656.3494668","DOIUrl":null,"url":null,"abstract":"Performance experiments are often used to show that a new system is better than an old system, and to quantify how much faster it is, or how much more efficient it is in the use of some resource. Frequently, these experiments come toward the end of a project and - at times - seem to be conducted more with the aim of selling the system rather than providing understanding of the reasons for the differences in performance or the scenarios in which similar improvements might be expected. Mistrust in published performance numbers follows from the suspicion that we measure what we have already optimized.","PeriodicalId":387985,"journal":{"name":"ACM SIGACT News","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Five Ways Not To Fool Yourself\",\"authors\":\"T. Harris\",\"doi\":\"10.1145/3494656.3494668\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Performance experiments are often used to show that a new system is better than an old system, and to quantify how much faster it is, or how much more efficient it is in the use of some resource. Frequently, these experiments come toward the end of a project and - at times - seem to be conducted more with the aim of selling the system rather than providing understanding of the reasons for the differences in performance or the scenarios in which similar improvements might be expected. Mistrust in published performance numbers follows from the suspicion that we measure what we have already optimized.\",\"PeriodicalId\":387985,\"journal\":{\"name\":\"ACM SIGACT News\",\"volume\":\"21 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM SIGACT News\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3494656.3494668\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM SIGACT News","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3494656.3494668","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Performance experiments are often used to show that a new system is better than an old one, and to quantify how much faster it is or how much more efficiently it uses some resource. Frequently, these experiments come toward the end of a project and, at times, seem to be conducted more with the aim of selling the system than of explaining the reasons for the differences in performance or the scenarios in which similar improvements might be expected. Mistrust in published performance numbers follows from the suspicion that we measure what we have already optimized.
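The abstract itself contains no code, but a minimal sketch may help make the idea of a comparative performance experiment concrete. The harness below (a generic illustration, not the author's methodology) times two workloads with warm-up runs, repeated trials, and a reported spread, since quoting a single headline number is one easy way to fool yourself. The functions old_system and new_system are hypothetical stand-ins for the systems under comparison.

```python
import statistics
import time

def benchmark(fn, *, warmup=3, trials=30):
    """Time fn() repeatedly, discarding warm-up runs.

    Reporting the spread as well as the mean guards against
    drawing conclusions from a single lucky measurement.
    """
    for _ in range(warmup):
        fn()  # warm caches, allocators, and any JIT before measuring
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

if __name__ == "__main__":
    # Hypothetical stand-ins for the "old" and "new" systems.
    old_system = lambda: sum(i * i for i in range(100_000))
    new_system = lambda: sum(map(lambda i: i * i, range(100_000)))

    for name, fn in [("old", old_system), ("new", new_system)]:
        mean, stdev = benchmark(fn)
        print(f"{name}: {mean * 1e3:.2f} ms +/- {stdev * 1e3:.2f} ms")
```

Even a small harness like this makes the comparison repeatable and exposes run-to-run variance, which is a prerequisite for understanding why one system outperforms the other rather than merely asserting that it does.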