{"title":"更好的吞吐量是否需要更差的延迟?","authors":"D. Ungar, D. Kimelman, Sam S. Adams, M. Wegman","doi":"10.1145/2414729.2414736","DOIUrl":null,"url":null,"abstract":"Let throughput denote the amount of application-level work performed in unit time, normalized to the amount of work that would be accomplished with perfect linear scaling. Let latency denote the mean time required for a thread on one core to observe a change effected by a thread on another core, normalized to the best latency possible for the given platform. Might it be true that algorithms that improve application-level throughput worsen inter-core application-level latency? As techniques for improving performance have evolved from mutex-and-locks to race-and-repair, each seems to have offered more throughput at the expense of increased latency.","PeriodicalId":137547,"journal":{"name":"RACES '12","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Does better throughput require worse latency?\",\"authors\":\"D. Ungar, D. Kimelman, Sam S. Adams, M. Wegman\",\"doi\":\"10.1145/2414729.2414736\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Let throughput denote the amount of application-level work performed in unit time, normalized to the amount of work that would be accomplished with perfect linear scaling. Let latency denote the mean time required for a thread on one core to observe a change effected by a thread on another core, normalized to the best latency possible for the given platform. Might it be true that algorithms that improve application-level throughput worsen inter-core application-level latency? As techniques for improving performance have evolved from mutex-and-locks to race-and-repair, each seems to have offered more throughput at the expense of increased latency.\",\"PeriodicalId\":137547,\"journal\":{\"name\":\"RACES '12\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-10-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"RACES '12\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2414729.2414736\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"RACES '12","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2414729.2414736","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Let throughput denote the amount of application-level work performed per unit time, normalized to the amount of work that would be accomplished with perfect linear scaling. Let latency denote the mean time required for a thread on one core to observe a change effected by a thread on another core, normalized to the best latency possible on the given platform. Might it be true that algorithms that improve application-level throughput worsen inter-core application-level latency? As techniques for improving performance have evolved from mutex-and-locks to race-and-repair, each seems to have offered more throughput at the expense of increased latency.
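To make the trade-off concrete, consider a minimal C sketch (our illustration, not code from the paper, and a simpler design point than race-and-repair): a single shared atomic counter makes each increment visible to other cores almost immediately, but serializes all updaters on one cache line, so throughput scales poorly; per-thread sharded counters take uncontended increments and scale throughput nearly linearly, but a reader on another core must sum every shard before it observes another thread's work, worsening the observed inter-core latency. All names and parameters here (NTHREADS, ITERS, struct shard) are ours, chosen for illustration.

/* counters.c -- compile with: cc -std=c11 -pthread counters.c
 * Illustrative sketch of two points on the throughput/latency spectrum. */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define ITERS    1000000

/* Design A: one shared atomic counter. Updates become globally visible
 * quickly (good inter-core latency), but every core contends on the same
 * cache line (throughput scales poorly). */
static atomic_long shared_count;

/* Design B: per-thread shards, padded to reduce false sharing. Increments
 * are uncontended (throughput scales ~linearly with cores), but observing
 * the total requires summing all shards (worse effective latency). */
struct shard { atomic_long n; char pad[64 - sizeof(atomic_long)]; };
static struct shard shards[NTHREADS];

static void *worker(void *arg)
{
    int tid = (int)(long)arg;
    for (int i = 0; i < ITERS; i++) {
        atomic_fetch_add(&shared_count, 1);               /* contended   */
        atomic_fetch_add_explicit(&shards[tid].n, 1,
                                  memory_order_relaxed);  /* uncontended */
    }
    return NULL;
}

/* The latency cost of Design B is paid here, by the reader. */
static long read_sharded(void)
{
    long sum = 0;
    for (int i = 0; i < NTHREADS; i++)
        sum += atomic_load(&shards[i].n);
    return sum;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    printf("shared=%ld sharded=%ld\n",
           atomic_load(&shared_count), read_sharded());
    return 0;
}

Both designs compute the same total; they differ in where the cost falls. Design A pays on every write, Design B defers the cost to reads, which is the same direction of movement the abstract describes along the mutex-and-locks to race-and-repair spectrum.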