V. Alexandrov, Thomas Ropars, L. Strigini
International Symposium on High Performance Computing Systems and Applications, July 2016
DOI: 10.1109/HPCSim.2016.7568304
Panels: Panel session I: Resiliency in extreme scale high performance computing systems and applications
Recent experience with extreme scale high performance computing systems and applications indicates that failure rates continue to rise, at times exponentially. The reasons are multi-faceted. The probability of errors grows not only with system size, but also with the architectural vulnerabilities introduced by accelerators such as FPGAs and GPUs, and by shrinking nanometer process technologies. Component counts and software complexity will continue to increase, while application correctness and execution efficiency are expected to become even more critical. The gains made by today's and future generations of extreme scale designs can be diminished by the lack of adequate fault tolerance and resiliency solutions. Reactive fault tolerance technologies, such as checkpointing/restarting, cannot handle high failure rates, given the overheads associated with such approaches. Proactive resiliency technologies, such as migration, cannot cope either, because random soft errors are unpredictable and may even remain undetected, resulting in silent data corruption and incorrect application output. This panel will address some of the foundations of resilience, the enabling infrastructure for resilience, and the existing and projected solutions that will be required to meet this key challenge in the era of extreme scale HPC systems and applications.
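To make the overhead argument against reactive approaches concrete, the following is a minimal sketch of application-level checkpoint/restart in Python. All names (the `state.ckpt` path, the `run` driver, the toy computation) are illustrative assumptions, not part of any real HPC checkpointing library; real systems checkpoint far larger state, which is exactly why the per-step I/O cost becomes prohibitive at high failure rates.

```python
import os
import pickle

CHECKPOINT_FILE = "state.ckpt"  # hypothetical path, chosen for this sketch

def save_checkpoint(state, path=CHECKPOINT_FILE):
    # Write to a temporary file, then atomically rename, so a crash
    # mid-write never leaves a corrupt checkpoint behind.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def load_checkpoint(path=CHECKPOINT_FILE):
    # Return the last saved state, or None on a fresh start.
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return None

def run(total_steps=10):
    # Resume from the last checkpoint if one exists; otherwise start fresh.
    state = load_checkpoint() or {"step": 0, "acc": 0}
    while state["step"] < total_steps:
        state["acc"] += state["step"]  # stand-in for real computation
        state["step"] += 1
        save_checkpoint(state)         # overhead grows with state size and
                                       # with how often failures force restarts
    return state["acc"]
```

If the process is killed at any point, rerunning `run()` resumes from the last completed step rather than from step zero. Note what this sketch does not protect against, mirroring the panel's point about proactive techniques: a soft error that silently corrupts `state["acc"]` before a checkpoint is faithfully preserved by the checkpoint itself.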