{"title":"文件和内存管理的演变","authors":"M. Satyanarayanan","doi":"10.1145/2830903.2830907","DOIUrl":null,"url":null,"abstract":"Mahadev Satyanarayanan (Satya) presented his thoughts on \"The Evolution of Memory and File Systems\". He observed that over a 60-year period, there have been four drivers of progress: the quests for scale, performance, transparency, and robustness. At the dawn of computing, the quest for scale was dominant. Easing the memory limitations of early computers was crucial to the growth of computing and the creation of new applications, because memory was so scarce and so expensive. That quest has been phenomenally successful. On a cost per bit basis, volatile and persistent memory technologies have improved by nearly 13 orders of magnitude. The quest for performance has been dominated by the growing gap between processor performance and memory performance. This gap has been most apparent since the use of DRAM technology by the early 1980s, but it was already a serious issue 20 years before that in the era of core memory. Over time, memory hierarchies of increasing depth have improved average case performance by exploiting temporal and spatial locality. These have been crucial in overcoming the processor-memory performance gap, with clever prefetching and write-back techniques also playing important roles. For the first decade or so, the price of improving scale and performance was the need to rewrite software as computers were replaced by new ones. By the early 1960s, this cost was becoming significant. Over time, as people costs have increased relative to hardware costs, disruptive software changes have become unacceptable. This has led to the quest for transparency. In its System/360, IBM pioneered the concept of an invariant architecture with multiple implementations at different price/performance points. The principle of transparent management of data across levels of a memory hierarchy, which we broadly term \"caching\", was pioneered at the software level by the Atlas computer in the early 1960s. At the hardware level, it was demonstrated first in the IBM System 360 Model 85 in 1968. Since then, caching has been applied at virtually every system level and is today perhaps the most ubiquitous and powerful systems technique for achieving scale, performance and transparency. By the late 1960s, as computers began to used in mission-critical contexts, the negative impact of hardware and software failures escalated. This led to he emergence of techniques to improve robustness even at the possible cost of performance or storage efficiency. The concept of separate address spaces emerged partly because it isolated the consequences of buggy software. Improved resilience to buggy sofware has also been one of the reasons that memory and file systems have remained distinct, even though systems based on the single-level storage concept have been proposed and experimentally demonstrated. In addition, to cope with hardware, software and networking failures, technqiues such as RAID, software replication, and disconnected operation emerged. The quest for robustness continues to rise in importance as the cost of failures increases relative to memory and storage costs. In closing, Satya commented on recent predictions that the classic hierarchical file system will soon be extinct. He observed that such predictions are not new. Classic file systems may be overlaid by non-hierarchical interfaces that uses different abstractions (such as the Android interface for Java applications). 
However, they will continue to be important for unstructured data that must be preserved for very long periods of time. Satya observed that the deep reasons for the longevity of the hierarchical file system model were articulated in broad terms by Herb Simon in his 1962 work, \"The Architecture of Complexity\". Essentially, hierarchy arises due to the cognitive limitations of the human mind. File system implementations have evolved to be a good fit for these cognitive limitations. They are likely to be with us for a very long time.","PeriodicalId":175724,"journal":{"name":"SOSP History Day 2015","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evolution of file and memory management\",\"authors\":\"M. Satyanarayanan\",\"doi\":\"10.1145/2830903.2830907\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Mahadev Satyanarayanan (Satya) presented his thoughts on \\\"The Evolution of Memory and File Systems\\\". He observed that over a 60-year period, there have been four drivers of progress: the quests for scale, performance, transparency, and robustness. At the dawn of computing, the quest for scale was dominant. Easing the memory limitations of early computers was crucial to the growth of computing and the creation of new applications, because memory was so scarce and so expensive. That quest has been phenomenally successful. On a cost per bit basis, volatile and persistent memory technologies have improved by nearly 13 orders of magnitude. The quest for performance has been dominated by the growing gap between processor performance and memory performance. This gap has been most apparent since the use of DRAM technology by the early 1980s, but it was already a serious issue 20 years before that in the era of core memory. Over time, memory hierarchies of increasing depth have improved average case performance by exploiting temporal and spatial locality. These have been crucial in overcoming the processor-memory performance gap, with clever prefetching and write-back techniques also playing important roles. For the first decade or so, the price of improving scale and performance was the need to rewrite software as computers were replaced by new ones. By the early 1960s, this cost was becoming significant. Over time, as people costs have increased relative to hardware costs, disruptive software changes have become unacceptable. This has led to the quest for transparency. In its System/360, IBM pioneered the concept of an invariant architecture with multiple implementations at different price/performance points. The principle of transparent management of data across levels of a memory hierarchy, which we broadly term \\\"caching\\\", was pioneered at the software level by the Atlas computer in the early 1960s. At the hardware level, it was demonstrated first in the IBM System 360 Model 85 in 1968. Since then, caching has been applied at virtually every system level and is today perhaps the most ubiquitous and powerful systems technique for achieving scale, performance and transparency. By the late 1960s, as computers began to used in mission-critical contexts, the negative impact of hardware and software failures escalated. This led to he emergence of techniques to improve robustness even at the possible cost of performance or storage efficiency. 
The concept of separate address spaces emerged partly because it isolated the consequences of buggy software. Improved resilience to buggy sofware has also been one of the reasons that memory and file systems have remained distinct, even though systems based on the single-level storage concept have been proposed and experimentally demonstrated. In addition, to cope with hardware, software and networking failures, technqiues such as RAID, software replication, and disconnected operation emerged. The quest for robustness continues to rise in importance as the cost of failures increases relative to memory and storage costs. In closing, Satya commented on recent predictions that the classic hierarchical file system will soon be extinct. He observed that such predictions are not new. Classic file systems may be overlaid by non-hierarchical interfaces that uses different abstractions (such as the Android interface for Java applications). However, they will continue to be important for unstructured data that must be preserved for very long periods of time. Satya observed that the deep reasons for the longevity of the hierarchical file system model were articulated in broad terms by Herb Simon in his 1962 work, \\\"The Architecture of Complexity\\\". Essentially, hierarchy arises due to the cognitive limitations of the human mind. File system implementations have evolved to be a good fit for these cognitive limitations. They are likely to be with us for a very long time.\",\"PeriodicalId\":175724,\"journal\":{\"name\":\"SOSP History Day 2015\",\"volume\":\"36 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-10-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SOSP History Day 2015\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2830903.2830907\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SOSP History Day 2015","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2830903.2830907","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Mahadev Satyanarayanan (Satya) presented his thoughts on "The Evolution of Memory and File Systems". He observed that over a 60-year period, there have been four drivers of progress: the quests for scale, performance, transparency, and robustness.

At the dawn of computing, the quest for scale was dominant. Easing the memory limitations of early computers was crucial to the growth of computing and the creation of new applications, because memory was so scarce and so expensive. That quest has been phenomenally successful. On a cost-per-bit basis, volatile and persistent memory technologies have improved by nearly 13 orders of magnitude.

The quest for performance has been dominated by the growing gap between processor performance and memory performance. This gap has been most apparent since the adoption of DRAM technology by the early 1980s, but it was already a serious issue 20 years before that, in the era of core memory. Over time, memory hierarchies of increasing depth have improved average-case performance by exploiting temporal and spatial locality. These have been crucial in overcoming the processor-memory performance gap, with clever prefetching and write-back techniques also playing important roles.

For the first decade or so, the price of improving scale and performance was the need to rewrite software as computers were replaced by new ones. By the early 1960s, this cost was becoming significant. Over time, as people costs have increased relative to hardware costs, disruptive software changes have become unacceptable. This has led to the quest for transparency. In its System/360, IBM pioneered the concept of an invariant architecture with multiple implementations at different price/performance points. The principle of transparent management of data across levels of a memory hierarchy, which we broadly term "caching", was pioneered at the software level by the Atlas computer in the early 1960s. At the hardware level, it was first demonstrated in the IBM System/360 Model 85 in 1968. Since then, caching has been applied at virtually every system level and is today perhaps the most ubiquitous and powerful systems technique for achieving scale, performance, and transparency.

By the late 1960s, as computers began to be used in mission-critical contexts, the negative impact of hardware and software failures escalated. This led to the emergence of techniques to improve robustness, even at the possible cost of performance or storage efficiency. The concept of separate address spaces emerged partly because it isolated the consequences of buggy software. Improved resilience to buggy software has also been one of the reasons that memory and file systems have remained distinct, even though systems based on the single-level storage concept have been proposed and experimentally demonstrated. In addition, to cope with hardware, software, and networking failures, techniques such as RAID, software replication, and disconnected operation emerged. The quest for robustness continues to rise in importance as the cost of failures increases relative to memory and storage costs.

In closing, Satya commented on recent predictions that the classic hierarchical file system will soon be extinct. He observed that such predictions are not new. Classic file systems may be overlaid by non-hierarchical interfaces that use different abstractions (such as the Android interface for Java applications). However, they will continue to be important for unstructured data that must be preserved for very long periods of time.
Satya observed that the deep reasons for the longevity of the hierarchical file system model were articulated in broad terms by Herb Simon in his 1962 work, "The Architecture of Complexity". Essentially, hierarchy arises due to the cognitive limitations of the human mind. File system implementations have evolved to be a good fit for these cognitive limitations. They are likely to be with us for a very long time.
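Although the talk is historical and contains no code, the "transparent caching" principle it singles out can be made concrete with a small sketch. The Python class below is illustrative only (its name and interface are this summary's own, not from the talk): it keeps a bounded, fast LRU cache in front of a slower backing store, and callers simply read through it without ever needing to know which level actually supplied the data, which is the transparency property described above.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal two-level "memory hierarchy": a small, fast cache in front of
    a larger, slower backing store. Reads are transparent to the caller."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity            # entries the fast level can hold
        self.backing_store = backing_store  # dict standing in for the slow level
        self.cache = OrderedDict()          # ordering tracks recency of use

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)     # hit: mark as most recently used
            return self.cache[key]
        value = self.backing_store[key]     # miss: fetch from the slow level
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used entry
        return value

# The caller's code is identical whether or not the data was already cached.
store = {f"block{i}": i * i for i in range(100)}
cache = LRUCache(capacity=4, backing_store=store)
print(cache.read("block7"))   # miss: fetched from the backing store
print(cache.read("block7"))   # hit: served from the fast level
```

Real memory hierarchies layer prefetching and write-back policies, as mentioned in the abstract, on top of this basic read-through, eviction-based scheme.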