{"title":"Introduction: Where Statistical Physics Mects Computation","authors":"A. Percus, Gabriel Istrate, Cristopher Moore","doi":"10.1093/oso/9780195177374.003.0007","DOIUrl":null,"url":null,"abstract":"Computer science and physics have been closely linked since the birth of modern computing. This book is about that link. John von Neumann’s original design for digital computing in the 1940s was motivated by applications in ballistics and hydrodynamics, and his model still underlies today’s hardware architectures. Within several years of the invention of the first digital computers, the Monte Carlo method was developed, putting these devices to work simulating natural processes using the principles of statistical physics. It is difficult to imagine how computing might have evolved without the physical insights that nurtured it. It is impossible to imagine how physics would have evolved without computation. While digital computers quickly became indispensable, a true theoretical understanding of the efficiency of the computation process did not occur until twenty years later. In 1965, Hartmanis and Stearns [30] as well as Edmonds [20, 21] articulated the notion of computational complexity, categorizing algorithms according to how rapidly their time and space requirements grow with input size. The qualitative distinctions that computational complexity draws between algorithms form the foundation of theoretical computer science. Chief among these distinctions is that of polynomial versus exponential time. A combinatorial problem belongs in the complexity class P (polynomial time) if there exists an algorithm guaranteeing a solution in a computation time, or number of elementary steps of the algorithm, that grows at most polynomially with input size. Loosely speaking, such problems are considered computationally feasible. An example might be sorting a list of n numbers: even a particularly naive and inefficient algorithm for this will run in a number of steps that grows as O(n), and so sorting is in the class P. A problem belongs in the complexity class NP (non-deterministic polynomial time) if it is merely possible to test, in polynomial time, whether a specific presumed solution is correct. Of course, P ⊆ NP: for any problem whose solution can be found in polynomial time, one can surely verify the validity of a presumed solution in polynomial time.","PeriodicalId":156167,"journal":{"name":"Computational Complexity and Statistical Physics","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational Complexity and Statistical Physics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/oso/9780195177374.003.0007","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Computer science and physics have been closely linked since the birth of modern computing. This book is about that link. John von Neumann’s original design for digital computing in the 1940s was motivated by applications in ballistics and hydrodynamics, and his model still underlies today’s hardware architectures. Within several years of the invention of the first digital computers, the Monte Carlo method was developed, putting these devices to work simulating natural processes using the principles of statistical physics. It is difficult to imagine how computing might have evolved without the physical insights that nurtured it. It is impossible to imagine how physics would have evolved without computation. While digital computers quickly became indispensable, a true theoretical understanding of the efficiency of the computation process did not emerge until twenty years later. In 1965, Hartmanis and Stearns [30] as well as Edmonds [20, 21] articulated the notion of computational complexity, categorizing algorithms according to how rapidly their time and space requirements grow with input size. The qualitative distinctions that computational complexity draws between algorithms form the foundation of theoretical computer science. Chief among these distinctions is that of polynomial versus exponential time. A combinatorial problem belongs in the complexity class P (polynomial time) if there exists an algorithm guaranteeing a solution in a computation time, or number of elementary steps of the algorithm, that grows at most polynomially with input size. Loosely speaking, such problems are considered computationally feasible. An example might be sorting a list of n numbers: even a particularly naive and inefficient algorithm for this will run in a number of steps that grows as O(n²), and so sorting is in the class P. A problem belongs in the complexity class NP (non-deterministic polynomial time) if it is merely possible to test, in polynomial time, whether a specific presumed solution is correct. Of course, P ⊆ NP: for any problem whose solution can be found in polynomial time, one can surely verify the validity of a presumed solution in polynomial time.
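To make the P versus NP distinction concrete, here is a minimal sketch in Python (not drawn from the chapter itself): a deliberately naive selection sort whose step count grows as O(n²), illustrating membership in P, alongside a verifier that checks a presumed solution to a Boolean satisfiability instance in time linear in its size, illustrating the kind of polynomial-time check that defines NP. The clause encoding and function names are illustrative assumptions, not notation from the book.

```python
# Illustrative sketch: a naive O(n^2) sort (a problem in P) and a
# polynomial-time verifier for a candidate SAT assignment (the kind of
# check that places satisfiability in NP). The clause representation
# and function names below are assumptions made for this example.

def naive_sort(numbers):
    """Selection sort: roughly n^2 comparisons, so sorting is comfortably in P."""
    items = list(numbers)
    for i in range(len(items)):
        # Find the smallest remaining element and move it into position i.
        smallest = min(range(i, len(items)), key=lambda j: items[j])
        items[i], items[smallest] = items[smallest], items[i]
    return items

def verify_sat(clauses, assignment):
    """Check a presumed solution to a SAT instance in polynomial time.

    `clauses` is a list of clauses, each a list of nonzero integers:
    literal k means "variable k is true", -k means "variable k is false".
    `assignment` maps variable numbers to booleans.
    """
    for clause in clauses:
        # A clause is satisfied if at least one of its literals is true.
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False
    return True

if __name__ == "__main__":
    print(naive_sort([3, 1, 4, 1, 5, 9, 2, 6]))
    # (x1 or not x2) and (x2 or x3), with x1=True, x2=False, x3=True
    print(verify_sat([[1, -2], [2, 3]], {1: True, 2: False, 3: True}))
```

Finding a satisfying assignment may require searching an exponential number of possibilities, but verifying a proposed one, as above, takes only a pass over the clauses; this asymmetry between finding and checking is exactly what the containment P ⊆ NP captures.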