{"title":"东海岸计算机代数日","authors":"M. Giesbrecht","doi":"10.1145/377604.569765","DOIUrl":null,"url":null,"abstract":"s of Invited Talks Subresultants revisited Joachim yon zur Gathen Fachbereich 17 Mathematik-Informatik Universit~t Paderborn Paderborn, Germany Starting in the late 1960s, Collins and Brown & Tranb invented polynomial remainder sequences (PRS) in order to apply the Euclidean algorithm to integer polynomials. Subresultants play a major role in this theory. We compare the various notions of subresultants, give a general and precise definition of PRS, and clean up some loose ends: • prove a 1971 conjecture of Brown that all results in the subresultant PRS are integer polynomials, • show an exponential lower bound on the pseudo PRS. Lastly, we show how Kronecker had, already in the 1870s, discovered many of the fundamental properties of Euclid's algorithm for polynomials. Some problems in general purpose computer algebra systems design Michael Monagan Center for Experimental and Computational Mathematics Simon Fraser University In this talk I will present three problems of interest to the computer algebra community. The first is the problem of implementing modular algorithms efficiently. Application of the Chinese remainder theorem to solve the GCD and Groebner bases problems leads to a big loss of efficiency because the data structure overhead overwhelms the cost of the modular arithmetic. The second problem is how to build a system so that all the components interact well. I will take as an example a problem of automatic differentiation from astrophysics where the function to be differentiated involves the solution of a non-linear equation. Can the CAS differentiate commands like f so lve (f--0,x--a); in a program? The third problem is a problem of trying to implement generic algorithms, efficiently. I will take as an example a linear p-adic Newton iteration. A generic version of this algorithm would work over Z mod p~ and over Fix] mod x ~ for example. Iterative solution of algebraic problems with polynomials Hans J. Stetter Technical University of Vienna Vienna, Austria In Numerical Analysis, it is standard to use an iterative solution procedure for a nonlinear problem. In Computer Algebra, one prefers exact finite manipulations which preserve the algebraic structure (like in Groebner basis computation); but often, in the end, an iterative numerical procedure can not be avoided (e.g. for zeros of a polynomial system). Furthermore, algebraic problems from Scientific Computing generally contain some \"empiric\" data so that their results are only defined to a limited accuracy. In this situation, an iterative approach may reduce to a few (or just one) step(s). We will at tempt to demonstrate how iterative procedures can be built upon the algebraic structure of a variety of problems for which such an approach has not been considered so far: After some discussion of zero clusters of univariate and systems of multivariate polynomials, we will mainly consider overdetermined problems like greatest common divisors, multivariate factorization, etc.; here the solution concept must be generalized to that of a pseudosolution which is an exact solution of a problem within the data tolerance neighborhood of the specified problem. Our iterative approach to the determination of pseudosolutions of such problems will prove computationally more flexible and efficient than recent \"classical\" approaches like [1] and [2]. [1 ] N.K. Karmarkar, Y.N. 
Lakshman: On Approximate GCDs of Univariate Polynomials, J. Symb.Comp. 26 (1998) 653-666 [2 ] M.A. Hitz, E. Kaltofen, Y.N. Lakshman: Efficient Algorithms for Computing the Nearest Polynomial with a Real Root and Related Problems, in: Proceed. ISSAC'99 (Ed. S.Dooley) (1999) 205-212","PeriodicalId":314801,"journal":{"name":"SIGSAM Bull.","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2000-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"East Coast Computer Algebra Day\",\"authors\":\"M. Giesbrecht\",\"doi\":\"10.1145/377604.569765\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"s of Invited Talks Subresultants revisited Joachim yon zur Gathen Fachbereich 17 Mathematik-Informatik Universit~t Paderborn Paderborn, Germany Starting in the late 1960s, Collins and Brown & Tranb invented polynomial remainder sequences (PRS) in order to apply the Euclidean algorithm to integer polynomials. Subresultants play a major role in this theory. We compare the various notions of subresultants, give a general and precise definition of PRS, and clean up some loose ends: • prove a 1971 conjecture of Brown that all results in the subresultant PRS are integer polynomials, • show an exponential lower bound on the pseudo PRS. Lastly, we show how Kronecker had, already in the 1870s, discovered many of the fundamental properties of Euclid's algorithm for polynomials. Some problems in general purpose computer algebra systems design Michael Monagan Center for Experimental and Computational Mathematics Simon Fraser University In this talk I will present three problems of interest to the computer algebra community. The first is the problem of implementing modular algorithms efficiently. Application of the Chinese remainder theorem to solve the GCD and Groebner bases problems leads to a big loss of efficiency because the data structure overhead overwhelms the cost of the modular arithmetic. The second problem is how to build a system so that all the components interact well. I will take as an example a problem of automatic differentiation from astrophysics where the function to be differentiated involves the solution of a non-linear equation. Can the CAS differentiate commands like f so lve (f--0,x--a); in a program? The third problem is a problem of trying to implement generic algorithms, efficiently. I will take as an example a linear p-adic Newton iteration. A generic version of this algorithm would work over Z mod p~ and over Fix] mod x ~ for example. Iterative solution of algebraic problems with polynomials Hans J. Stetter Technical University of Vienna Vienna, Austria In Numerical Analysis, it is standard to use an iterative solution procedure for a nonlinear problem. In Computer Algebra, one prefers exact finite manipulations which preserve the algebraic structure (like in Groebner basis computation); but often, in the end, an iterative numerical procedure can not be avoided (e.g. for zeros of a polynomial system). Furthermore, algebraic problems from Scientific Computing generally contain some \\\"empiric\\\" data so that their results are only defined to a limited accuracy. In this situation, an iterative approach may reduce to a few (or just one) step(s). 
We will at tempt to demonstrate how iterative procedures can be built upon the algebraic structure of a variety of problems for which such an approach has not been considered so far: After some discussion of zero clusters of univariate and systems of multivariate polynomials, we will mainly consider overdetermined problems like greatest common divisors, multivariate factorization, etc.; here the solution concept must be generalized to that of a pseudosolution which is an exact solution of a problem within the data tolerance neighborhood of the specified problem. Our iterative approach to the determination of pseudosolutions of such problems will prove computationally more flexible and efficient than recent \\\"classical\\\" approaches like [1] and [2]. [1 ] N.K. Karmarkar, Y.N. Lakshman: On Approximate GCDs of Univariate Polynomials, J. Symb.Comp. 26 (1998) 653-666 [2 ] M.A. Hitz, E. Kaltofen, Y.N. Lakshman: Efficient Algorithms for Computing the Nearest Polynomial with a Real Root and Related Problems, in: Proceed. ISSAC'99 (Ed. S.Dooley) (1999) 205-212\",\"PeriodicalId\":314801,\"journal\":{\"name\":\"SIGSAM Bull.\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2000-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SIGSAM Bull.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/377604.569765\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIGSAM Bull.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/377604.569765","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Abstracts of Invited Talks

Subresultants revisited
Joachim von zur Gathen, Fachbereich 17 Mathematik-Informatik, Universität Paderborn, Paderborn, Germany

Starting in the late 1960s, Collins and Brown & Traub invented polynomial remainder sequences (PRS) in order to apply the Euclidean algorithm to integer polynomials. Subresultants play a major role in this theory. We compare the various notions of subresultants, give a general and precise definition of PRS, and clean up some loose ends: • prove a 1971 conjecture of Brown that all results in the subresultant PRS are integer polynomials, • show an exponential lower bound on the pseudo PRS. Lastly, we show how Kronecker had, already in the 1870s, discovered many of the fundamental properties of Euclid's algorithm for polynomials.

Some problems in general purpose computer algebra systems design
Michael Monagan, Center for Experimental and Computational Mathematics, Simon Fraser University

In this talk I will present three problems of interest to the computer algebra community. The first is the problem of implementing modular algorithms efficiently. Applying the Chinese remainder theorem to solve the GCD and Groebner basis problems leads to a large loss of efficiency because the data structure overhead overwhelms the cost of the modular arithmetic. The second problem is how to build a system so that all the components interact well. I will take as an example a problem of automatic differentiation from astrophysics where the function to be differentiated involves the solution of a non-linear equation. Can the CAS differentiate commands like fsolve(f=0, x=a); in a program? The third problem is that of implementing generic algorithms efficiently. I will take as an example a linear p-adic Newton iteration. A generic version of this algorithm would work over Z mod p^k and over F[x] mod x^k, for example.

Iterative solution of algebraic problems with polynomials
Hans J. Stetter, Technical University of Vienna, Vienna, Austria

In Numerical Analysis, it is standard to use an iterative solution procedure for a nonlinear problem. In Computer Algebra, one prefers exact finite manipulations which preserve the algebraic structure (as in Groebner basis computation); but often, in the end, an iterative numerical procedure cannot be avoided (e.g. for the zeros of a polynomial system). Furthermore, algebraic problems from Scientific Computing generally contain some "empiric" data, so that their results are only defined to a limited accuracy. In this situation, an iterative approach may reduce to a few (or just one) step(s). We will attempt to demonstrate how iterative procedures can be built upon the algebraic structure of a variety of problems for which such an approach has not been considered so far. After some discussion of zero clusters of univariate polynomials and of systems of multivariate polynomials, we will mainly consider overdetermined problems like greatest common divisors, multivariate factorization, etc.; here the solution concept must be generalized to that of a pseudosolution, which is an exact solution of a problem within the data tolerance neighborhood of the specified problem. Our iterative approach to the determination of pseudosolutions of such problems will prove computationally more flexible and efficient than recent "classical" approaches like [1] and [2].

[1] N. K. Karmarkar, Y. N. Lakshman: On Approximate GCDs of Univariate Polynomials, J. Symbolic Comput. 26 (1998) 653-666.
[2] M. A. Hitz, E. Kaltofen, Y. N. Lakshman: Efficient Algorithms for Computing the Nearest Polynomial with a Real Root and Related Problems, in: Proc. ISSAC'99 (ed. S. Dooley), 1999, 205-212.
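
The first abstract turns on the behaviour of polynomial remainder sequences over Z. As a concrete illustration (a minimal sketch of my own, not material from the talk; the helper names degree, pseudo_remainder and pseudo_prs and the choice of test polynomials are mine), the Python fragment below computes a pseudo PRS by pseudo-division, the variant whose coefficient growth the abstract's exponential lower bound concerns.

def degree(f):
    # degree of a coefficient list (lowest degree first); -1 for the zero polynomial
    d = len(f) - 1
    while d >= 0 and f[d] == 0:
        d -= 1
    return d

def pseudo_remainder(f, g):
    # prem(f, g): the r with lc(g)^(deg f - deg g + 1) * f = q*g + r and deg r < deg g.
    # Assumes deg f >= deg g >= 0; all intermediate arithmetic stays in Z.
    df, dg = degree(f), degree(g)
    lc_g = g[dg]
    r, e = list(f), df - dg + 1
    while degree(r) >= dg:
        d = degree(r)
        lc_r = r[d]
        # r := lc(g)*r - lc(r)*x^(d - dg)*g cancels the leading term exactly
        r = [lc_g * c for c in r]
        for i in range(dg + 1):
            r[d - dg + i] -= lc_r * g[i]
        e -= 1
    return [lc_g**e * c for c in r[:dg]]

def pseudo_prs(f, g):
    # the sequence f, g, prem(f, g), prem(g, prem(f, g)), ... down to a constant or zero
    seq = [f, g]
    while degree(seq[-1]) > 0:
        r = pseudo_remainder(seq[-2], seq[-1])
        if degree(r) < 0:
            break
        seq.append(r)
    return seq

# the classic textbook example of coefficient growth in remainder sequences
f = [-5, 2, 8, -3, -3, 0, 1, 0, 1]    # x^8 + x^6 - 3x^4 - 3x^3 + 8x^2 + 2x - 5
g = [21, -9, -4, 0, 5, 0, 3]          # 3x^6 + 5x^4 - 4x^2 - 9x + 21
for r in pseudo_prs(f, g):
    print(degree(r), max(abs(c) for c in r))

Running this prints the degree and the largest coefficient of each element of the pseudo PRS; the coefficients grow rapidly even though the inputs are tiny, which is the phenomenon the subresultant theory is designed to control.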
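
The first of Monagan's three problems concerns modular algorithms: GCDs and Groebner bases are computed modulo word-sized primes and recombined by the Chinese remainder theorem. As a reminder of the kernel arithmetic involved (again a sketch under my own naming, not code from the talk), the fragment below reconstructs one integer coefficient from its modular images with an incremental, symmetric-range CRT; the abstract's point is that in a general purpose system the data structure overhead surrounding this cheap loop can dominate it.

def crt_symmetric(residues, primes):
    # Combine residues r_i mod p_i into the unique x with |x| <= M/2, M = prod(p_i).
    x, m = 0, 1
    for r, p in zip(residues, primes):
        # incremental step: adjust x by a multiple of m so the new value is also r mod p
        t = ((r - x) * pow(m, -1, p)) % p
        x += t * m
        m *= p
    return x if x <= m // 2 else x - m   # map into the symmetric range

# e.g. the coefficient -42 recovered from its images modulo 97, 101 and 103
primes = [97, 101, 103]
images = [-42 % p for p in primes]
assert crt_symmetric(images, primes) == -42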
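
The third of Monagan's problems uses a linear p-adic Newton iteration as its example of a generic algorithm. The sketch below is an assumed representative instance (lifting an inverse one p-adic digit per step), not the speaker's code. Written with plain integer arithmetic it runs over Z mod p^k, and the identical update rule would apply over F[x] mod x^k given a polynomial type supporting +, -, *, // and %, which is exactly the genericity question the abstract raises.

def linear_padic_inverse(a, inv_mod_p, p, k):
    # given inv_mod_p with a*inv_mod_p == 1 (mod p), return b with a*b == 1 (mod p^k)
    b = inv_mod_p % p
    modulus = p                       # invariant: a*b == 1 (mod modulus)
    for _ in range(k - 1):
        e = (1 - a * b) // modulus    # exact division, by the invariant
        c = (inv_mod_p * e) % p       # next digit: solves a*c == e (mod p)
        b += c * modulus
        modulus *= p
    return b

p, k, a = 5, 6, 7
b = linear_padic_inverse(a, pow(a, -1, p), p, k)
assert (a * b) % p**k == 1            # 7 * 13393 == 1 (mod 5^6)

The "linear" in the name refers to computing one new digit per step from the fixed inverse modulo p, in contrast to the quadratic Newton iteration, which doubles the precision at each step.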