LISP and Functional Programming: Latest Publications

Incremental reduction in the lambda calculus
LISP and Functional Programming, Pub Date: 1990-05-01, DOI: 10.1145/91556.91679
J. Field, T. Teitelbaum
Abstract: An incremental algorithm is one that takes advantage of the fact that the function it computes is to be evaluated repeatedly on inputs that differ only slightly from one another, avoiding unnecessary duplication of common computations. We define here a new notion of incrementality for reduction in the untyped λ-calculus and describe an incremental reduction algorithm, Λ^inc. We show that Λ^inc has the desirable property of performing non-overlapping reductions on related terms, yet is simple enough to allow a practical implementation. The algorithm is based on a novel λ-reduction strategy that may prove useful in a non-incremental setting as well. Incremental λ-reduction can be used to advantage in any setting where an algorithm is specified in a functional or applicative manner.
Citations: 44
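As a rough illustration of the reduction process that the paper makes incremental (not the paper's Λ^inc algorithm itself), here is a minimal normal-order beta-reducer for the untyped λ-calculus. The term encoding and all function names are this sketch's own conventions:

```python
# Terms are tuples: ('var', name), ('lam', name, body), ('app', fun, arg).
# Substitution here is capture-naive, which is fine for the closed-enough
# examples below but not for a real implementation.

def subst(term, name, value):
    """Substitute `value` for free occurrences of `name` in `term`."""
    kind = term[0]
    if kind == 'var':
        return value if term[1] == name else term
    if kind == 'lam':
        if term[1] == name:          # `name` is shadowed; stop here
            return term
        return ('lam', term[1], subst(term[2], name, value))
    return ('app', subst(term[1], name, value), subst(term[2], name, value))

def reduce_step(term):
    """One leftmost-outermost beta step; returns (new_term, changed?)."""
    kind = term[0]
    if kind == 'app':
        fun, arg = term[1], term[2]
        if fun[0] == 'lam':          # beta redex: (\x. body) arg
            return subst(fun[2], fun[1], arg), True
        new_fun, changed = reduce_step(fun)
        if changed:
            return ('app', new_fun, arg), True
        new_arg, changed = reduce_step(arg)
        return ('app', fun, new_arg), changed
    if kind == 'lam':
        new_body, changed = reduce_step(term[2])
        return ('lam', term[1], new_body), changed
    return term, False               # variables are normal forms

def normalize(term, limit=1000):
    """Reduce to normal form, or fail after `limit` steps."""
    for _ in range(limit):
        term, changed = reduce_step(term)
        if not changed:
            return term
    raise RuntimeError('no normal form within step limit')

# (\x. x) y  reduces to  y
identity_app = ('app', ('lam', 'x', ('var', 'x')), ('var', 'y'))
print(normalize(identity_app))  # ('var', 'y')
```

An incremental reducer in the paper's sense would additionally share work between `normalize` calls on terms that differ only in a subterm, rather than re-reducing from scratch as this sketch does.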
A module system for Scheme
LISP and Functional Programming, Pub Date: 1990-05-01, DOI: 10.1145/91556.91573
Pavel Curtis, James Rauen
Abstract: This paper presents a module system designed for large-scale programming in Scheme. The module system separates specifications of objects from their implementations, permitting the separate development, compilation, and testing of modules. The module system also includes a robust macro facility. We discuss our design goals, the design of the module system, implementation issues, and our future plans.
Citations: 38
A compositional analysis of evaluation-order and its application
LISP and Functional Programming, Pub Date: 1990-05-01, DOI: 10.1145/91556.91658
M. Draghicescu, S. Iyer
Abstract: We present a compositional definition of the order of evaluation of variables in a lazy first-order functional language. Unlike other published work, our analysis applies to all evaluation strategies which may use strictness information to change the normal (lazy) order of evaluation. At the same time it can be adapted to pure lazy evaluation, yielding a sharper analysis in this case. It can also be adapted to take advantage of any information about the order in which primitive functions evaluate their arguments. The time complexity of the method is that of strictness analysis. We also present a compositional definition of the set of variables which denote locations where the result of an expression might be stored. This analysis yields a simple solution to the aliasing problem. Using these two analyses we develop a new algorithm for the destructive update problem.
Citations: 15
Trap architectures for Lisp systems
LISP and Functional Programming, Pub Date: 1990-05-01, DOI: 10.1145/91556.91595
Douglas Johnson
Abstract: Recent measurements of Lisp systems show a dramatic skewing of operation frequency. For example, small integer (fixnum) arithmetic dominates most programs, but other number types can occur on almost any operation. Likewise, few memory references trigger special handling for garbage collection, but nearly all memory operations could trigger such special handling. Systems like SPARC and SPUR have shown that small amounts of special hardware can significantly reduce the need for inline software checks by trapping when an unusual condition is detected. A system's trapping architecture now becomes key to performance. In most systems, the trap architecture is intended to handle errors (e.g., address faults) or conditions requiring large amounts of processing (e.g., page faults). The requirements for Lisp traps are quite different. In particular, the trap frequency is higher, processing time per trap is shorter, and most need to be handled in the user's address space and context. This paper looks at these requirements, evaluates current trap architectures, and proposes enhancements for meeting those requirements. These enhancements increase performance for Lisp 11%-35% at a cost of about 1.6% more CPU logic. They also aid debugging in general and speed floating point exception handling.
Citations: 21
Context information for lazy code generation
LISP and Functional Programming, Pub Date: 1990-05-01, DOI: 10.1145/91556.91665
H. R. Nielson, F. Nielson
Abstract: Functional languages like Miranda and Haskell employ a non-strict semantics. This is important for the functional programming style as it allows one to compute with infinite data structures. However, a straightforward implementation of the language will result in a rather inefficient implementation, and therefore it is often combined with strictness analysis. A sticky version of the analysis is used to collect the information and annotate the program so that the information can be used by the subsequent passes of the compiler. The strictness analysis and its correctness properties are well understood by means of abstract interpretation, whereas its sticky version is more subtle. The purpose of the present paper is therefore to investigate how far one can go without introducing a sticky version of the analysis, and thereby avoid the correctness problems connected with it.
Citations: 9
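To make the strictness idea behind this and the previous entry concrete, here is a generic sketch (not the Nielsons' analysis) of lazy evaluation via thunks, and of what strictness information buys: a function known to be strict in an argument can safely receive it already evaluated, avoiding thunk allocation. All names are this sketch's own:

```python
# Call-by-need thunks, plus a demonstration of why strictness matters.

class Thunk:
    """Delays a computation; forces it at most once (call-by-need)."""
    def __init__(self, compute):
        self.compute = compute
        self.forced = False
        self.value = None

    def force(self):
        if not self.forced:
            self.value = self.compute()
            self.forced = True
        return self.value

def force(x):
    """Evaluate a possibly-delayed value."""
    return x.force() if isinstance(x, Thunk) else x

evaluations = []                     # records which arguments actually ran

def expensive(n):
    evaluations.append(n)
    return n * n

# `first` is non-strict in its second argument: the thunk is never forced.
def first(a, b):
    return force(a)

assert first(1, Thunk(lambda: expensive(2))) == 1
assert evaluations == []             # the unused argument never ran

# A strictness analysis would find `add` strict in both arguments, so a
# compiler may evaluate them eagerly and skip building thunks entirely.
def add(a, b):
    return force(a) + force(b)

assert add(expensive(3), expensive(4)) == 25
assert evaluations == [3, 4]         # both arguments evaluated exactly once
```

The "sticky" step the paper discusses corresponds to recording such strictness facts as program annotations for later compiler passes, which is where the correctness subtleties arise.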
Comparing mark-and-sweep and stop-and-copy garbage collection
LISP and Functional Programming, Pub Date: 1990-05-01, DOI: 10.1145/91556.91597
B. Zorn
Abstract: Stop-and-copy garbage collection has been preferred to mark-and-sweep collection in the last decade because its collection time is proportional to the size of reachable data and not to the memory size. This paper compares the CPU overhead and the memory requirements of the two collection algorithms extended with generations, and finds that mark-and-sweep collection requires at most a small amount of additional CPU overhead (3-6%) but requires an average of 20% (and up to 40%) less memory to achieve the same page fault rate. The comparison is based on results obtained using trace-driven simulation with large Common Lisp programs.
Citations: 76
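A toy mark-and-sweep collector over cons-like cells illustrates the algorithm family being measured; the sweep comment marks the cost asymmetry the abstract refers to. This is an illustrative sketch only, not the paper's generational collectors:

```python
# Minimal mark-and-sweep over an explicit heap of cells.

class Cell:
    def __init__(self, refs=()):
        self.refs = list(refs)       # outgoing pointers to other cells
        self.marked = False

def mark(cell):
    """Depth-first mark of everything reachable from `cell`."""
    if cell.marked:
        return
    cell.marked = True
    for ref in cell.refs:
        mark(ref)

def sweep(heap):
    """Return the survivors; unmarked cells are garbage.
    Sweeping touches every cell, so its time is proportional to heap
    size -- the cost the abstract contrasts with stop-and-copy, whose
    time is proportional to live data only."""
    live = [c for c in heap if c.marked]
    for c in live:
        c.marked = False             # reset marks for the next collection
    return live

# Build: root -> a -> b, plus an unreachable cycle c <-> d.
b = Cell(); a = Cell([b]); root = Cell([a])
c = Cell(); d = Cell([c]); c.refs.append(d)
heap = [root, a, b, c, d]

mark(root)
heap = sweep(heap)
assert set(heap) == {root, a, b}     # the unreachable cycle was collected
```

Unlike reference counting, the mark phase collects the `c <-> d` cycle, since reachability is computed from the roots rather than from counts.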
Speculative computation in Multilisp
LISP and Functional Programming, Pub Date: 1989-06-05, DOI: 10.1145/91556.91644
R. Osborne
Abstract: We present experimental evidence that performing computations in parallel before their results are known to be required can yield performance improvements over conventional approaches to parallel computing. We call such eager computation of expressions speculative computation, as opposed to conventional mandatory computation that is used in almost all contemporary parallel programming languages and systems. The two major requirements for speculative computation are: 1) a means to control computation to favor the most promising computations and 2) a means to abort computation and reclaim computation resources. We discuss these requirements in the parallel symbolic language Multilisp and present a sponsor model for speculative computation in Multilisp which handles control and reclamation of computation in a single, elegant framework. We outline an implementation of this sponsor model and present performance results for several applications of speculative computation. The results demonstrate that our support for speculative computation adds expressive and computational power to Multilisp, with observed performance improvement as great as 26 times over conventional approaches to parallel computation.
Citations: 80
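The abstract's two requirements (favoring promising work, and aborting/reclaiming the rest) can be sketched with ordinary threads and a cooperative abort flag. This is a rough illustration only; the paper's sponsor model is far richer (priorities, transitive reclamation), and Multilisp itself uses futures rather than explicit threads. All names here are this sketch's own:

```python
import threading

def speculative_if(test, then_fn, else_fn):
    """Start both arms of a conditional before the test is known,
    then abort the losing arm once the test resolves."""
    results = {}
    aborts = {'then': threading.Event(), 'else': threading.Event()}

    def run(name, fn):
        value = fn(aborts[name])       # each arm polls its abort flag
        if not aborts[name].is_set():
            results[name] = value

    threads = [threading.Thread(target=run, args=(n, f))
               for n, f in (('then', then_fn), ('else', else_fn))]
    for t in threads:
        t.start()
    winner = 'then' if test() else 'else'
    loser = 'else' if winner == 'then' else 'then'
    aborts[loser].set()                # reclaim the losing computation
    for t in threads:
        t.join()
    return results[winner]

def count_to(n):
    """A cancellable workload: sums 0..n-1, checking its abort flag."""
    def fn(abort):
        total = 0
        for i in range(n):
            if abort.is_set():         # cooperative abort point
                return None
            total += i
        return total
    return fn

print(speculative_if(lambda: True, count_to(10), count_to(10**6)))  # 45
```

If the test is slow and the chosen arm is fast, the arm's result is ready the moment the test resolves; the eager start of both arms is exactly the speculative work the paper evaluates.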