arXiv - CS - Programming Languages: Latest Publications

Law and Order for Typestate with Borrowing
arXiv - CS - Programming Languages Pub Date : 2024-08-26 DOI: arxiv-2408.14031
Hannes Saffrich, Yuki Nishida, Peter Thiemann
{"title":"Law and Order for Typestate with Borrowing","authors":"Hannes Saffrich, Yuki Nishida, Peter Thiemann","doi":"arxiv-2408.14031","DOIUrl":"https://doi.org/arxiv-2408.14031","url":null,"abstract":"Typestate systems are notoriously complex as they require sophisticated\u0000machinery for tracking aliasing. We propose a new, transition-oriented\u0000foundation for typestate in the setting of impure functional programming. Our\u0000approach relies on ordered types for simple alias tracking and its\u0000formalization draws on work on bunched implications. Yet, we support a flexible\u0000notion of borrowing in the presence of typestate. Our core calculus comes with a notion of resource types indexed by an ordered\u0000partial monoid that models abstract state transitions. We prove syntactic type\u0000soundness with respect to a resource-instrumented semantics. We give an\u0000algorithmic version of our type system and prove its soundness. Algorithmic\u0000typing facilitates a simple surface language that does not expose tedious\u0000details of ordered types. We implemented a typechecker for the surface language\u0000along with an interpreter for the core language.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
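To make the transition-oriented reading concrete, here is a minimal Python sketch (not the paper's calculus or surface language) of state transitions composed as a partial monoid: composing two transitions is defined only when the intermediate states agree, so an ill-ordered use of a resource is rejected. The file protocol and state names are invented for illustration.

```python
from typing import Optional

# A transition is a (pre-state, post-state) pair over invented file states.
Transition = tuple[str, str]

def compose(t1: Transition, t2: Transition) -> Optional[Transition]:
    """Partial monoid operation: t1 followed by t2, or None if undefined."""
    (pre1, post1), (pre2, post2) = t1, t2
    return (pre1, post2) if post1 == pre2 else None

OPEN  = ("Closed", "Open")    # open a closed file
READ  = ("Open", "Open")      # reading keeps the file open
CLOSE = ("Open", "Closed")    # close an open file

def check(trace: list[Transition]) -> Transition:
    """Fold a non-empty trace of operations, rejecting undefined compositions."""
    acc = trace[0]
    for t in trace[1:]:
        nxt = compose(acc, t)
        if nxt is None:
            raise TypeError(f"illegal transition {t} after reaching state {acc[1]}")
        acc = nxt
    return acc

print(check([OPEN, READ, READ, CLOSE]))   # ('Closed', 'Closed'): a well-typed use
# check([OPEN, CLOSE, READ])              # would raise: read after close
```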
Making Formulog Fast: An Argument for Unconventional Datalog Evaluation (Extended Version)
arXiv - CS - Programming Languages Pub Date : 2024-08-26 DOI: arxiv-2408.14017
Aaron Bembenek (University of Melbourne), Michael Greenberg (Stevens Institute of Technology), Stephen Chong (Harvard University)
{"title":"Making Formulog Fast: An Argument for Unconventional Datalog Evaluation (Extended Version)","authors":"Aaron BembenekUniversity of Melbourne, Michael GreenbergStevens Institute of Technology, Stephen ChongHarvard University","doi":"arxiv-2408.14017","DOIUrl":"https://doi.org/arxiv-2408.14017","url":null,"abstract":"By combining Datalog, SMT solving, and functional programming, the language\u0000Formulog provides an appealing mix of features for implementing SMT-based\u0000static analyses (e.g., refinement type checking, symbolic execution) in a\u0000natural, declarative way. At the same time, the performance of its custom\u0000Datalog solver can be an impediment to using Formulog beyond prototyping -- a\u0000common problem for Datalog variants that aspire to solve large problem\u0000instances. In this work we speed up Formulog evaluation, with surprising\u0000results: while 2.2x speedups are obtained by using the conventional techniques\u0000for high-performance Datalog (e.g., compilation, specialized data structures),\u0000the big wins come by abandoning the central assumption in modern performant\u0000Datalog engines, semi-naive Datalog evaluation. In its place, we develop eager\u0000evaluation, a concurrent Datalog evaluation algorithm that explores the logical\u0000inference space via a depth-first traversal order. In practice, eager\u0000evaluation leads to an advantageous distribution of Formulog's SMT workload to\u0000external SMT solvers and improved SMT solving times: our eager evaluation\u0000extensions to the Formulog interpreter and Souffl'e's code generator achieve\u0000mean 5.2x and 7.6x speedups, respectively, over the optimized code generated by\u0000off-the-shelf Souffl'e on SMT-heavy Formulog benchmarks. Using compilation and eager evaluation, Formulog implementations of\u0000refinement type checking, bottom-up pointer analysis, and symbolic execution\u0000achieve speedups on 20 out of 23 benchmarks over previously published,\u0000hand-tuned analyses written in F#, Java, and C++, providing strong evidence\u0000that Formulog can be the basis of a realistic platform for SMT-based static\u0000analysis. Moreover, our experience adds nuance to the conventional wisdom that\u0000semi-naive evaluation is the one-size-fits-all best Datalog evaluation\u0000algorithm for static analysis workloads.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142227598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
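The contrast between semi-naive and eager evaluation can be illustrated on a toy transitive-closure query. The Python sketch below is not Formulog's engine (which is concurrent and drives external SMT calls); it only shows the difference in derivation order: semi-naive iterates in rounds over newly derived facts, while the eager strategy fires the consequences of each new fact immediately, depth-first.

```python
edges = {(1, 2), (2, 3), (3, 4)}

def semi_naive(edges):
    """Round-based evaluation: join only the facts derived in the previous round."""
    path, delta = set(edges), set(edges)
    while delta:
        new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
        delta = new - path
        path |= delta
    return path

def eager(edges):
    """Depth-first evaluation: fire the consequences of each new fact at once."""
    path, stack = set(), list(edges)
    while stack:
        fact = stack.pop()
        if fact in path:
            continue
        path.add(fact)
        x, y = fact
        for (y2, z) in edges:
            if y == y2:
                stack.append((x, z))   # push consequences immediately
    return path

assert semi_naive(edges) == eager(edges)   # same fixpoint, different derivation order
print(sorted(eager(edges)))
```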
Guard Analysis and Safe Erasure Gradual Typing: a Type System for Elixir
arXiv - CS - Programming Languages Pub Date : 2024-08-26 DOI: arxiv-2408.14345
Giuseppe Castagna, Guillaume Duboc
{"title":"Guard Analysis and Safe Erasure Gradual Typing: a Type System for Elixir","authors":"Giuseppe Castagna, Guillaume Duboc","doi":"arxiv-2408.14345","DOIUrl":"https://doi.org/arxiv-2408.14345","url":null,"abstract":"We define several techniques to extend gradual typing with semantic\u0000subtyping, specifically targeting dynamic languages. Focusing on the Elixir\u0000programming language, we provide the theoretical foundations for its type\u0000system. Our approach demonstrates how to achieve type soundness for gradual\u0000typing in existing dynamic languages without modifying their compilation, while\u0000still maintaining high precision. This is accomplished through the static\u0000detection of \"strong functions\", which leverage runtime checks inserted by the\u0000programmer or performed by the virtual machine, and through a fine-grained type\u0000analysis of pattern-matching expressions with guards.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
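As a rough, hypothetical illustration of the "strong function" idea (the clause representation and coverage check below are invented for this sketch, not the paper's analysis): a function whose every clause is protected by a guard at least as strict as its declared parameter type already behaves soundly on untyped input, so it can be given a precise gradual type without inserting extra checks.

```python
from dataclasses import dataclass

@dataclass
class Clause:
    guard: str | None   # e.g. "is_integer" guarding the single parameter, or None
    body_type: str      # type the body produces when the guard holds

# Hypothetical mapping from guards to the type they check.
GUARD_CHECKS = {"is_integer": "integer", "is_binary": "binary"}

def is_strong(clauses: list[Clause], param_type: str, ret_type: str) -> bool:
    """A function counts as 'strong' here if every clause guards its input with a
    check matching the declared parameter type and its body has the declared
    return type, so no extra runtime casts are needed for soundness."""
    return all(
        GUARD_CHECKS.get(c.guard or "") == param_type and c.body_type == ret_type
        for c in clauses
    )

# `def double(x) when is_integer(x), do: x + x` is strong at integer -> integer.
print(is_strong([Clause("is_integer", "integer")], "integer", "integer"))  # True
# An unguarded clause cannot be statically detected as strong.
print(is_strong([Clause(None, "integer")], "integer", "integer"))          # False
```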
Concurrent Data Structures Made Easy (Extended Version)
arXiv - CS - Programming Languages Pub Date : 2024-08-25 DOI: arxiv-2408.13779
Callista Le, Kiran Gopinathan, Koon Wen Lee, Seth Gilbert, Ilya Sergey
{"title":"Concurrent Data Structures Made Easy (Extended Version)","authors":"Callista Le, Kiran Gopinathan, Koon Wen Lee, Seth Gilbert, Ilya Sergey","doi":"arxiv-2408.13779","DOIUrl":"https://doi.org/arxiv-2408.13779","url":null,"abstract":"Design of an efficient thread-safe concurrent data structure is a balancing\u0000act between its implementation complexity and performance. Lock-based\u0000concurrent data structures, which are relatively easy to derive from their\u0000sequential counterparts and to prove thread-safe, suffer from poor throughput\u0000under even light multi-threaded workload. At the same time, lock-free\u0000concurrent structures allow for high throughput, but are notoriously difficult\u0000to get right and require careful reasoning to formally establish their\u0000correctness. We explore a solution to this conundrum based on batch parallelism, an\u0000approach for designing concurrent data structures via a simple insight:\u0000efficiently processing a batch of a priori known operations in parallel is\u0000easier than optimising performance for a stream of arbitrary asynchronous\u0000requests. Alas, batch-parallel structures have not seen wide practical adoption\u0000due to (i) the inconvenience of having to structure multi-threaded programs to\u0000explicitly group operations and (ii) the lack of a systematic methodology to\u0000implement batch-parallel structures as simply as lock-based ones. We present OBatcher-an OCaml library that streamlines the design,\u0000implementation, and usage of batch-parallel structures. It solves the first\u0000challenge (how to use) by suggesting a new lightweight implicit batching design\u0000that is built on top of generic asynchronous programming mechanisms. The second\u0000challenge (how to implement) is addressed by identifying a family of strategies\u0000for converting common sequential structures into efficient batch-parallel ones.\u0000We showcase OBatcher with a diverse set of benchmarks. Our evaluation of all\u0000the implementations on large asynchronous workloads shows that (a) they\u0000consistently outperform the corresponding coarse-grained lock-based\u0000implementations and that (b) their throughput scales reasonably with the number\u0000of processors.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
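A minimal sketch of implicit batching in Python asyncio (this is not OBatcher's OCaml API): callers issue operations as ordinary asynchronous calls, the structure transparently groups whatever is pending into a batch, and one caller processes the whole batch at once, which is where batch-level optimisation or parallelism would go.

```python
import asyncio

class BatchedCounter:
    def __init__(self):
        self._pending: list[tuple[int, asyncio.Future]] = []
        self._value = 0
        self._lock = asyncio.Lock()

    async def incr(self, amount: int) -> int:
        fut = asyncio.get_running_loop().create_future()
        self._pending.append((amount, fut))
        await asyncio.sleep(0)          # give other concurrent callers a chance to enqueue
        async with self._lock:          # one caller drains whatever has accumulated
            if not fut.done():
                self._run_batch()
        return await fut

    def _run_batch(self):
        batch, self._pending = self._pending, []
        # Processing an a priori known batch is the easy part: here it is a plain
        # loop, but a real structure could hand the batch to parallel workers.
        for amount, fut in batch:
            self._value += amount
            fut.set_result(self._value)

async def main():
    c = BatchedCounter()
    print(await asyncio.gather(*(c.incr(1) for _ in range(5))))   # [1, 2, 3, 4, 5]

asyncio.run(main())
```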
DOCE: Finding the Sweet Spot for Execution-Based Code Generation
arXiv - CS - Programming Languages Pub Date : 2024-08-25 DOI: arxiv-2408.13745
Haau-Sing Li, Patrick Fernandes, Iryna Gurevych, André F. T. Martins
{"title":"DOCE: Finding the Sweet Spot for Execution-Based Code Generation","authors":"Haau-Sing Li, Patrick Fernandes, Iryna Gurevych, André F. T. Martins","doi":"arxiv-2408.13745","DOIUrl":"https://doi.org/arxiv-2408.13745","url":null,"abstract":"Recently, a diverse set of decoding and reranking procedures have been shown\u0000effective for LLM-based code generation. However, a comprehensive framework\u0000that links and experimentally compares these methods is missing. We address\u0000this by proposing Decoding Objectives for Code Execution, a comprehensive\u0000framework that includes candidate generation, $n$-best reranking, minimum Bayes\u0000risk (MBR) decoding, and self-debugging as the core components. We then study\u0000the contributions of these components through execution-based evaluation\u0000metrics. Our findings highlight the importance of execution-based methods and\u0000the difference gap between execution-based and execution-free methods.\u0000Furthermore, we assess the impact of filtering based on trial unit tests, a\u0000simple and effective strategy that has been often overlooked in prior works. We\u0000also propose self-debugging on multiple candidates, obtaining state-of-the-art\u0000performance on reranking for code generation. We expect our framework to\u0000provide a solid guideline for future research on code generation.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
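One component named above, minimum Bayes risk decoding with an execution-based metric, can be sketched in a few lines of Python. The "candidates" here are plain expressions and the "tests" are integer inputs, purely for illustration; the framework itself wraps this around LLM-generated programs and real unit tests.

```python
def run(candidate: str, x: int):
    """Toy 'execution' of a candidate program on one test input."""
    try:
        return eval(candidate, {}, {"x": x})
    except Exception:
        return None

def mbr_select(candidates: list[str], test_inputs: list[int]) -> str:
    outputs = {c: [run(c, x) for x in test_inputs] for c in candidates}
    def agreement(a: str, b: str) -> float:
        return sum(u == v and u is not None
                   for u, v in zip(outputs[a], outputs[b])) / len(test_inputs)
    # Minimum Bayes risk under a uniform candidate distribution: pick the candidate
    # whose execution results agree most with the other candidates' results.
    return max(candidates,
               key=lambda c: sum(agreement(c, o) for o in candidates if o != c))

candidates = ["x * 2", "x + x", "x ** 2"]      # two agree; one is an outlier
print(mbr_select(candidates, [1, 2, 3, 5]))    # 'x * 2': the consensus behaviour wins
```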
Which Part of the Heap is Useful? Improving Heap Liveness Analysis
arXiv - CS - Programming Languages Pub Date : 2024-08-23 DOI: arxiv-2408.12947
Vini Kanvar, Uday P. Khedker
{"title":"Which Part of the Heap is Useful? Improving Heap Liveness Analysis","authors":"Vini Kanvar, Uday P. Khedker","doi":"arxiv-2408.12947","DOIUrl":"https://doi.org/arxiv-2408.12947","url":null,"abstract":"With the growing sizes of data structures allocated in heap, understanding\u0000the actual use of heap memory is critically important for minimizing cache\u0000misses and reclaiming unused memory. A static analysis aimed at this is\u0000difficult because the heap locations are unnamed. Using allocation sites to\u0000name them creates very few distinctions making it difficult to identify\u0000allocated heap locations that are not used. Heap liveness analysis using access\u0000graphs solves this problem by (a) using a storeless model of heap memory by\u0000naming the locations with access paths, and (b) representing the unbounded sets\u0000of access paths (which are regular languages) as finite automata. We improve the scalability and efficiency of heap liveness analysis, and\u0000reduce the amount of computed heap liveness information by using deterministic\u0000automata and by minimizing the inclusion of aliased access paths in the\u0000language. Practically, our field-, flow-, context-sensitive liveness analysis\u0000on SPEC CPU2006 benchmarks scales to 36 kLoC (existing analysis scales to 10.5\u0000kLoC) and improves efficiency even up to 99%. For some of the benchmarks, our\u0000technique shows multifold reduction in the computed liveness information,\u0000ranging from 2 to 100 times (in terms of the number of live access paths),\u0000without compromising on soundness.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
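An illustrative sketch of the representation this builds on: the set of live access paths is a regular language over field names, so it can be stored as a small deterministic automaton and liveness queries become membership checks. The list-like field names below are invented for the example.

```python
# DFA for the access-path language x(.next)*(.data)? over field names: the root x,
# any chain of `next` fields, optionally ending in a `data` field.
DFA = {
    ("start", "x"):    "root",
    ("root",  "next"): "root",
    ("root",  "data"): "done",
}
LIVE_STATES = {"root", "done"}   # prefixes of a live path must stay allocated too

def is_live(access_path: list[str]) -> bool:
    """Membership in the regular language of live access paths."""
    state = "start"
    for field in access_path:
        state = DFA.get((state, field))
        if state is None:
            return False
    return state in LIVE_STATES

print(is_live(["x", "next", "next", "data"]))   # True: used later, keep it live
print(is_live(["x", "prev"]))                   # False: dead, a candidate to reclaim
```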
LOUD: Synthesizing Strongest and Weakest Specifications
arXiv - CS - Programming Languages Pub Date : 2024-08-22 DOI: arxiv-2408.12539
Kanghee Park, Xuanyu Peng, Loris D'Antoni
{"title":"LOUD: Synthesizing Strongest and Weakest Specifications","authors":"Kanghee Park, Xuanyu Peng, Loris D'Antoni","doi":"arxiv-2408.12539","DOIUrl":"https://doi.org/arxiv-2408.12539","url":null,"abstract":"Specifications allow us to formally state and understand what programs are\u0000intended to do. To help one extract useful properties from code, Park et al.\u0000recently proposed a framework that given (i) a quantifier-free query posed\u0000about a set of function definitions, and (ii) a domain-specific language L in\u0000which each extracted property is to be expressed (we call properties in the\u0000language L-properties), synthesizes a set of L-properties such that each of the\u0000property is a strongest L-consequence for the query: the property is an\u0000over-approximation of query and there is no other L-property that\u0000over-approximates query and is strictly more precise than each property. The framework by Park et al. has two key limitations. First, it only supports\u0000quantifier-free query formulas and thus cannot synthesize specifications for\u0000queries involving nondeterminism, concurrency, etc. Second, it can only compute\u0000L-consequences, i.e., over-approximations of the program behavior. This paper addresses these two limitations and presents a framework, Loud,\u0000for synthesizing strongest L-consequences and weakest L-implicants (i.e.,\u0000under-approximations of the query) for function definitions that can involve\u0000existential quantifiers. We implemented a solver, Aspire, for problems expressed in Loud which can be\u0000used to describe and identify sources of bugs in both deterministic and\u0000nondeterministic programs, extract properties from concurrent programs, and\u0000synthesize winning strategies in two-player games.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
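A brute-force sketch of the synthesis goal (not the Loud/Aspire algorithm): given observed behaviours of max as the query and a tiny hand-written property language L, keep every L-property that over-approximates the query and is not strictly weaker than another kept one. Implication is checked only on a finite sample space, which suffices for illustration.

```python
# Query: observed input/output behaviours of max on a small grid.
behaviours = [((x, y), max(x, y)) for x in range(-3, 4) for y in range(-3, 4)]

# A tiny property language L over (x, y, result).
L = {
    "r >= x":           lambda x, y, r: r >= x,
    "r >= y":           lambda x, y, r: r >= y,
    "r == x or r == y": lambda x, y, r: r == x or r == y,
    "r >= x + y":       lambda x, y, r: r >= x + y,   # not a consequence of max
}

def holds(p):
    """Over-approximation check: the property holds on every observed behaviour."""
    return all(p(x, y, r) for (x, y), r in behaviours)

consequences = {name: p for name, p in L.items() if holds(p)}

SPACE = [(x, y, r) for x in range(-3, 4) for y in range(-3, 4) for r in range(-6, 7)]

def strictly_stronger(p, q):
    """p implies q on the sample space, but not the other way around."""
    imp = all(not p(*t) or q(*t) for t in SPACE)
    rev = all(not q(*t) or p(*t) for t in SPACE)
    return imp and not rev

strongest = [n for n, p in consequences.items()
             if not any(strictly_stronger(q, p)
                        for m, q in consequences.items() if m != n)]
print(strongest)   # ['r >= x', 'r >= y', 'r == x or r == y']
```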
LLM4VV: Exploring LLM-as-a-Judge for Validation and Verification Testsuites
arXiv - CS - Programming Languages Pub Date : 2024-08-21 DOI: arxiv-2408.11729
Zachariah Sollenberger, Jay Patel, Christian Munley, Aaron Jarmusch, Sunita Chandrasekaran
{"title":"LLM4VV: Exploring LLM-as-a-Judge for Validation and Verification Testsuites","authors":"Zachariah Sollenberger, Jay Patel, Christian Munley, Aaron Jarmusch, Sunita Chandrasekaran","doi":"arxiv-2408.11729","DOIUrl":"https://doi.org/arxiv-2408.11729","url":null,"abstract":"Large Language Models (LLM) are evolving and have significantly\u0000revolutionized the landscape of software development. If used well, they can\u0000significantly accelerate the software development cycle. At the same time, the\u0000community is very cautious of the models being trained on biased or sensitive\u0000data, which can lead to biased outputs along with the inadvertent release of\u0000confidential information. Additionally, the carbon footprints and the\u0000un-explainability of these black box models continue to raise questions about\u0000the usability of LLMs. With the abundance of opportunities LLMs have to offer, this paper explores\u0000the idea of judging tests used to evaluate compiler implementations of\u0000directive-based programming models as well as probe into the black box of LLMs.\u0000Based on our results, utilizing an agent-based prompting approach and setting\u0000up a validation pipeline structure drastically increased the quality of\u0000DeepSeek Coder, the LLM chosen for the evaluation purposes.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
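A schematic sketch of an agent-style judging loop with a validation stage, in the spirit described above. The ask_model parameter stands in for whatever model endpoint is used (DeepSeek Coder in the paper); the lambda stubs at the bottom exist only so the sketch runs and are not a real model or compiler interface.

```python
from typing import Callable

def judge(test_src: str, spec: str,
          ask_model: Callable[[str], str],
          validate: Callable[[str], bool],
          max_rounds: int = 3) -> str:
    """Ask the model for a PASS/FAIL verdict; accept PASS only if the candidate
    test also clears an independent validation step, otherwise re-prompt."""
    prompt = (f"Specification excerpt:\n{spec}\n\nCandidate test:\n{test_src}\n"
              "Reply PASS or FAIL with a one-line reason.")
    for _ in range(max_rounds):
        verdict = ask_model(prompt)
        if verdict.startswith("FAIL") or validate(test_src):
            return verdict
        prompt += "\nNote: the test failed independent validation; reconsider."
    return "FAIL (validation never succeeded)"

# Stub usage only, so the sketch is self-contained.
print(judge("int main(void) { return 0; }", "a conforming test must exit cleanly",
            ask_model=lambda p: "PASS: the test trivially exits with status 0",
            validate=lambda src: "main" in src))
```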
Inference Plans for Hybrid Particle Filtering
arXiv - CS - Programming Languages Pub Date : 2024-08-21 DOI: arxiv-2408.11283
Ellie Y. Cheng, Eric Atkinson, Guillaume Baudart, Louis Mandel, Michael Carbin
{"title":"Inference Plans for Hybrid Particle Filtering","authors":"Ellie Y. Cheng, Eric Atkinson, Guillaume Baudart, Louis Mandel, Michael Carbin","doi":"arxiv-2408.11283","DOIUrl":"https://doi.org/arxiv-2408.11283","url":null,"abstract":"Advanced probabilistic programming languages (PPLs) use hybrid inference\u0000systems to combine symbolic exact inference and Monte Carlo methods to improve\u0000inference performance. These systems use heuristics to partition random\u0000variables within the program into variables that are encoded symbolically and\u0000variables that are encoded with sampled values, and the heuristics are not\u0000necessarily aligned with the performance evaluation metrics used by the\u0000developer. In this work, we present inference plans, a programming interface\u0000that enables developers to control the partitioning of random variables during\u0000hybrid particle filtering. We further present Siren, a new PPL that enables\u0000developers to use annotations to specify inference plans the inference system\u0000must implement. To assist developers with statically reasoning about whether an\u0000inference plan can be implemented, we present an abstract-interpretation-based\u0000static analysis for Siren for determining inference plan satisfiability. We\u0000prove the analysis is sound with respect to Siren's semantics. Our evaluation\u0000applies inference plans to three different hybrid particle filtering algorithms\u0000on a suite of benchmarks and shows that the control provided by inference plans\u0000enables speed ups of 1.76x on average and up to 206x to reach target accuracy,\u0000compared to the inference plans implemented by default heuristics; the results\u0000also show that inference plans improve accuracy by 1.83x on average and up to\u0000595x with less or equal runtime, compared to the default inference plans. We\u0000further show that the static analysis is precise in practice, identifying all\u0000satisfiable inference plans in 27 out of the 33 benchmark-algorithm\u0000combinations.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
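The choice an inference plan controls can be seen in a one-variable toy model (plain Python, not Siren): the same Gaussian variable can be handled symbolically with an exact conjugate update or by sampling particles and weighting them, and the plan decides which variables get which treatment. The numbers below are invented for the example.

```python
import math, random

PRIOR_MEAN, PRIOR_VAR, OBS_VAR = 0.0, 1.0, 0.5
y_obs = 1.2   # observe y = x + noise

def symbolic_update():
    """Exact posterior for x given y under the conjugate Gaussian model."""
    post_var = 1.0 / (1.0 / PRIOR_VAR + 1.0 / OBS_VAR)
    post_mean = post_var * (PRIOR_MEAN / PRIOR_VAR + y_obs / OBS_VAR)
    return post_mean, post_var

def sampled_update(n_particles=10_000):
    """What the filter does when x is marked as sampled: weight prior particles
    by the observation likelihood and average."""
    xs = [random.gauss(PRIOR_MEAN, math.sqrt(PRIOR_VAR)) for _ in range(n_particles)]
    ws = [math.exp(-(y_obs - x) ** 2 / (2 * OBS_VAR)) for x in xs]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

print(symbolic_update())   # exact: (0.8, 0.333...)
print(sampled_update())    # a noisy estimate near 0.8; accuracy depends on n_particles
```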
A type system for data flow and alias analysis in ReScript
arXiv - CS - Programming Languages Pub Date : 2024-08-21 DOI: arxiv-2408.11954
Nicky Ask Lund, Hans Hüttel
{"title":"A type system for data flow and alias analysis in ReScript","authors":"Nicky Ask Lund, Hans Hüttel","doi":"arxiv-2408.11954","DOIUrl":"https://doi.org/arxiv-2408.11954","url":null,"abstract":"ReScript introduces a strongly typed language that targets JavaScript, as an\u0000alternative to gradually typed languages, such as TypeScript. In this paper, we\u0000present a type system for data-flow analysis for a subset of the ReScript\u0000language, more specific for a lambda-calculus with mutability and pattern\u0000matching. The type system is a local analysis that collects information about\u0000what variables are used and alias information.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
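A small sketch of the kind of local analysis described (not the paper's type system): walk a tiny expression AST and collect which variables are used and which let-bindings are aliases of another variable. The AST shapes are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Let:
    name: str
    bound: object
    body: object

@dataclass
class App:
    fn: object
    arg: object

def analyse(expr, used=None, aliases=None):
    """Collect used variable names and which let-bound names alias another variable."""
    used = set() if used is None else used
    aliases = {} if aliases is None else aliases
    match expr:
        case Var(name):
            used.add(name)
        case Let(name, bound, body):
            if isinstance(bound, Var):      # `let y = x` makes y an alias of x
                aliases[name] = bound.name
            analyse(bound, used, aliases)
            analyse(body, used, aliases)
        case App(fn, arg):
            analyse(fn, used, aliases)
            analyse(arg, used, aliases)
    return used, aliases

# let y = x in let z = f y in z
prog = Let("y", Var("x"), Let("z", App(Var("f"), Var("y")), Var("z")))
print(analyse(prog))   # ({'x', 'y', 'f', 'z'}, {'y': 'x'})
```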