cs.PL
arXiv:2501.00169v1 Announce Type: new Abstract: Deep Learning experiments have critical requirements regarding the careful handling of their datasets as well as the efficient and correct usage of APIs that interact with hardware accelerators. On the one hand, software mistakes during data handling can contaminate experiments and lead to incorrect results. On the other hand, poorly coded APIs that interact with the hardware can lead to sub-optimal usage and untrustworthy conclusions. In this work we investigate the use of Linear Logic for the analysis of Deep Learning experiments. We show that primitives and operators of Linear Logic can be used to express: (i) an abstract representation of the control flow of an experiment, (ii) a set of available experimental resources, such as API calls to the underlying data structures and hardware, as well as (iii) reasoning rules about the correct consumption of resources during experiments. Our proposed model is not only lightweight but also easy to comprehend, having both a symbolic and a visual component. Finally, its artifacts are themselves proofs in Linear Logic that can be readily verified by off-the-shelf reasoners.
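As a schematic illustration of this style of encoding (a sketch only: the resource names below are invented for illustration and are not taken from the paper), linear implication $\multimap$ and multiplicative conjunction $\otimes$ can state that a raw dataset may be split exactly once, and that each split is consumed by exactly one downstream step:

\[
\mathit{rawData} \multimap (\mathit{trainSplit} \otimes \mathit{testSplit}), \qquad
\mathit{trainSplit} \multimap \mathit{trainedModel}, \qquad
\mathit{trainedModel} \otimes \mathit{testSplit} \multimap \mathit{evalReport}
\]

Because $\multimap$ consumes its antecedent, there is no proof in which $\mathit{rawData}$ is split twice or in which $\mathit{testSplit}$ is also consumed during training, which is precisely the kind of contamination the abstract targets.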
arXiv:2501.00642v1 Announce Type: new Abstract: Large Language Model (LLM)-based agents are transforming the programming language landscape by facilitating learning for beginners, enabling code generation, and optimizing documentation workflows. Hardware Description Languages (HDLs), with their smaller user community, stand to benefit significantly from the application of LLMs as tools for learning new HDLs. This paper investigates the challenges and solutions of enabling LLMs for HDLs, particularly for HDLs that LLMs have not been previously trained on. This work introduces HDLAgent, an AI agent optimized for LLMs with limited knowledge of various HDLs. It significantly enhances off-the-shelf LLMs.
arXiv:2501.00655v1 Announce Type: new Abstract: Compilers are complex, and significant effort has been expended on testing them. Techniques such as random program generation and differential testing have proved highly effective and have uncovered thousands of bugs in production compilers. The majority of effort has been expended on validating that a compiler produces correct code for a given input, while less attention has been paid to ensuring that the compiler produces performant code. In this work we adapt differential testing to the task of identifying missed optimization opportunities in compilers. We develop a novel testing approach which combines large language models (LLMs) with a series of differential testing strategies and use them to find missing code size optimizations in C / C++ compilers. The advantage of our approach is its simplicity. We offload the complex task of generating random code to an off-the-shelf LLM, and use heuristics and analyses to identify anomalous compiler behavior. Our approach requires fewer than 150 lines of code to implement. This simplicity makes it extensible. By simply changing the target compiler and initial LLM prompt we port the approach from C / C++ to Rust and Swift, finding bugs in both. To date we have reported 24 confirmed bugs in production compilers, and conclude that LLM-assisted testing is a promising avenue for detecting optimization bugs in real world compilers.
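A rough sketch of the size-based differential comparison described above (this is not the authors' tool: the compilers, the C snippet, and the 10% divergence threshold are placeholders, and the script assumes gcc, clang, and the binutils size utility are on PATH):

# Compare the .text size two C compilers emit at -Os for the same function;
# a large divergence is a hint of a missed size optimization, not a verdict.
import os
import subprocess
import tempfile

C_SOURCE = """
int count_set_bits(unsigned x) {
    int n = 0;
    while (x) { x &= x - 1; n++; }
    return n;
}
"""

def text_size(compiler, src, obj):
    # Compile to an object file at -Os and read the text column of `size`.
    subprocess.run([compiler, "-Os", "-c", src, "-o", obj], check=True)
    report = subprocess.run(["size", obj], capture_output=True, text=True, check=True)
    return int(report.stdout.splitlines()[1].split()[0])

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "f.c")
    with open(src, "w") as fh:
        fh.write(C_SOURCE)
    sizes = {cc: text_size(cc, src, os.path.join(tmp, cc + ".o"))
             for cc in ("gcc", "clang")}
    print(sizes)
    if max(sizes.values()) > 1.1 * min(sizes.values()):
        print("size divergence: possible missed optimization, inspect the assembly")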
arXiv:2304.08391v2 Announce Type: replace Abstract: This paper describes a new open-source proof processing tool, mizar-rs, a wholesale reimplementation of core parts of the Mizar proof system, written in Rust. In particular, the "checker" and "analyzer" of Mizar are implemented, which together form the trusted core of Mizar. This is to our knowledge the first and only external implementation of these components. Thanks to the loose coupling of Mizar's passes, it is possible to use the checker as a drop-in replacement for the original, and we have used this to verify the entire MML in 11.8 minutes on 8 cores, a 4.8x speedup over the original Pascal implementation. Since Mizar is not designed to have a small trusted core, checking Mizar proofs entails following Mizar closely, so our ability to detect bugs is limited. Nevertheless, we were able to find multiple memory errors and four soundness bugs in the original (which were not being exploited in MML), in addition to one non-critical bug that was being exploited in 46 different MML articles. We hope to use this checker as a base for proof export tooling, as well as for revitalizing development of the language.
arXiv:2412.18134v1 Announce Type: new Abstract: The correctness of computations remains a significant challenge in computer science, with traditional approaches relying on automated testing or formal verification. Self-testing/correcting programs introduce an alternative paradigm, allowing a program to verify and correct its own outputs via randomized reductions, a concept that previously required manual derivation. In this paper, we present Bitween, a method and tool for automated learning of randomized (self)-reductions and program properties in numerical programs. Bitween combines symbolic analysis and machine learning, with a surprising finding: polynomial-time linear regression, a basic optimization method, is not only sufficient but also highly effective for deriving complex randomized self-reductions and program invariants, often outperforming sophisticated mixed-integer linear programming solvers. We establish a theoretical framework for learning these reductions and introduce RSR-Bench, a benchmark suite for evaluating Bitween's capabilities on scientific and machine learning functions. Our empirical results show that Bitween surpasses state-of-the-art tools in scalability, stability, and sample efficiency when evaluated on nonlinear invariant benchmarks like NLA-DigBench. Bitween is open-source as a Python package and accessible via a web interface that supports C language programs.
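To give a feel for the kind of property such regression can recover (a minimal hand-rolled sketch using NumPy, not the Bitween tool; the hypothesis template is chosen by hand here), the coefficients of a candidate randomized self-reduction f(x + r) = a*f(x) + b*f(r) + c*x*r + d for f(x) = x^2 can be recovered from random samples by least squares:

# Fit the coefficients of a candidate randomized self-reduction for f(x) = x**2.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x ** 2

x = rng.uniform(-10, 10, 200)
r = rng.uniform(-10, 10, 200)

# Hypothesis template: f(x + r) = a*f(x) + b*f(r) + c*x*r + d
features = np.column_stack([f(x), f(r), x * r, np.ones_like(x)])
target = f(x + r)

coeffs, *_ = np.linalg.lstsq(features, target, rcond=None)
print(np.round(coeffs, 6))   # expect approximately [1, 1, 2, 0], i.e. (x+r)^2 = x^2 + r^2 + 2xr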
arXiv:2408.07843v2 Announce Type: replace Abstract: There is a continuing interest in using standard language constructs for accelerated computing in order to avoid (sometimes vendor-specific) external APIs. For Fortran codes, the {\tt do concurrent} (DC) loop has been successfully demonstrated on the NVIDIA platform. However, support for DC on other platforms has taken longer to implement. Recently, Intel has added DC GPU offload support to its compiler, as has HPE for AMD GPUs. In this paper, we explore the current portability of using DC across GPU vendors using the in-production solar surface flux evolution code, HipFT. We discuss implementation and compilation details, including when/where using directive APIs for data movement is needed/desired compared to using a unified memory system. The performance achieved on both data center and consumer platforms is shown.
arXiv:2412.14234v2 Announce Type: replace Abstract: Despite extensive usage in high-performance, low-level systems programming applications, C is susceptible to vulnerabilities due to manual memory management and unsafe pointer operations. Rust, a modern systems programming language, offers a compelling alternative. Its unique ownership model and type system ensure memory safety without sacrificing performance. In this paper, we present Syzygy, an automated approach to translate C to safe Rust. Our technique uses a synergistic combination of LLM-driven code and test translation guided by dynamic-analysis-generated execution information. This paired translation runs incrementally in a loop over the program in dependency order of the code elements while maintaining per-step correctness. Our approach offers novel insights into combining the strengths of LLMs and dynamic analysis when scaling and pairing code generation with testing. We apply our approach to successfully translate Zopfli, a high-performance compression library with ~3000 lines of code and 98 functions. We validate the translation by testing equivalence with the source C program on a set of inputs. To our knowledge, this is the largest automated and test-validated C to safe Rust code translation achieved so far.
arXiv:2412.16179v1 Announce Type: new Abstract: The ubiquity of networking infrastructure in modern life necessitates scrutiny into networking fundamentals to ensure the safety and security of that infrastructure. The formalization of concurrent algorithms, a cornerstone of networking, is a longstanding area of research in which models and frameworks describing distributed systems are established. Despite its long history of study, the challenge of concisely representing and verifying concurrent algorithms remains unresolved. Existing formalisms, while powerful, often fail to capture the dynamic nature of real-world concurrency in a manner that is both comprehensive and scalable. This paper explores the evolution of formal models of concurrency over time, investigating their generality and utility for reasoning about real-world networking programs. Four foundational papers on formal concurrency are considered: Hoare's Parallel programming: An axiomatic approach, Milner's A Calculus of Mobile Processes, O'Hearn's Resources, Concurrency and Local Reasoning, and the recent development of Coq's Iris framework.
arXiv:2307.09776v2 Announce Type: replace Abstract: Recently, interest has increased in applying reactive synthesis to more practical richer-than-Boolean domains. One of the major challenges in this area is to establish when certain repeating behaviour terminates in a desired state when the number of steps is unbounded. This isolated problem, by itself, is already undecidable, and forms part of the overall difficulty of this kind of synthesis task. Relatively successful approaches exist for deterministic games with at most B{\"u}chi conditions. Our contribution goes beyond these, providing the first effective approach for solving symbolic synthesis problems with full LTL objectives, based on novel liveness refinements guided by the underlying game. Our CEGAR-based approach relies on a sound boolean abstraction of the problem, spuriousness checking of abstract counterstrategies through invariant checking, and extracting fresh safety or liveness properties of the concrete game from counterexamples. The latter are used to refine the abstraction, which is used to re-attempt synthesis. Our discrete synthesis tool outperforms the state-of-the-art on LIA benchmarks from the literature. We also introduce benchmarks that are out of scope for all other approaches.
arXiv:2309.01261v4 Announce Type: replace Abstract: Worst-case input generation aims to automatically generate inputs that exhibit the worst-case performance of programs. It has several applications, and can, for example, detect vulnerabilities to denial-of-service (DoS) attacks. However, it is non-trivial to generate worst-case inputs for concurrent programs, particularly for resources like memory where the peak cost depends on how processes are scheduled. This article presents the first sound worst-case input generation algorithm for concurrent programs under non-monotone resource metrics like memory. The key insight is to leverage resource-annotated session types and symbolic execution. Session types describe communication protocols on channels in process calculi. Equipped with resource annotations, resource-annotated session types not only encode cost bounds but also indicate how many resources can be reused and transferred between processes. This information is critical for identifying a worst-case execution path during symbolic execution. The algorithm is sound: if it returns any input, it is guaranteed to be a valid worst-case input. The algorithm is also relatively complete: as long as resource-annotated session types are sufficiently expressive and the background theory for SMT solving is decidable, a worst-case input is guaranteed to be returned. A simple case study of a web server's memory usage demonstrates the utility of the worst-case input generation algorithm.
arXiv:2405.05751v2 Announce Type: replace Abstract: We introduce Mirage, the first multi-level superoptimizer for tensor programs. A key idea in Mirage is $\mu$Graphs, a uniform representation of tensor programs at the kernel, thread block, and thread levels of the GPU compute hierarchy. $\mu$Graphs enable Mirage to discover novel optimizations that combine algebraic transformations, schedule transformations, and generation of new custom kernels. To navigate the large search space, Mirage introduces a pruning technique based on abstraction that significantly reduces the search space and provides a certain optimality guarantee. To ensure that the optimized $\mu$Graph is equivalent to the input program, Mirage introduces a probabilistic equivalence verification procedure with strong theoretical guarantees. Our evaluation shows that Mirage outperforms existing approaches by 1.1-2.9$\times$ even for DNNs that are widely used and heavily optimized. Mirage is publicly available at https://github.com/mirage-project/mirage.
arXiv:2412.16161v1 Announce Type: new Abstract: In this short article I introduce the evitaicossa package, which provides functionality for antiassociative algebras in the R programming language; it is available on CRAN at https://CRAN.R-project.org/package=evitaicossa.
arXiv:2412.16206v1 Announce Type: new Abstract: Can we use the flow of information to understand type systems? I present two familiar type systems in pursuit of an `Information Aware' style, using information effects to reveal data flow and help in implementing them. I also calculate a general, scoped, constraint-based representation of typechecking problems from the typing rules.
arXiv:2412.16185v1 Announce Type: new Abstract: The FRACTRAN programs $\sqrt{2}$GAME and NR$\sqrt{2}$GAME are presented, both of which compute the decimal expansion of $\sqrt{2}$. Our $\sqrt{2}$GAME is analogous to Conway's PIGAME program. In fact, our proof carries over to PIGAME to produce a simpler proof of Conway's theorem as well as highlight how the efficiency of the program can be improved. NR$\sqrt{2}$GAME encodes the canonical example of the Newton--Raphson method in FRACTRAN.
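For readers unfamiliar with FRACTRAN's execution model (repeatedly replace the current integer n by n*f, where f is the first program fraction for which n*f is an integer), a minimal interpreter fits in a few lines; the sample program below is Conway's well-known PRIMEGAME rather than the $\sqrt{2}$ programs from the paper:

# Minimal FRACTRAN interpreter. Started from 2, Conway's PRIMEGAME reaches
# powers of two whose exponents are exactly the primes, in order.
from fractions import Fraction

PRIMEGAME = [Fraction(*p) for p in
             [(17, 91), (78, 85), (19, 51), (23, 38), (29, 33), (77, 29),
              (95, 23), (77, 19), (1, 17), (11, 13), (13, 11), (15, 14),
              (15, 2), (55, 1)]]

def fractran(program, n, steps):
    # Each step multiplies n by the first fraction that keeps it an integer.
    for _ in range(steps):
        for f in program:
            if (n * f).denominator == 1:
                n = int(n * f)
                yield n
                break
        else:
            return  # no fraction applies: the program halts

primes = []
for value in fractran(PRIMEGAME, 2, 100000):
    if value & (value - 1) == 0:              # value is a power of two
        primes.append(value.bit_length() - 1)
        if len(primes) == 5:
            break
print(primes)   # [2, 3, 5, 7, 11]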
arXiv:2412.17330v1 Announce Type: new Abstract: Many approaches to program synthesis perform a combinatorial search within a large space of programs to find one that satisfies a given specification. To tame the search space blowup, previous works introduced probabilistic and neural approaches to guide this combinatorial search by inducing heuristic cost functions. Best-first search algorithms are guaranteed to explore programs in the exact order induced by the cost function, significantly reducing the portion of the program space to be explored. We present a new best-first search algorithm called EcoSearch, which is the first constant-delay algorithm for pre-generation cost functions: the amount of compute required between outputting two programs is constant, and in particular does not increase over time. This key property yields important speedups: we observe that EcoSearch outperforms its predecessors on two classic domains.
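The baseline such work builds on is easy to sketch (a generic cost-ordered enumeration over a toy expression grammar, not the EcoSearch algorithm itself; the grammar and costs are arbitrary placeholders):

# Best-first enumeration: programs are emitted in nondecreasing heuristic cost.
import heapq
import itertools

LEAVES = [("x", 1.0), ("1", 1.0)]
OPS = [("+", 1.0), ("*", 2.0)]

def best_first(max_programs):
    tie = itertools.count()   # tie-breaker so the heap never compares strings
    heap = [(cost, next(tie), expr) for expr, cost in LEAVES]
    heapq.heapify(heap)
    seen = set()
    while heap and len(seen) < max_programs:
        cost, _, expr = heapq.heappop(heap)
        if expr in seen:
            continue
        seen.add(expr)
        yield cost, expr
        # Grow the popped program by attaching each leaf under each operator.
        for op, op_cost in OPS:
            for leaf, leaf_cost in LEAVES:
                heapq.heappush(heap, (cost + op_cost + leaf_cost, next(tie),
                                      f"({expr} {op} {leaf})"))

for cost, expr in best_first(8):
    print(f"{cost:4.1f}  {expr}")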
arXiv:2203.04608v5 Announce Type: replace Abstract: Probabilistic programming languages (PPLs) allow programmers to construct statistical models and then simulate data or perform inference over them. Many PPLs restrict models to a particular instance of simulation or inference, limiting their reusability. In other PPLs, models are not readily composable. Using Haskell as the host language, we present an embedded domain specific language based on algebraic effects, where probabilistic models are modular, first-class, and reusable for both simulation and inference. We also demonstrate how simulation and inference can be expressed naturally as composable program transformations using algebraic effect handlers.
arXiv:2301.06136v4 Announce Type: replace Abstract: We present a data-driven approach to the quantitative verification of probabilistic programs and stochastic dynamical models. Our approach leverages neural networks to compute tight and sound bounds for the probability that a stochastic process hits a target condition within finite time. This problem subsumes a variety of quantitative verification questions, from the reachability and safety analysis of discrete-time stochastic dynamical models, to the study of assertion-violation and termination analysis of probabilistic programs. We rely on neural networks to represent supermartingale certificates that yield such probability bounds, which we compute using a counterexample-guided inductive synthesis loop: we train the neural certificate while tightening the probability bound over samples of the state space using stochastic optimisation, and then we formally check the certificate's validity over every possible state using satisfiability modulo theories; if we receive a counterexample, we add it to our set of samples and repeat the loop until validity is confirmed. We demonstrate on a diverse set of benchmarks that, thanks to the expressive power of neural networks, our method yields smaller or comparable probability bounds than existing symbolic methods in all cases, and that our approach succeeds on models that are entirely beyond the reach of such alternative techniques.
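The loop structure itself is simple to convey (a toy sketch: a scalar bound stands in for the neural certificate and a grid scan stands in for the SMT check, both of which are far more sophisticated in the paper):

# Counterexample-guided inductive synthesis (CEGIS) skeleton.
def cegis(initial_samples, train, check, max_rounds=100):
    samples = list(initial_samples)
    for _ in range(max_rounds):
        candidate = train(samples)     # fit a certificate to the current samples
        cex = check(candidate)         # look for a state where it fails
        if cex is None:
            return candidate           # valid everywhere: a sound bound
        samples.append(cex)            # refine with the counterexample and retry
    return None                        # budget exhausted

f = lambda x: x * (10 - x) / 25.0      # toy quantity to bound over [0, 10]
train = lambda xs: max(f(x) for x in xs)                      # tightest bound seen so far
check = lambda c: next((x / 10.0 for x in range(101) if f(x / 10.0) > c), None)

print(cegis([0.0], train, check))      # converges to the true maximum, 1.0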
arXiv:2407.20002v2 Announce Type: replace Abstract: Program verification tools are often implemented as front-end translations of an input program into an intermediate verification language (IVL) such as Boogie, GIL, Viper, or Why3. The resulting IVL program is then verified using an existing back-end verifier. A soundness proof for such a translational verifier needs to relate the input program and verification logic to the semantics of the IVL, which in turn needs to be connected with the verification logic implemented in the back-end verifiers. Performing such proofs is challenging due to the large semantic gap between the input and output programs and logics, especially for complex verification logics such as separation logic. This paper presents a formal framework for reasoning about translational separation logic verifiers. At its center is a generic core IVL that captures the essence of different separation logics. We define its operational semantics and formally connect it to two different back-end verifiers, which use symbolic execution and verification condition generation, resp. Crucially, this semantics uses angelic non-determinism to enable the application of different proof search algorithms and heuristics in the back-end verifiers. An axiomatic semantics for the core IVL simplifies reasoning about the front-end translation by performing essential proof steps once and for all in the equivalence proof with the operational semantics rather than for each concrete front-end translation. We illustrate the usefulness of our formal framework by instantiating our core IVL with elements of Viper and connecting it to two Viper back-ends as well as a front-end for concurrent separation logic. All our technical results have been formalized in Isabelle/HOL, including the core IVL and its semantics, the semantics of two back-ends for a subset of Viper, and all proofs.
arXiv:2410.10908v2 Announce Type: replace Abstract: Julia has been heralded as a potential successor to Python for scientific machine learning and numerical computing, boasting ergonomic and performance improvements. Since Julia's inception in 2012 and declaration of language goals in 2017, its ecosystem and language-level features have grown tremendously. In this paper, we take a modern look at Julia's features and ecosystem, assess the current state of the language, and discuss its viability and pitfalls as a replacement for Python as the de-facto scientific machine learning language. We call for the community to address Julia's language-level issues that are preventing further adoption.
arXiv:2412.15768v1 Announce Type: new Abstract: Processing large amounts of data fast, in constant and small space, is the point of stream processing and the reason for its increasing use. Alas, the most performant, imperative processing code tends to be almost impossible to read, let alone modify, reuse -- or write correctly. We present both a stream compilation theory and its implementation as a portable stream processing library, Strymonas, that lets us assemble complex stream pipelines just by plugging in simple combinators, and yet attain the performance of hand-written imperative loops and state machines. The library supports finite and infinite streams and offers a rich set of combinators: from map, filter, take(while) to flat-map (nesting), zip, map-accumulate and sliding windowing. The combinators may be freely composed, and yet the resulting convoluted imperative code contains no traces of combinator abstractions: no closures, intermediate objects or tuples. The high performance is portable and statically guaranteed, without relying on compiler or black-box optimizations. We greatly exceed the performance of the available stream processing libraries in OCaml. The library exists in two versions, OCaml and Scala 3, and supports pluggable backends for code generation (currently: C, OCaml and Scala). Strymonas has been developed in tandem with the equational theory of stateful streams. Our theoretical model can represent all desired pipelines and also guarantees the existence of unique normal forms, which are mappable to (fused) state machines. We describe the normalization algorithm, as a form of normalization-by-evaluation. Stream pipeline compilation and optimization are represented as normalization, and are hence deterministic and terminating, with a guaranteed outcome. The equational theory lets us state and prove the correctness of the complete fusion optimization.