Current journal: arXiv - CS - Programming Languages
  • Gillian: Compositional Symbolic Execution for All
    arXiv.cs.PL Pub Date : 2020-01-14
    José Fragoso Santos; Petar Maksimović; Sacha-Élie Ayoun; Philippa Gardner

    We present Gillian, a language-independent framework for the development of compositional symbolic analysis tools. Gillian supports three flavours of analysis: whole-program symbolic testing, full verification, and bi-abduction. It comes with fully parametric meta-theoretical results and a modular implementation, designed to minimise the instantiation effort required of the user. We evaluate Gillian by instantiating it to JavaScript and C, and perform its analyses on a set of data-structure libraries, obtaining results that indicate that Gillian is robust enough to reason about real-world programming languages.

  • Circular Proofs in First-Order Linear Logic with Least and Greatest Fixed Points
    arXiv.cs.PL Pub Date : 2020-01-15
    Farzaneh Derakhshan; Frank Pfenning

    Inductive and coinductive structures are everywhere in mathematics and computer science. The induction principle is well known and fully exploited to reason about inductive structures like natural numbers and finite lists. To prove theorems about coinductive structures such as infinite streams and infinite trees, we can appeal to bisimulation or the coinduction principle. Pure inductive and coinductive types, however, are not the only data structures we are interested in reasoning about. In this paper we present a calculus for proving theorems about mutually defined inductive and coinductive data types. Derivations are carried out in an infinitary sequent calculus for first-order intuitionistic multiplicative additive linear logic with fixed points. We enforce a condition on these derivations to ensure the cut elimination property and thus their validity. Our calculus is designed to reason about linear properties, but we also allow appeals to first-order theories such as arithmetic by adding an adjoint downgrade modality. We show the strength of our calculus by proving several theorems on (mutually defined) inductive and coinductive data types.

  • Handling polymorphic algebraic effects
    arXiv.cs.PL Pub Date : 2018-11-18
    Taro Sekiyama; Atsushi Igarashi

    Algebraic effects and handlers are a powerful abstraction mechanism to represent and implement control effects. In this work, we study their extension with parametric polymorphism that allows abstracting not only expressions but also effects and handlers. Although polymorphism makes it possible to reuse and reason about effect implementations more effectively, it has long been known that a naive combination of polymorphic effects and let-polymorphism breaks type safety. While type safety can often be recovered by restricting let-bound expressions---e.g., by adopting the value restriction or weak polymorphism---we propose a complementary approach that restricts handlers instead of let-bound expressions. Our key observation is that, informally speaking, a handler is safe if resumptions from the handler do not interfere with each other. To formalize our idea, we define a call-by-value lambda calculus that supports let-polymorphism and polymorphic algebraic effects and handlers, design a type system that rejects interfering handlers, and prove type safety of our calculus.

  • Ready, set, Go! Data-race detection and the Go language
    arXiv.cs.PL Pub Date : 2019-10-28
    Daniel Schnetzer Fava; Martin Steffen

    Data races are often discussed in the context of lock acquisition and release, with race-detection algorithms routinely relying on vector clocks as a means of capturing the relative ordering of events from different threads. In this paper, we present a data-race detector for a language with channel communication as its sole synchronization primitive, and provide a semantics directly tied to the happens-before relation, thus forgoing the notion of vector clocks.
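
As a point of reference for the abstract above, the following toy sketch (not the paper's algorithm, and using a hypothetical trace format) shows how a happens-before relation can be induced by program order plus send-to-receive channel edges, and how unordered conflicting accesses are then flagged as races.

```python
# Toy happens-before race check: events are ordered by program order
# within a thread and by a send -> receive edge per channel message.
from itertools import product

def happens_before(events, chan_edges):
    """events: list of (thread, event_id) in trace order;
    chan_edges: set of (send_id, recv_id) pairs.
    Returns the transitive happens-before relation on event ids."""
    edges = set(chan_edges)
    by_thread = {}
    for t, e in events:
        by_thread.setdefault(t, []).append(e)
    # program order: consecutive events of the same thread
    for es in by_thread.values():
        edges |= {(a, b) for a, b in zip(es, es[1:])}
    # naive transitive closure (fine for toy-sized traces)
    hb = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(hb), list(hb)):
            if b == c and (a, d) not in hb:
                hb.add((a, d))
                changed = True
    return hb

def racy(acc1, acc2, hb):
    """Accesses (event_id, location, is_write) race if they touch the
    same location, at least one writes, and neither happens before
    the other."""
    (e1, loc1, w1), (e2, loc2, w2) = acc1, acc2
    return (loc1 == loc2 and (w1 or w2)
            and (e1, e2) not in hb and (e2, e1) not in hb)
```

With a trace where thread A writes x then sends on a channel, and thread B receives then writes x, the send/receive edge orders the two writes; deleting that edge makes them race.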

  • The geometry of syntax and semantics for directed file transformations
    arXiv.cs.PL Pub Date : 2020-01-14
    Steve Huntsman; Michael Robinson

    We introduce a conceptual framework that associates syntax and semantics with vertical and horizontal directions in principal bundles and related constructions. This notion of geometry corresponds to a mechanism for performing goal-directed file transformations such as "eliminate unsafe syntax" and suggests various engineering practices.

  • Atomicity Checking in Linear Time using Vector Clocks
    arXiv.cs.PL Pub Date : 2020-01-14
    Umang Mathur; Mahesh Viswanathan

    Multi-threaded programs are challenging to write. Developers often need to reason about a prohibitively large number of thread interleavings to reason about the behavior of software. A non-interference property like atomicity can reduce this interleaving space by ensuring that any execution is equivalent to an execution where all atomic blocks are executed serially. We consider the well studied notion of conflict serializability for dynamically checking atomicity. Existing algorithms detect violations of conflict serializability by detecting cycles in a graph of transactions observed in a given execution. The size of such a graph can grow quadratically with the size of the trace making the analysis not scalable. In this paper, we present AeroDrome, a novel single pass linear time algorithm that uses vector clocks to detect violations of conflict serializability in an online setting. Experiments show that AeroDrome scales to traces with a large number of events with significant speedup.
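
The graph-based baseline that the abstract contrasts with AeroDrome can be sketched as follows (an illustrative toy with a hypothetical trace format, not AeroDrome itself): build a conflict graph over transactions and report a violation exactly when the graph has a cycle.

```python
def conflict_serializable(trace):
    """trace: list of (txn, op, loc) events in execution order,
    with op in {'r', 'w'}. Returns True iff the transaction
    conflict graph is acyclic (i.e. the trace is serializable)."""
    # Building edges pairwise is the quadratic step noted above.
    graph = {}
    for i, (t1, op1, l1) in enumerate(trace):
        for t2, op2, l2 in trace[i + 1:]:
            if t1 != t2 and l1 == l2 and "w" in (op1, op2):
                graph.setdefault(t1, set()).add(t2)
    color = {}  # 0 = unvisited, 1 = on DFS stack, 2 = done
    def dfs(n):
        color[n] = 1
        for m in graph.get(n, ()):
            c = color.get(m, 0)
            if c == 1 or (c == 0 and dfs(m)):
                return True  # back edge: cycle found
        color[n] = 2
        return False
    return not any(color.get(n, 0) == 0 and dfs(n) for n in graph)
```

For example, a trace where T1 writes x, T2 reads x, T2 writes y, and T1 then reads y produces edges T1 -> T2 and T2 -> T1, hence a cycle and a serializability violation.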

  • A Polymorphic RPC Calculus
    arXiv.cs.PL Pub Date : 2019-10-24
    Kwanghoon Choi; James Cheney; Simon Fowler; Sam Lindley

    The RPC calculus is a simple semantic foundation for multi-tier programming languages such as Links in which located functions can be written for client-server model. Subsequently, the typed RPC calculus is designed to capture the location information of functions by types and to drive location type-directed slicing compilations. However, the use of locations is currently limited to monomorphic ones, which is one of the gaps to overcome to put into practice the theory of RPC calculi for client-server model. This paper proposes a polymorphic RPC calculus to allow programmers to write succinct multi-tier programs using polymorphic location constructs. Then the polymorphic multi-tier programs can be automatically translated into programs only containing location constants amenable to the existing slicing compilation methods. We formulate a type system for the polymorphic RPC calculus, and prove its type soundness. Also, we design a monomorphization translation together with proofs on its type and semantic correctness for the translation.

  • Tabled Typeclass Resolution
    arXiv.cs.PL Pub Date : 2020-01-13
    Daniel Selsam; Sebastian Ullrich; Leonardo de Moura

    Typeclasses provide an elegant and effective way of managing ad-hoc polymorphism in both programming languages and interactive proof assistants. However, the increasingly sophisticated uses of typeclasses within proof assistants have exposed two critical problems with existing typeclass resolution procedures: the diamond problem, which causes exponential running times in both theory and practice, and the cycle problem, which causes loops in the presence of cycles and so thwarts many desired uses of typeclasses. We present a new typeclass resolution procedure, called tabled typeclass resolution, that solves these problems. We have implemented our procedure for the upcoming version (v4) of the Lean Theorem Prover, and we confirm empirically that our implementation is exponentially faster than existing systems in the presence of diamonds. Our procedure is sufficiently lightweight that it could easily be implemented in other systems. We hope our new procedure facilitates even more sophisticated uses of typeclasses in both software development and interactive theorem proving.
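
The diamond problem mentioned above can be illustrated with a toy resolution procedure (a sketch under assumed names, not Lean's actual algorithm): naive goal-directed search re-derives shared ancestors of a diamond exponentially often, while tabling (memoising answers per goal) visits each goal a bounded number of times.

```python
# A chain of n diamonds: goal ("d", i) needs ("l", i) and ("r", i),
# both of which need ("d", i + 1); ("d", n) is a leaf instance.
def diamond_graph(n):
    deps = {("d", n): []}
    for i in range(n):
        deps[("d", i)] = [("l", i), ("r", i)]
        deps[("l", i)] = [("d", i + 1)]
        deps[("r", i)] = [("d", i + 1)]
    return deps

def solve(goal, deps, table, counter):
    """Resolve `goal` against `deps`. Pass table={} to enable tabling,
    table=None for naive search. counter is a one-element list that
    counts goal visits."""
    counter[0] += 1
    if table is not None and goal in table:
        return table[goal]  # table hit: answer reused, not re-derived
    ok = all(solve(g, deps, table, counter) for g in deps[goal])
    if table is not None:
        table[goal] = ok
    return ok
```

On a chain of 10 diamonds, the naive search makes 4*2^10 - 3 = 4093 goal visits, while the tabled search makes only 4*10 + 1 = 41.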

  • Session Types with Arithmetic Refinements and Their Application to Work Analysis
    arXiv.cs.PL Pub Date : 2020-01-13
    Ankush Das; Frank Pfenning

    Session types statically prescribe bidirectional communication protocols for message-passing processes and are in a Curry-Howard correspondence with linear logic propositions. However, simple session types cannot specify properties beyond the type of exchanged messages. In this paper we extend the type system by using index refinements from linear arithmetic capturing intrinsic attributes of data structures and algorithms so that we can express and verify amortized cost of programs using ergometric types. We show that, despite the decidability of Presburger arithmetic, type equality and therefore also type checking are now undecidable, which stands in contrast to analogous dependent refinement type systems from functional languages. We also present a practical incomplete algorithm for type equality and an algorithm for type checking which is complete relative to an oracle for type equality. Process expressions in this explicit language are rather verbose, so we also introduce an implicit form and a sound and complete algorithm for reconstructing explicit programs, borrowing ideas from the proof-theoretic technique of focusing. We conclude by illustrating our systems and algorithms with a variety of examples that have been verified in our implementation.

  • Model-View-Update-Communicate: Session Types meet the Elm Architecture
    arXiv.cs.PL Pub Date : 2019-10-24
    Simon Fowler

    Session types are a type discipline for communication channel endpoints which allow conformance to protocols to be checked statically. Safely implementing session types requires linearity, usually in the form of a linear type system. Unfortunately, linear typing is difficult to integrate with graphical user interfaces (GUIs), and to date most programs using session types are command line applications. In this paper, we propose the first principled integration of session typing and GUI development by building upon the Model-View-Update (MVU) architecture, pioneered by the Elm programming language. We introduce $\lambda_{\textsf{MVU}}$, the first formal model of the MVU architecture, and prove it sound. By extending $\lambda_{\textsf{MVU}}$ with \emph{commands} as found in Elm, along with \emph{linearity} and \emph{model transitions}, we show the first formal integration of session typing and GUI programming. We implement our approach in the Links web programming language, and show examples including a two-factor authentication workflow and multi-room chat server.

  • Monotone recursive types and recursive data representations in Cedille
    arXiv.cs.PL Pub Date : 2020-01-09
    Christopher Jenkins; Aaron Stump

    Guided by Tarski's fixpoint theorem in order theory, we show how to derive monotone recursive types with constant-time roll and unroll operations within Cedille, an impredicative, constructive, and logically consistent pure type theory. As applications, we use monotone recursive types to generically derive two recursive representations of data in the lambda calculus, the Parigot and Scott encodings, together with constant-time destructors, a recursion scheme, and the standard induction principle.
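
The Scott encoding referenced above can be sketched in plain lambdas (here transliterated into Python, purely as an illustration): each number is its own case-analysis function, so the destructor is constant-time, in contrast to the Church encoding.

```python
# Scott-encoded naturals: a number n, applied to (zero-case, succ-case),
# immediately selects the right case.
zero = lambda z, s: z
succ = lambda n: (lambda z, s: s(n))

def to_int(n):
    """Decode a Scott numeral to a Python int (for inspection only)."""
    return n(0, lambda p: 1 + to_int(p))

def pred(n):
    """Constant-time destructor: ask n which case it is; the
    predecessor of zero is zero by convention."""
    return n(zero, lambda p: p)

two = succ(succ(zero))
```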

  • Automatic generation and verification of test-stable floating-point code
    arXiv.cs.PL Pub Date : 2020-01-07
    Laura Titolo; Mariano Moscato; Cesar A. Muñoz

    Test instability in a floating-point program occurs when the control flow of the program diverges from its ideal execution assuming real arithmetic. This phenomenon is caused by the presence of round-off errors that affect the evaluation of arithmetic expressions occurring in conditional statements. Unstable tests may lead to significant errors in safety-critical applications that depend on numerical computations. Writing programs that take test instability into consideration is a difficult task that requires expertise in finite-precision computations and rounding errors. This paper presents a toolchain to automatically generate and verify a provably correct test-stable floating-point program from a functional specification in real arithmetic. The input is a real-valued program written in the Prototype Verification System (PVS) specification language and the output is a transformed floating-point C program annotated with ANSI/ISO C Specification Language (ACSL) contracts. These contracts relate the floating-point program to its functional specification in real arithmetic. The transformed program detects if unstable tests may occur and, in these cases, issues a warning and terminates. An approach that combines the Frama-C analyzer, the PRECiSA round-off error estimator, and PVS is proposed to automatically verify that the generated program code is correct in the sense that, if the program terminates without a warning, it follows the same computational path as its real-valued functional specification.
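
A minimal example of the unstable tests described above (in Python rather than C, with real arithmetic modelled by exact fractions): the same guard takes different branches under real semantics and under IEEE-754 doubles, because 0.1 + 0.2 rounds up.

```python
from fractions import Fraction

def guard(a, b, c):
    # A conditional test whose outcome depends on the arithmetic used.
    return a + b <= c

# Ideal real-arithmetic semantics: 1/10 + 2/10 <= 3/10 holds exactly.
real_branch = guard(Fraction(1, 10), Fraction(2, 10), Fraction(3, 10))

# Floating-point semantics: 0.1 + 0.2 == 0.30000000000000004 > 0.3,
# so the program takes the other branch.
float_branch = guard(0.1, 0.2, 0.3)
```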

  • Don't Panic! Better, Fewer, Syntax Errors for LR Parsers
    arXiv.cs.PL Pub Date : 2018-04-19
    Lukas Diekmann; Laurence Tratt

    Syntax errors are generally easy to fix for humans, but not for parsers, in general, and LR parsers, in particular. Traditional 'panic mode' error recovery, though easy to implement and applicable to any grammar, often leads to a cascading chain of errors that drown out the original. More advanced error recovery techniques suffer less from this problem but have seen little practical use because their typical performance was seen as poor, their worst case unbounded, and the repairs they reported arbitrary. In this paper we show two generic error recovery algorithms that fix all three problems. First, our algorithms are the first to report the complete set of possible repair sequences for a given location, allowing programmers to select the one that best fits their intention. Second, on a corpus of 200,000 real-world syntactically invalid Java programs, we show that our best performing algorithm is able to repair 98.71% of files within a cut-off of 0.5s. Furthermore, we are also able to use the complete set of repair sequences to reduce the cascading error problem even further than previous approaches. Our best performing algorithm reports 442,252 error locations in the corpus to the user, while the panic mode algorithm reports 980,848 error locations: in other words, our algorithms reduce the cascading error problem by well over half.
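
The traditional 'panic mode' recovery that the abstract takes as its baseline can be sketched as follows (a hypothetical toy driver, not the paper's algorithms): on an error, the parser throws tokens away until a synchronising token, which is exactly what produces cascading follow-on errors when the skip lands in the wrong place.

```python
def panic_mode_recover(tokens, is_error_at, sync=(";",)):
    """Scan `tokens`; whenever `is_error_at` flags a token, record the
    error position and skip forward to the next synchronising token."""
    errors, i = [], 0
    while i < len(tokens):
        if is_error_at(tokens[i]):
            errors.append(i)
            # discard input until a member of `sync` is reached
            while i < len(tokens) and tokens[i] not in sync:
                i += 1
        i += 1
    return errors
```

For instance, on the token stream `x = @ junk ; y = 1 ;` with `@` as the offending token, the driver reports one error at position 2 and resumes after the semicolon.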

  • Deep Static Modeling of invokedynamic
    arXiv.cs.PL Pub Date : 2020-01-08
    George Fourtounis; Yannis Smaragdakis

    Java 7 introduced programmable dynamic linking in the form of the invokedynamic framework. Static analysis of code containing programmable dynamic linking has often been cited as a significant source of unsoundness in the analysis of Java programs. For example, Java lambdas, introduced in Java 8, are a very popular feature, which is, however, resistant to static analysis, since it mixes invokedynamic with dynamic code generation. These techniques invalidate static analysis assumptions: programmable linking breaks reasoning about method resolution while dynamically generated code is, by definition, not available statically. In this paper, we show that a static analysis can predictively model uses of invokedynamic while also cooperating with extra rules to handle the runtime code generation of lambdas. Our approach plugs into an existing static analysis and helps eliminate all unsoundness in the handling of lambdas (including associated features such as method references) and generic invokedynamic uses. We evaluate our technique on a benchmark suite of our own and on third-party benchmarks, uncovering all code previously unreachable due to unsoundness, and doing so highly efficiently.

  • Automatic Generation of Efficient Sparse Tensor Format Conversion Routines
    arXiv.cs.PL Pub Date : 2020-01-08
    Stephen Chou; Fredrik Kjolstad; Saman Amarasinghe

    This paper shows how to generate code that efficiently converts sparse tensors between disparate storage formats (data layouts) like CSR, DIA, ELL, and many others. We decompose sparse tensor conversion into three logical phases: coordinate remapping, analysis, and assembly. We then develop a language that precisely describes how different formats group together and order a tensor's nonzeros in memory. This enables a compiler to emit code that performs complex reorderings (remappings) of nonzeros when converting between formats. We additionally develop a query language that can extract complex statistics about sparse tensors, and we show how to emit efficient analysis code that computes such queries. Finally, we define an abstract interface that captures how data structures for storing a tensor can be efficiently assembled given specific statistics about the tensor. Disparate formats can implement this common interface, thus letting a compiler emit optimized sparse tensor conversion code for arbitrary combinations of a wide range of formats without hard-coding for any specific one. Our evaluation shows that our technique generates sparse tensor conversion routines with performance between 0.99 and 2.2$\times$ that of hand-optimized implementations in two widely used sparse linear algebra libraries, SPARSKIT and Intel MKL. By emitting code that avoids materializing temporaries, our technique also outperforms both libraries by between 1.4 and 3.4$\times$ for CSC/COO to DIA/ELL conversion.
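
To make the three logical phases named above concrete, here is a hand-written instance of the kind of routine the paper generates automatically (a Python sketch, not the paper's emitted code): converting COO coordinate triples to CSR.

```python
def coo_to_csr(m, rows, cols, vals):
    """Convert an m-row sparse matrix from COO (parallel row/col/val
    lists, in any order) to CSR (row-pointer, column, value arrays)."""
    # Phase 1, coordinate remapping: order nonzeros by (row, col).
    triples = sorted(zip(rows, cols, vals))
    # Phase 2, analysis: count nonzeros per row, then prefix-sum.
    rowptr = [0] * (m + 1)
    for r, _, _ in triples:
        rowptr[r + 1] += 1
    for r in range(m):
        rowptr[r + 1] += rowptr[r]
    # Phase 3, assembly: lay out the column and value arrays.
    col = [c for _, c, _ in triples]
    val = [v for _, _, v in triples]
    return rowptr, col, val
```

Row i's nonzeros then occupy positions `rowptr[i]` up to `rowptr[i+1]` of the column and value arrays.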

  • Albert, an intermediate smart-contract language for the Tezos blockchain
    arXiv.cs.PL Pub Date : 2020-01-07
    Bruno Bernardo; Raphaël Cauderlier; Basile Pesin; Julien Tesson

    Tezos is a smart-contract blockchain. Tezos smart contracts are written in a low-level stack-based language called Michelson. In this article we present Albert, an intermediate language for Tezos smart contracts which abstracts Michelson stacks as linearly typed records. We also describe its compiler to Michelson, written in Coq, that targets Mi-Cho-Coq, a formal specification of Michelson implemented in Coq.

  • Stochastic probabilistic programs
    arXiv.cs.PL Pub Date : 2020-01-08
    David Tolpin; Tomer Dobkin

    We introduce the notion of a stochastic probabilistic program and present a reference implementation of a probabilistic programming facility supporting specification of stochastic probabilistic programs and inference in them. Stochastic probabilistic programs allow straightforward specification and efficient inference in models with nuisance parameters, noise, and nondeterminism. We give several examples of stochastic probabilistic programs, and compare the programs with corresponding deterministic probabilistic programs in terms of model specification and inference. We conclude with discussion of open research topics and related work.

  • An Equational Theory for Weak Bisimulation via Generalized Parameterized Coinduction
    arXiv.cs.PL Pub Date : 2020-01-08
    Yannick Zakowski; Paul He; Chung-Kil Hur; Steve Zdancewic

    Coinductive reasoning about infinitary structures such as streams is widely applicable. However, practical frameworks for developing coinductive proofs and finding reasoning principles that help structure such proofs remain a challenge, especially in the context of machine-checked formalization. This paper gives a novel presentation of an equational theory for reasoning about structures up to weak bisimulation. The theory is both compositional, making it suitable for defining general-purpose lemmas, and also incremental, meaning that the bisimulation can be created interactively. To prove the theory's soundness, this paper also introduces generalized parameterized coinduction, which addresses expressivity problems of earlier works and provides a practical framework for coinductive reasoning. The paper presents the resulting equational theory for streams, but the technique applies to other structures too. All of the results in this paper have been proved in Coq, and the generalized parameterized coinduction framework is available as a Coq library.

  • Retentive Lenses
    arXiv.cs.PL Pub Date : 2020-01-07
    Zirun Zhu; Zhixuan Yang; Hsiang-Shang Ko; Zhenjiang Hu

    Based on Foster et al.'s lenses, various bidirectional programming languages and systems have been developed for helping the user to write correct data synchronisers. The two well-behavedness laws of lenses, namely Correctness and Hippocraticness, are usually adopted as the guarantee of these systems. While lenses are designed to retain information in the source when the view is modified, well-behavedness says very little about the retention of information: Hippocraticness only requires that the source be unchanged if the view is not modified, and nothing about information retention is guaranteed when the view is changed. To address the problem, we propose an extension of the original lenses, called retentive lenses, which satisfy a new Retentiveness law guaranteeing that if parts of the view are unchanged, then the corresponding parts of the source are retained as well. As a concrete example of retentive lenses, we present a domain-specific language for writing tree transformations; we prove that the pair of get and put functions generated from a program in our DSL forms a retentive lens. We demonstrate the practical use of retentive lenses and the DSL by presenting case studies on code refactoring, Pombrio and Krishnamurthi's resugaring, and XML synchronisation.
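
For reference, the two classic well-behavedness laws can be seen on a minimal lens (an illustrative toy, not yet retentive): the source is a (data, comment) pair and the view exposes only the data component.

```python
# get extracts the view from the source; put writes a view back,
# keeping the part of the source (the comment) the view cannot see.
get = lambda s: s[0]
put = lambda s, v: (v, s[1])

src = ("payload", "comment kept only in the source")

# Hippocraticness (GetPut): putting back an unmodified view is a no-op.
# Correctness (PutGet): after a put, get returns the new view.
```

Retentiveness strengthens this: even when the view changes, the source parts corresponding to the unchanged view parts must survive.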

  • Correctness of Automatic Differentiation via Diffeologies and Categorical Gluing
    arXiv.cs.PL Pub Date : 2020-01-07
    Mathieu Huot; Sam Staton; Matthijs Vákár

    We present semantic correctness proofs of Automatic Differentiation (AD). We consider a forward-mode AD method on a higher order language with algebraic data types, and we characterise it by identifying a universal property it satisfies. We describe a rich semantics for differentiable programming, based on diffeological spaces. We show that it interprets our language, and we phrase what it means for the AD method to be correct with respect to this semantics. We show that the universal property of AD gives rise to an elegant semantic proof of its correctness based on a gluing construction on diffeological spaces. We explain how this is, in essence, a logical relations argument. Finally, we sketch how the analysis extends to other AD methods by considering a continuation-based method.
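
The forward-mode method whose correctness the paper establishes semantically can be sketched operationally with dual numbers (an illustrative Python transliteration, not the paper's formal calculus):

```python
class Dual:
    """A dual number (value, derivative) propagated through a program;
    arithmetic on Duals computes the derivative alongside the value."""
    def __init__(self, val, dot):
        self.val, self.dot = val, dot
    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)
    def __mul__(self, other):
        # product rule: (fg)' = f'g + fg'
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

def derivative(f, x):
    """Forward-mode derivative of f at x: seed the dual part with 1."""
    return f(Dual(x, 1.0)).dot
```

For example, the derivative of x*x + x at x = 3 comes out as 2*3 + 1 = 7.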

  • A Provable Defense for Deep Residual Networks
    arXiv.cs.PL Pub Date : 2019-03-29
    Matthew Mirman; Gagandeep Singh; Martin Vechev

    We present a training system, which can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100. Our approach is based on differentiable abstract interpretation and introduces two novel concepts: (i) abstract layers for fine-tuning the precision and scalability of the abstraction, (ii) a flexible domain specific language (DSL) for describing training objectives that combine abstract and concrete losses with arbitrary specifications. Our training method is implemented in the DiffAI system.
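
The abstract-interpretation idea underlying such certified training can be sketched with the simplest domain, intervals (a toy box analysis, not DiffAI's richer domains): propagate sound per-output bounds through an affine layer and a ReLU.

```python
def affine_box(W, b, lo, hi):
    """Interval bounds of W @ x + b for x in the box [lo, hi].
    W is a list of rows; bounds are computed per output coordinate."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # a positive weight pulls the bound from the same endpoint,
        # a negative weight from the opposite one
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j])
                       for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j])
                       for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_box(lo, hi):
    """ReLU is monotone, so it maps boxes to boxes endpoint-wise."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]
```

For the single output x1 - x2 over the unit box, the affine bounds are [-1, 1], which the ReLU tightens to [0, 1].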

  • A Diagrammatic Calculus for Algebraic Effects
    arXiv.cs.PL Pub Date : 2020-01-05
    Ugo Dal Lago; Francesco Gavazzo

    We introduce a new, diagrammatic notation for representing the result of algebraic effectful computations. Our notation explicitly separates the effects produced during a computation from the possible values returned, thereby simplifying the extension of definitions and results on pure computations to an effectful setting. We give a formal foundation for our notation in terms of Lawvere theories and generic effects.

  • A Calculus for Modular Loop Acceleration
    arXiv.cs.PL Pub Date : 2020-01-06
    Florian Frohn

    Loop acceleration can be used to prove safety, reachability, runtime bounds, and (non-)termination for programs operating on integers. To this end, a variety of acceleration techniques have been proposed. However, all of them are monolithic: either they accelerate a loop successfully or they fail completely. In contrast, we present a calculus that allows for combining acceleration techniques in a modular way, and we show how to integrate many existing acceleration techniques into our calculus. Moreover, we propose two novel acceleration techniques that can be incorporated into our calculus seamlessly. An empirical evaluation demonstrates the applicability of our approach.
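
The simplest instance of the acceleration the abstract refers to (an illustrative sketch, not one of the paper's techniques): the counting loop `while x < n: x = x + c` with c > 0 is replaced by a closed form that computes its final state in one step.

```python
import math

def run_loop(x, n, c):
    """Reference semantics: execute the loop step by step."""
    while x < n:
        x += c
    return x

def accelerated(x, n, c):
    """Accelerated semantics: the loop runs ceil((n - x) / c) times
    (or zero times if the guard fails initially)."""
    k = max(0, math.ceil((n - x) / c))
    return x + k * c
```

Both agree on every initial state, which is exactly the correctness condition an acceleration technique must establish.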

  • Call-by-name Gradual Type Theory
    arXiv.cs.PL Pub Date : 2018-01-31
    Max S. New; Daniel R. Licata

    We present gradual type theory, a logic and type theory for call-by-name gradual typing. We define the central constructions of gradual typing (the dynamic type, type casts and type error) in a novel way, by universal properties relative to new judgments for gradual type and term dynamism, which were developed in blame calculi and used to state the "gradual guarantee" theorem of gradual typing. Combined with the ordinary extensionality ($\eta$) principles that type theory provides, we show that most of the standard operational behavior of casts is uniquely determined by the gradual guarantee. This provides a semantic justification for the definitions of casts, and shows that non-standard definitions of casts must violate these principles. Our type theory is the internal language of a certain class of preorder categories called equipments. We give a general construction of an equipment interpreting gradual type theory from a 2-category representing non-gradual types and programs, which is a semantic analogue of Findler and Felleisen's definitions of contracts, and use it to build some concrete domain-theoretic models of gradual typing.

  • NeuroVectorizer: End-to-End Vectorization with Deep Reinforcement Learning
    arXiv.cs.PL Pub Date : 2019-09-20
    Ameer Haj-Ali; Nesreen K. Ahmed; Ted Willke; Sophia Shao; Krste Asanovic; Ion Stoica

    One of the key challenges arising when compilers vectorize loops for today's SIMD-compatible architectures is to decide if vectorization or interleaving is beneficial. Then, the compiler has to determine how many instructions to pack together and how many loop iterations to interleave. Today's compilers use fixed-cost models based on heuristics to make vectorization decisions for loops. However, these models are unable to capture the data dependency, the computation graph, or the organization of instructions. Alternatively, software engineers often hand-write the vectorization factors of every loop. This, however, places a huge burden on them, since it requires prior experience and significantly increases the development time. In this work, we explore a novel approach for handling loop vectorization and propose an end-to-end solution using deep reinforcement learning (RL). We conjecture that deep RL can capture different instructions, dependencies, and data structures to enable learning a sophisticated model that can better predict the actual performance cost and determine the optimal vectorization factors. We develop an end-to-end framework, from code to vectorization, that integrates deep RL in the LLVM compiler. Our proposed framework takes benchmark codes as input and extracts the loop codes. These loop codes are then fed to a loop embedding generator that learns an embedding for these loops. Finally, the learned embeddings are used as input to a Deep RL agent, which determines the vectorization factors for all the loops. We further extend our framework to support multiple supervised learning methods. We evaluate our approaches against the currently used LLVM vectorizer and loop polyhedral optimization techniques. Our experiments show 1.29X-4.73X performance speedup compared to baseline and only 3% worse than the brute-force search on a wide range of benchmarks.

  • Verifying Cryptographic Security Implementations in C Using Automated Model Extraction
    arXiv.cs.PL Pub Date : 2020-01-03
    Mihhail Aizatulin

    This thesis presents an automated method for verifying security properties of protocol implementations written in the C language. We assume that each successful run of a protocol follows the same path through the C code, justified by the fact that typical security protocols have linear structure. We then perform symbolic execution of that path to extract a model expressed in a process calculus similar to the one used by the CryptoVerif tool. The symbolic execution uses a novel algorithm that allows symbolic variables to represent bitstrings of potentially unknown length to model incoming protocol messages. The extracted models do not use pointer-addressed memory, but they may still contain low-level details concerning message formats. In the next step we replace the message formatting expressions by abstract tupling and projection operators. The properties of these operators, such as the projection operation being the inverse of the tupling operation, are typically only satisfied with respect to inputs of correct types. Therefore we typecheck the model to ensure that all type-safety constraints are satisfied. The resulting model can then be verified with CryptoVerif to obtain a computational security result directly, or with ProVerif, to obtain a computational security result by invoking a computational soundness theorem. Our method achieves high automation and does not require user input beyond what is necessary to specify the properties of the cryptographic primitives and the desired security goals. We evaluated the method on several protocol implementations, totalling over 3000 lines of code. The biggest case study was a 1000-line implementation that was independently written without verification in mind. We found several flaws that were acknowledged and fixed by the authors, and were able to verify the fixed code without any further modifications to it.

  • Stateful Dataflow Multigraphs: A Data-Centric Model for Performance Portability on Heterogeneous Architectures
    arXiv.cs.PL Pub Date : 2019-02-27
    Tal Ben-Nun; Johannes de Fine Licht; Alexandros Nikolaos Ziogas; Timo Schneider; Torsten Hoefler

    The ubiquity of accelerators in high-performance computing has driven programming complexity beyond the skill-set of the average domain scientist. To maintain performance portability in the future, it is imperative to decouple architecture-specific programming paradigms from the underlying scientific computations. We present the Stateful DataFlow multiGraph (SDFG), a data-centric intermediate representation that enables separating program definition from its optimization. By combining fine-grained data dependencies with high-level control-flow, SDFGs are both expressive and amenable to program transformations, such as tiling and double-buffering. These transformations are applied to the SDFG in an interactive process, using extensible pattern matching, graph rewriting, and a graphical user interface. We demonstrate SDFGs on CPUs, GPUs, and FPGAs over various motifs --- from fundamental computational kernels to graph analytics. We show that SDFGs deliver competitive performance, allowing domain scientists to develop applications naturally and port them to approach peak hardware performance without modifying the original scientific code.

  • Pre-trained Contextual Embedding of Source Code
    arXiv.cs.PL Pub Date : 2019-12-21
    Aditya Kanade; Petros Maniatis; Gogul Balakrishnan; Kensen Shi

    The source code of a program not only serves as a formal description of an executable task, but it also serves to communicate developer intent in a human-readable form. To facilitate this, developers use meaningful identifier names and natural-language documentation. This makes it possible to successfully apply sequence-modeling approaches, shown to be effective in natural-language processing, to source code. A major advancement in natural-language understanding has been the use of pre-trained token embeddings; BERT and other works have further shown that pre-trained contextual embeddings can be extremely powerful and can be fine-tuned effectively for a variety of downstream supervised tasks. Inspired by these developments, we present the first attempt to replicate this success on source code. We curate a massive corpus of Python programs from GitHub to pre-train a BERT model, which we call Code Understanding BERT (CuBERT). We also pre-train Word2Vec embeddings on the same dataset. We create a benchmark of five classification tasks and compare fine-tuned CuBERT against sequence models trained with and without the Word2Vec embeddings. Our results show that CuBERT outperforms the baseline methods by a margin of 2.9-22%. We also show its superiority when fine-tuned with smaller datasets, and over fewer epochs. We further evaluate CuBERT's effectiveness on a joint classification, localization and repair task involving prediction of two pointers.

  • A Unified Iteration Space Transformation Framework for Sparse and Dense Tensor Algebra
    arXiv.cs.PL Pub Date : 2019-12-28
    Ryan Senanayake; Fredrik Kjolstad; Changwan Hong; Shoaib Kamil; Saman Amarasinghe

    We address the problem of optimizing mixed sparse and dense tensor algebra in a compiler. We show that standard loop transformations, such as strip-mining, tiling, collapsing, parallelization and vectorization, can be applied to irregular loops over sparse iteration spaces. We also show how these transformations can be applied to the contiguous value arrays of sparse tensor data structures, which we call their position space, to unlock load-balanced tiling and parallelism. We have prototyped these concepts in the open-source TACO system, where they are exposed as a scheduling API similar to the Halide domain-specific language for dense computations. Using this scheduling API, we show how to optimize mixed sparse/dense tensor algebra expressions, how to generate load-balanced code by scheduling sparse tensor algebra in position space, and how to generate sparse tensor algebra GPU code. Our evaluation shows that our transformations let us generate good code that is competitive with many hand-optimized implementations from the literature.
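The "position space" idea from the abstract can be sketched in plain Python (the names here are ours, not TACO's API): in CSR-style storage, coordinate-space iteration walks rows of very different lengths, while position-space iteration splits the contiguous nonzero array itself into near-equal chunks, which is what enables load-balanced tiling.

```python
# CSR-style storage: pos[i]..pos[i+1] delimit the nonzeros of row i.
pos  = [0, 5, 5, 6, 9]               # row 0 has 5 nonzeros, row 1 has 0, ...
crd  = [0, 2, 3, 5, 7, 1, 0, 4, 6]   # column coordinates of the nonzeros
vals = [1, 1, 1, 1, 1, 2, 3, 3, 3]   # the nonzero values

def row_chunks():
    # Coordinate space: one chunk per row -> sizes 5, 0, 1, 3 (unbalanced).
    return [vals[pos[i]:pos[i + 1]] for i in range(len(pos) - 1)]

def position_chunks(k):
    # Position space: split the nonzero array itself into k near-equal chunks.
    n = len(vals)
    step = -(-n // k)  # ceiling division
    return [vals[s:s + step] for s in range(0, n, step)]

assert [len(c) for c in row_chunks()] == [5, 0, 1, 3]
assert [len(c) for c in position_chunks(3)] == [3, 3, 3]
```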

  • The Prolog debugger and declarative programming
    arXiv.cs.PL Pub Date : 2019-06-11
    Włodzimierz Drabent

    Logic programming is a declarative programming paradigm. The programming language Prolog makes logic programming possible, at least to a substantial extent. However, the Prolog debugger works solely in terms of the operational semantics, and is thus incompatible with declarative programming. This report discusses this issue and investigates how the debugger may nonetheless be used from the declarative point of view. The results are not encouraging. Additionally, Byrd's box model, used by the debugger, is explained in terms of SLD-resolution.
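Byrd's box model, which the report explains via SLD-resolution, views each goal as a box with four ports: Call on entry, Exit on each solution, Redo when backtracking requests another solution, and Fail on exhaustion. A minimal sketch (our simulation, not a Prolog implementation) traces a goal backed by a Python generator:

```python
# Simulating the four-port box model for a goal with a stream of solutions.
trace = []

def traced(name, solutions):
    trace.append(("Call", name))
    first = True
    for s in solutions:
        if not first:
            trace.append(("Redo", name))
        first = False
        trace.append(("Exit", name))
        yield s
    if not first:
        trace.append(("Redo", name))
    trace.append(("Fail", name))

# member(X, [1,2]) has two solutions, then fails on further backtracking.
list(traced("member(X,[1,2])", iter([1, 2])))
assert [port for port, _ in trace] == \
    ["Call", "Exit", "Redo", "Exit", "Redo", "Fail"]
```

The trace is purely operational, which is exactly the mismatch the report examines: a declarative programmer reasons about which solutions exist, not about the order of Call/Redo events.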

  • IR2Vec: A Flow Analysis based Scalable Infrastructure for Program Encodings
    arXiv.cs.PL Pub Date : 2019-09-13
    Venkata Keerthy S; Rohit Aggarwal; Shalini Jain; Maunendra Sankar Desarkar; Ramakrishna Upadrasta; Y. N. Srikant

    We propose IR2Vec, a concise and scalable encoding infrastructure to represent programs as distributed embeddings in continuous space. These distributed embeddings are obtained by combining representation learning methods with data- and control-flow information to capture both the syntax and the semantics of the input programs. Our embeddings are obtained from the Intermediate Representation (IR) of the source code, and are both language- and machine-independent. The entities of the IR are modelled as relationships, and their representations are learned to form a seed embedding vocabulary. This vocabulary is used along with the flow analysis information to form a hierarchy of encodings based on various levels of program abstraction. We show the effectiveness of our methodology on a software engineering task (program classification) as well as on optimization tasks (heterogeneous device mapping and thread coarsening). The embeddings generated by IR2Vec outperform the existing methods on all three tasks, even when using simple machine learning models. As we follow an agglomerative method of forming encodings at various levels from the seed embedding vocabulary, our encoding is naturally more scalable and less data-hungry than the other methods.
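The agglomerative hierarchy in the abstract can be sketched with made-up two-dimensional vectors (these are illustrative values, not IR2Vec's learned vocabulary): entity embeddings from a seed vocabulary are combined into instruction-level vectors, which are in turn combined into a function-level vector.

```python
# Hypothetical seed embedding vocabulary (2-dimensional, made up).
seed = {
    "add": [1.0, 0.0], "load": [0.0, 1.0], "i32": [0.5, 0.5],
}

def vec_sum(vs):
    return [sum(xs) for xs in zip(*vs)]

def instruction_vec(entities):
    # An instruction's vector combines its opcode/type/operand entities.
    return vec_sum([seed[e] for e in entities])

def function_vec(instructions):
    # The function-level encoding aggregates its instruction encodings.
    return vec_sum([instruction_vec(i) for i in instructions])

f = [["load", "i32"], ["add", "i32"]]
assert function_vec(f) == [2.0, 2.0]
```

Because only the seed vocabulary is learned, higher levels of the hierarchy are formed compositionally rather than trained per level, which is the source of the scalability claim.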

  • On correctness of an n queens program
    arXiv.cs.PL Pub Date : 2019-09-16
    Włodzimierz Drabent

    Thom Frühwirth presented a short, elegant and efficient Prolog program for the n queens problem. However, the program may be seen as rather tricky, and one may not be convinced of its correctness. This paper explains the program in a declarative way, and provides a proof of its correctness and completeness.
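The declarative reading of n queens can be stated directly (our Python rendering of the standard formulation, not Frühwirth's Prolog code): a solution is a permutation of the columns 1..n, one queen per row, such that no two queens share a diagonal.

```python
from itertools import permutations

def safe(qs):
    # Queens in rows i < j share a diagonal exactly when the column
    # distance equals the row distance: |qs[i] - qs[j]| == j - i.
    return all(abs(qs[i] - qs[j]) != j - i
               for j in range(len(qs)) for i in range(j))

def queens(n):
    # A permutation already guarantees distinct rows and columns.
    return [qs for qs in permutations(range(1, n + 1)) if safe(qs)]

assert len(queens(6)) == 4    # the 6-queens problem has 4 solutions
assert len(queens(8)) == 92   # the classic 8-queens count
```

A correctness proof for an efficient program then amounts to showing that its answers coincide with this specification, which is the kind of argument the paper carries out declaratively.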

  • Introduction to Rank-polymorphic Programming in Remora (Draft)
    arXiv.cs.PL Pub Date : 2019-12-31
    Olin Shivers; Justin Slepak; Panagiotis Manolios

    Remora is a higher-order, rank-polymorphic array-processing programming language, in the same general class of languages as APL and J. It is intended for writing programs to be executed on parallel hardware. We provide an example-driven introduction to the language, and its general computational model, originally developed by Iverson for APL. We begin with Dynamic Remora, a variant of the language with a dynamic type system (as in Scheme or Lisp), to introduce the fundamental computational mechanisms of the language, then shift to Explicitly Typed Remora, a variant of the language with a static, dependent type system that permits the shape of the arrays being computed to be captured at compile time. This article can be considered an introduction to the general topic of the rank-polymorphic array-processing computational model, above and beyond the specific details of the Remora language. We do not address the details of type inference in Remora, that is, the assignment of explicit types to programs written without such annotations; this is ongoing research.
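The core of the rank-polymorphic model can be sketched without Remora syntax: a scalar function is implicitly lifted over arrays of any rank, as in Iverson's scalar extension. A stdlib-only sketch, with nested lists standing in for arrays:

```python
# Lifting a scalar function over arguments of arbitrary rank.
def lift(f):
    def g(x):
        if isinstance(x, list):      # non-scalar: map over the frame
            return [g(e) for e in x]
        return f(x)                  # scalar: apply the base function
    return g

inc = lift(lambda x: x + 1)
assert inc(3) == 4                                  # rank 0
assert inc([1, 2, 3]) == [2, 3, 4]                  # rank 1
assert inc([[1, 2], [3, 4]]) == [[2, 3], [4, 5]]    # rank 2
```

Remora's static variant goes further than this dynamic sketch: its dependent type system captures array shapes at compile time, so frame/cell decomposition is checked rather than discovered at run time.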

  • LLOV: A Fast Static Data-Race Checker for OpenMP Programs
    arXiv.cs.PL Pub Date : 2019-12-27
    Utpal Bora; Santanu Das; Pankaj Kureja; Saurabh Joshi; Ramakrishna Upadrasta; Sanjay Rajopadhye

    In the era of exascale computing, writing efficient parallel programs is indispensable, and at the same time, writing sound parallel programs is highly difficult. While parallel programming is easier with frameworks such as OpenMP, the possibility of data races in these programs still persists. In this paper, we propose a fast, lightweight, language-agnostic, static data race checker for OpenMP programs based on the LLVM compiler framework. We compare our tool with other state-of-the-art data race checkers on a variety of well-established benchmarks. We show that the precision, accuracy, and F1 score of our tool are comparable to those of other checkers, while being orders of magnitude faster. To the best of our knowledge, ours is the only tool among the state-of-the-art data race checkers that can verify a FORTRAN program to be data-race free.
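The property such a checker establishes can be illustrated with a toy sketch (this is not LLOV's analysis, just the core condition): in a parallel loop, two different iterations must not access the same array index with at least one of the accesses being a write.

```python
# Toy race condition over per-iteration read/write index sets.
def has_race(n, writes, reads):
    # writes(i)/reads(i) give the sets of array indices iteration i touches.
    for i in range(n):
        for j in range(i + 1, n):
            if writes(i) & (writes(j) | reads(j)) or writes(j) & reads(i):
                return True
    return False

# a[i] = a[i] + 1   -> each iteration touches only its own index: no race.
assert not has_race(4, lambda i: {i}, lambda i: {i})
# a[i] = a[i-1] + 1 -> iteration i reads what iteration i-1 writes: a race.
assert has_race(4, lambda i: {i}, lambda i: {i - 1})
```

A static checker must decide this conservatively from index expressions rather than by enumerating iterations, which is where the precision/speed trade-offs the paper measures come from.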

Contents have been reproduced by permission of the publishers.