Current journal: "Software" journals
  • ComDA: A common software for nonlinear and Non-Gaussian Land Data Assimilation
    Environ. Model. Softw. (IF 4.552) Pub Date : 2020-01-23
    Feng Liu; Liangxu Wang; Xin Li; Chunlin Huang

    Common software for land data assimilation is urgently needed to implement a wide variety of assimilation applications; however, a fast, easy-to-use, and multidisciplinary, application-oriented assimilation platform has not yet been achieved. We therefore developed the Common software for Nonlinear and non-Gaussian Land Data Assimilation (ComDA). ComDA integrates multiple algorithms (including diverse Kalman and particle filters), models and observation operators (e.g., the Common Land Model (CoLM) and the Advanced Integral Equation Model (AIEM)), and provides general interfaces for additional operators. Using mixed-language programming and parallel computing technologies (Open Multi-Processing (OpenMP), Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA)), ComDA can assimilate various land surface variables and remote sensing observations. Synthetic and real-world tests on high-performance computing platforms indicate that ComDA meets the standard of common land data assimilation software, offering parallel computation, multiple operators and assimilation algorithms, and compatibility with many models. ComDA can be applied to multidisciplinary data assimilation.
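
    ComDA's algorithm suite centers on ensemble filtering. As a rough illustration of the kind of analysis step such a platform parallelizes, here is a minimal ensemble Kalman filter update in Python; it assumes a linear observation operator and Gaussian observation error, and none of the names reflect ComDA's actual API.

        import numpy as np

        def enkf_update(ensemble, H, obs, obs_cov, rng=np.random.default_rng(0)):
            """One ensemble Kalman filter analysis step.
            ensemble: (n_members, n_state); H: (n_obs, n_state) observation
            operator; obs: (n_obs,); obs_cov: (n_obs, n_obs) error covariance.
            """
            n_members = ensemble.shape[0]
            P = np.cov(ensemble, rowvar=False)   # forecast-error covariance
            S = H @ P @ H.T + obs_cov            # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
            # Perturbed observations: each member assimilates a noisy copy.
            noise = rng.multivariate_normal(np.zeros(len(obs)), obs_cov, n_members)
            return ensemble + (obs + noise - ensemble @ H.T) @ K.T

        # Tiny demo: 20-member ensemble, 2-state system, 1 observation.
        ens = np.random.default_rng(1).normal(0.0, 1.0, (20, 2))
        H = np.array([[1.0, 0.0]])
        print(enkf_update(ens, H, np.array([0.5]), np.array([[0.1]])).mean(axis=0))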

    Updated: 2020-01-23
  • Review of Soil Phosphorus Routines in Ecosystem Models
    Environ. Model. Softw. (IF 4.552) Pub Date : 2020-01-21
    J. Pferdmenges; L. Breuer; S. Julich; P. Kraft

    We compiled information on 26 numerical models that consider the terrestrial phosphorus (P) cycle and compared them regarding process description, model structure, and applicability to different ecosystems and scales. We address the differences in their hydrological components and soil P routines, the implementation of a preferential flow component in soils, and whether model performance has been tested for P transport. The comparison revealed that none of the models offers the flexibility for a realistic representation of P transport through different ecosystems and across diverging scales. The transport of P through macroporous soils (e.g., forest soils) is especially poorly represented: five models represent macropores accurately, but all of them lack a validated P routine. We therefore present a model blueprint for incorporating a physically realistic representation of macropore flow and particulate P transport in forested systems.

    Updated: 2020-01-22
  • Boundary Value Exploration for Software Analysis
    arXiv.cs.SE Pub Date : 2020-01-18
    Felix Dobslaw; Francisco Gomes de Oliveira Neto; Robert Feldt

    For software to be reliable and resilient, it is widely accepted that tests must be created and maintained alongside the software itself. One safeguard from vulnerabilities and failures in code is to ensure correct behavior on the boundaries between sub-domains of the input space. So-called boundary value analysis (BVA) and boundary value testing (BVT) techniques aim to exercise those boundaries and increase test effectiveness. However, the concepts of BVA and BVT themselves are not clearly defined and it is not clear how to identify relevant sub-domains, and thus the boundaries delineating them, given a specification. This has limited adoption and hindered automation. We clarify BVA and BVT and introduce Boundary Value Exploration (BVE) to describe techniques that support them by helping to detect and identify boundary inputs. Additionally, we propose two concrete BVE techniques based on information-theoretic distance functions: (i) an algorithm for boundary detection and (ii) the usage of software visualization to explore the behavior of the software under test and identify its boundary behavior. As an initial evaluation, we apply these techniques on a much used and well-tested date handling library. Our results reveal questionable behavior at boundaries highlighted by our techniques. In conclusion, we argue that the boundary value exploration that our techniques enable is a step towards automated boundary value analysis and testing which can foster their wider use and improve test effectiveness and efficiency.
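
    To make the detection idea concrete, the sketch below flags candidate boundaries where a single step in the input space produces a disproportionately large distance between outputs. It is a hedged illustration: the crude string distance and the date example stand in for the paper's information-theoretic distance functions.

        from datetime import date, timedelta

        def out_dist(a: str, b: str) -> int:
            # Crude output distance: positionwise mismatches plus length gap.
            return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

        def boundary_candidates(program, inputs, threshold=3):
            """Flag neighboring inputs whose outputs differ sharply; a
            sub-domain boundary likely lies between each flagged pair."""
            return [(x, y) for x, y in zip(inputs, inputs[1:])
                    if out_dist(program(x), program(y)) >= threshold]

        # A date-handling example: the rendering jumps at the month
        # boundary but changes little between ordinary adjacent days.
        days = [date(2020, 1, 28) + timedelta(days=i) for i in range(6)]
        print(boundary_candidates(lambda d: d.strftime("%B %j"), days))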

    Updated: 2020-01-22
  • An Interdisciplinary Guideline for the Production of Videos and Vision Videos by Software Professionals
    arXiv.cs.SE Pub Date : 2020-01-18
    Oliver Karras; Kurt Schneider

    Background and Motivation: In recent years, the application of videos in requirements engineering has been discussed, and its potential contributions are promising. Over the last 35 years, several researchers have proposed approaches for applying videos in requirements engineering due to their communication richness and effectiveness. However, these approaches mainly use videos but omit the details of how to produce them. This lack of guidance is one crucial reason why videos are not an established documentation option for successful requirements communication and thus shared understanding. Software professionals are not directors, and thus they do not necessarily know what constitutes a good video, either in general or for an existing approach. This lack of knowledge and skills on how to produce and use videos for visual communication impedes the application of videos by software professionals in requirements engineering. How to Create Effective Videos and Vision Videos?: This technical report addresses this lack of knowledge and skills. We provide two guidelines that can be used as checklists to avoid frequent flaws in the production and use of videos and vision videos, respectively. Software professionals without special training should be able to follow these guidelines to acquire the basic capabilities to produce (vision) videos that are accepted by their stakeholders. These guidelines represent a core set of those capabilities in the preproduction, shooting, postproduction, and viewing of (vision) videos. We do not strive for perfection in any of these capabilities, e.g., technical handling of video equipment, storytelling, or video editing. Instead, these guidelines support all steps of the (vision) video production and use process in a balanced way.

    Updated: 2020-01-22
  • How do Data Science Workers Collaborate? Roles, Workflows, and Tools
    arXiv.cs.SE Pub Date : 2020-01-18
    Amy X. Zhang; Michael Muller; Dakuo Wang

    Today, the prominence of data science within organizations has given rise to teams of data science workers collaborating on extracting insights from data, as opposed to individual data scientists working alone. However, we still lack a deep understanding of how data science workers collaborate in practice. In this work, we conducted an online survey with 183 participants who work in various aspects of data science. We focused on their reported interactions with each other (e.g., managers with engineers) and with different tools (e.g., Jupyter Notebook). We found that data science teams are extremely collaborative and work with a variety of stakeholders and tools during the six common steps of a data science workflow (e.g., clean data and train model). We also found that the collaborative practices workers employ, such as documentation, vary according to the kinds of tools they use. Based on these findings, we discuss design implications for supporting data science team collaborations and future research directions.

    Updated: 2020-01-22
  • Teaching Software Engineering for AI-Enabled Systems
    arXiv.cs.SE Pub Date : 2020-01-18
    Christian Kästner; Eunsuk Kang

    Software engineers have significant expertise to offer when building intelligent systems, drawing on decades of experience and methods for building systems that are scalable, responsive, and robust, even when built on unreliable components. Systems with artificial-intelligence or machine-learning (ML) components raise new challenges and require careful engineering. We designed a new course to teach software-engineering skills to students with a background in ML. We specifically go beyond traditional ML courses that teach modeling techniques under artificial conditions and focus, in lectures and assignments, on realism with large and changing datasets, robust and evolvable infrastructure, and purposeful requirements engineering that also considers ethics and fairness. We describe the course and our infrastructure and share our experience and all material from teaching the course for the first time.

    Updated: 2020-01-22
  • Synergizing Domain Expertise with Self-Awareness in Software Systems: A Patternized Architecture Guideline
    arXiv.cs.SE Pub Date : 2020-01-20
    Tao Chen; Rami Bahsoon; Xin Yao

    Architectural patterns provide reusable architectural solutions for commonly recurring problems and can assist in designing software systems. In this regard, self-awareness architectural patterns are specialized patterns that leverage good engineering practices and experience to help in designing self-awareness and self-adaptation in a software system. However, domain knowledge and engineers' expertise built over time are not explicitly linked to these patterns and the self-aware process. This linkage is important, as it can enrich the design patterns of these systems, which consequently leads to more effective and efficient self-aware and self-adaptive behaviours. This paper is an introductory work that highlights the importance of synergizing domain expertise with self-awareness in software systems, relying on well-defined underlying approaches. In particular, we present a holistic framework that classifies widely known representations used to obtain and maintain domain expertise, documenting their nature and the specific rules that permit different levels of synergy with self-awareness. Drawing on this framework, we describe mechanisms that can enrich existing patterns with engineers' expertise and knowledge of the domain. This allows us to codify an intuitive step-by-step methodology that guides engineers in making design decisions when synergizing domain expertise with self-awareness and reveals their importance, in an attempt to keep 'engineers-in-the-loop'. Through three case studies, we demonstrate how the enriched patterns and the proposed framework and methodology can be applied in different domains, within which we quantitatively compare the actual benefits of incorporating engineers' expertise into self-awareness at alternative levels of synergy.

    Updated: 2020-01-22
  • Checking Smart Contracts with Structural Code Embedding
    arXiv.cs.SE Pub Date : 2020-01-20
    Zhipeng Gao; Lingxiao Jiang; Xin Xia; David Lo; John Grundy

    Smart contracts have been increasingly used together with blockchains to automate financial and business transactions. However, many bugs and vulnerabilities have been identified in many contracts, which raises serious concerns about smart contract security, not to mention that the blockchain systems on which the smart contracts are built can themselves be buggy. Thus, there is a significant need to better maintain smart contract code and ensure its high reliability. In this paper, we propose an automated approach to learning characteristics of smart contracts in Solidity, which is useful for clone detection, bug detection, and contract validation. Our new approach is based on word embeddings and vector space comparison. We parse smart contract code into word streams with code structural information, convert code elements (e.g., statements, functions) into numerical vectors that are supposed to encode the code syntax and semantics, and compare the similarities among the vectors encoding code and known bugs to identify potential issues. We have implemented the approach in a prototype named SmartEmbed and evaluated it with more than 22,000 smart contracts collected from the Ethereum blockchain. Results show that our tool can effectively identify many repetitive instances of Solidity code, where the clone ratio is around 90%. Code clones such as type-III or even type-IV semantic clones can also be detected accurately. Our tool can identify more than 1,000 clone-related bugs based on our bug databases efficiently and accurately, and it can help to efficiently validate any given smart contract against a known set of bugs, which can improve users' confidence in the reliability of the contract. The anonymous replication package can be accessed at: https://drive.google.com/file/d/1kauLT3y2IiHPkUlVx4FSTda-dVAyL4za/view?usp=sharing
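
    The comparison step in such an approach boils down to similarity between code vectors. Below is a minimal sketch, with a toy bag-of-tokens embedding standing in for SmartEmbed's learned structural embeddings:

        import math
        from collections import Counter

        def embed(code: str) -> Counter:
            # Toy embedding: token counts. SmartEmbed instead encodes
            # parsed word streams that preserve code structure.
            return Counter(code.replace("(", " ").replace(")", " ").split())

        def cosine(u: Counter, v: Counter) -> float:
            dot = sum(u[t] * v[t] for t in u)
            norm = math.sqrt(sum(c * c for c in u.values()))
            norm *= math.sqrt(sum(c * c for c in v.values()))
            return dot / norm if norm else 0.0

        a = "function transfer(address to, uint amount) { balances[to] += amount; }"
        b = "function transfer(address to, uint amt) { balances[to] += amt; }"
        # Similarity above a tuned threshold marks a clone, or a match
        # against the vector of a known-buggy fragment.
        print(round(cosine(embed(a), embed(b)), 3))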

    Updated: 2020-01-22
  • In-The-Field Monitoring of Functional Calls: Is It Feasible?
    arXiv.cs.SE Pub Date : 2020-01-20
    Oscar Cornejo; Daniela Briola; Daniela Micucci; Leonardo Mariani

    Collecting data about the sequences of function calls executed by an application while running in the field can be useful for a number of purposes, including failure reproduction, profiling, and debugging. Unfortunately, collecting data from the field may introduce annoying slowdowns that negatively affect the quality of the user experience. So far, the impact of monitoring has been studied mainly in terms of the overhead it may introduce in the monitored applications, rather than whether that overhead is actually perceptible to users. In this paper we take a different perspective, studying to what extent collecting data about sequences of function calls may impact the quality of the user experience by producing recognizable effects. Interestingly, we found that, depending on the nature of the executed operation and its execution context, users may tolerate a non-trivial overhead. This information can potentially be exploited to collect significant amounts of data without annoying users.

    Updated: 2020-01-22
  • AutoMATES: Automated Model Assembly from Text, Equations, and Software
    arXiv.cs.SE Pub Date : 2020-01-21
    Adarsh Pyarelal; Marco A. Valenzuela-Escarcega; Rebecca Sharp; Paul D. Hein; Jon Stephens; Pratik Bhandari; HeuiChan Lim; Saumya Debray; Clayton T. Morrison

    Models of complicated systems can be represented in different ways - in scientific papers, they are represented using natural language text as well as equations. But to be of real use, they must also be implemented as software, thus making code a third form of representing models. We introduce the AutoMATES project, which aims to build semantically-rich unified representations of models from scientific code and publications to facilitate the integration of computational models from different domains and allow for modeling large, complicated systems that span multiple domains and levels of abstraction.

    Updated: 2020-01-22
  • Towards Semantic Clone Detection via Probabilistic Software Modeling
    arXiv.cs.SE Pub Date : 2020-01-21
    Hannes Thaller; Lukas Linsbauer; Alexander Egyed

    Semantic clones are program components with similar behavior, but different textual representation. Semantic similarity is hard to detect, and semantic clone detection is still an open issue. We present semantic clone detection via Probabilistic Software Modeling (PSM) as a robust method for detecting semantically equivalent methods. PSM inspects the structure and runtime behavior of a program and synthesizes a network of Probabilistic Models (PMs). Each PM in the network represents a method in the program and is capable of generating and evaluating runtime events. We leverage these capabilities to accurately find semantic clones. Results show that the approach can detect semantic clones in the complete absence of syntactic similarity with high precision and low error rates.
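
    A drastically simplified rendering of the idea follows; PSM fits generative probabilistic models from structure and runtime events, whereas this sketch merely probes input/output behavior directly. Two methods are semantic clone candidates when no sampled input distinguishes them:

        import random

        def clone_candidates(methods, trials=200, seed=7):
            """methods: name -> unary function. Returns pairs that agree on
            all sampled inputs, i.e. are behaviorally indistinguishable."""
            rng = random.Random(seed)
            samples = [rng.randint(-1000, 1000) for _ in range(trials)]
            names = sorted(methods)
            return [(a, b)
                    for i, a in enumerate(names) for b in names[i + 1:]
                    if all(methods[a](x) == methods[b](x) for x in samples)]

        # Syntactically different, semantically equal implementations:
        print(clone_candidates({
            "double_shift": lambda x: x << 1,
            "double_mul":   lambda x: 2 * x,
            "square":       lambda x: x * x,
        }))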

    Updated: 2020-01-22
  • Towards Fault Localization via Probabilistic Software Modeling
    arXiv.cs.SE Pub Date : 2020-01-21
    Hannes Thaller; Lukas Linsbauer; Alexander Egyed; Stefan Fischer

    Software testing helps developers to identify bugs. However, awareness of bugs is only the first step. Finding and correcting the faulty program components is equally hard and essential for high-quality software. Fault localization automatically pinpoints the location of an existing bug in a program. It is a hard problem, and existing methods are not yet precise enough for widespread industrial adoption. We propose fault localization via Probabilistic Software Modeling (PSM). PSM analyzes the structure and behavior of a program and synthesizes a network of Probabilistic Models (PMs). Each PM models a method with its inputs and outputs and is capable of evaluating the likelihood of runtime data. We use this likelihood evaluation to find fault locations and their impact on dependent code elements. Results indicate that PSM is a robust framework for accurate fault localization.
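
    As a hedged sketch of likelihood-based ranking, a per-method normal model stands in below for PSM's much richer networks of probabilistic models: fit each method's values observed in passing runs, then rank methods by how improbable their failing-run values are.

        import statistics
        from math import erf, sqrt

        def rank_suspicious(passing_obs, failing_obs):
            """passing_obs: method -> values from passing runs;
            failing_obs: method -> value from the failing run.
            Returns methods ordered from most to least suspicious."""
            tail = {}
            for m, values in passing_obs.items():
                mu = statistics.mean(values)
                sigma = statistics.stdev(values) or 1e-9
                z = abs(failing_obs[m] - mu) / sigma
                tail[m] = 1 - erf(z / sqrt(2))  # two-sided normal tail
            return sorted(tail, key=tail.get)   # low likelihood first

        passing = {"parse": [10, 11, 9, 10], "score": [0.5, 0.6, 0.4, 0.5]}
        failing = {"parse": 10, "score": 9.7}
        print(rank_suspicious(passing, failing))  # ['score', 'parse']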

    Updated: 2020-01-22
  • An IoT Platform-as-a-Service for NFV-Based Hybrid Cloud/Fog Systems
    arXiv.cs.SE Pub Date : 2020-01-17
    Carla Mouradian; Fereshteh Ebrahimnezhad; Yassine Jebbar; Jasmeen Kaur Ahluwalia; Seyedeh Negar Afrasiabi; Roch H. Glitho; Ashok Moghe

    Cloud computing, despite its inherent advantages (e.g., resource efficiency), still faces several challenges. The wide-area network used to connect the cloud to end-users can cause high latency, which may not be tolerable for some applications, especially Internet of Things (IoT) applications. Fog computing can reduce this latency by extending the traditional cloud architecture to the edge of the network and by enabling the deployment of some application components on fog nodes. Application providers use Platform-as-a-Service (PaaS) to provision (i.e., develop, deploy, manage, and orchestrate) applications in the cloud. However, existing PaaS solutions (including IoT PaaS) usually focus on the cloud and do not enable provisioning of applications with components spanning cloud and fog. Provisioning such applications requires novel functions, such as application graph generation, that are absent from existing PaaS. Furthermore, several functions offered by existing PaaS (e.g., publication/discovery) need to be significantly extended in order to fit a hybrid cloud/fog environment. In this paper, we propose a novel PaaS architecture for hybrid cloud/fog systems. It is IoT use-case-driven, and its applications' components are implemented as Virtual Network Functions (VNFs) with execution sequences modeled as graphs with sub-structures such as selections and loops. It automates the provisioning of applications with components spanning cloud and fog. In addition, it enables the discovery of existing cloud and fog nodes and generates application graphs. A proof of concept is built based on the open-source Cloudify platform. Feasibility is demonstrated by evaluating its performance when PaaS modules and application components are placed in clouds and fogs in different geographical locations.

    Updated: 2020-01-22
  • Engineering AI Systems: A Research Agenda
    arXiv.cs.SE Pub Date : 2020-01-16
    Jan Bosch; Ivica Crnkovic; Helena Holmström Olsson

    Deploying machine-learning, and in particular deep-learning, (ML/DL) solutions in industry-strength, production-quality contexts proves to be challenging. It requires a structured engineering approach to constructing and evolving systems that contain ML/DL components. In this paper, we provide a conceptualization of the typical evolution patterns that companies experience when employing ML/DL, as well as a framework for integrating ML/DL components in systems consisting of multiple types of components. In addition, we provide an overview of the engineering challenges surrounding AI/ML/DL solutions and, based on that, we provide a research agenda and an overview of open items that need to be addressed by the research community at large.

    Updated: 2020-01-22
  • The effects of change decomposition on code review -- a controlled experiment
    arXiv.cs.SE Pub Date : 2018-05-28
    Marco di Biase; Magiel Bruntink; Arie van Deursen; Alberto Bacchelli

    Background: Code review is a cognitively demanding and time-consuming process. Previous qualitative studies hinted that decomposing change sets into multiple yet internally coherent ones would improve the reviewing process. So far, the literature has provided no quantitative analysis of this hypothesis. Aims: (1) Quantitatively measure the effects of change decomposition on the outcome of code review (in terms of number of found defects, wrongly reported issues, suggested improvements, time, and understanding); (2) Qualitatively analyze how subjects approach the review and navigate the code, building knowledge and addressing existing issues, in large vs. decomposed changes. Method: A controlled experiment using the pull-based development model involving 28 software developers, both professionals and graduate students. Results: Change decomposition leads to fewer wrongly reported issues and influences how subjects approach and conduct the review activity (by increasing context-seeking), yet it impacts neither understanding the change rationale nor the number of found defects. Conclusions: Change decomposition not only reduces the noise for subsequent data analyses but also significantly supports the tasks of the developers in charge of reviewing the changes. As such, commits belonging to different concepts should be separated, and this should be adopted as a best practice in software engineering.

    Updated: 2020-01-22
  • Practical relevance of software engineering research: Synthesizing the community's voice
    arXiv.cs.SE Pub Date : 2018-12-04
    Vahid Garousi; Markus Borg; Markku Oivo

    Software engineering (SE) research should be relevant to industrial practice. There have been regular discussions in the SE community on this issue since the 1980s, led by pioneers such as Robert Glass. As we recently passed the milestone of "50 years of software engineering", some positive efforts have been made in this direction, e.g., establishing "industrial" tracks in several SE conferences. However, many researchers and practitioners believe that we, as a community, are still struggling with research relevance and utility. The goal of this paper is to synthesize the evidence and experience-based opinions shared on this topic so far in the SE community, and to encourage the community to further reflect and act on research relevance. For this purpose, we conducted a Multi-vocal Literature Review (MLR) of 54 systematically selected sources (papers and non-peer-reviewed articles). Rather than relying on the individual opinions on research relevance mentioned in each source, the MLR aims to synthesize and provide a "holistic" view of the topic. The highlights of our MLR findings are as follows. The top three root causes of low relevance discussed in the community are: (1) researchers having simplistic views (or wrong assumptions) about SE in practice; (2) lack of connection with industry; and (3) wrong identification of research problems. The top three suggestions for improving research relevance are: (1) using appropriate research approaches such as action research; (2) choosing relevant research problems; and (3) collaborating with industry. By synthesizing all the discussions on this important topic so far, this paper aims to encourage further discussion and action in the community to increase our collective efforts to improve research relevance.

    Updated: 2020-01-22
  • LEOPARD: Identifying Vulnerable Code for Vulnerability Assessment through Program Metrics
    arXiv.cs.SE Pub Date : 2019-01-31
    Xiaoning Du; Bihuan Chen; Yuekang Li; Jianmin Guo; Yaqin Zhou; Yang Liu; Yu Jiang

    Identifying potentially vulnerable locations in a code base is a critical pre-step for effective vulnerability assessment; i.e., it can greatly help security experts focus their time and effort where they are needed most. Both metric-based and pattern-based methods have been presented for identifying vulnerable code. The former rely on machine learning and tend to work poorly due to the severe imbalance between non-vulnerable and vulnerable code, or to the lack of features that characterize vulnerabilities. The latter need prior knowledge of known vulnerabilities and can only identify similar, not new, types of vulnerabilities. In this paper, we propose and implement a generic, lightweight, and extensible framework, LEOPARD, to identify potentially vulnerable functions through program metrics. LEOPARD requires no prior knowledge of known vulnerabilities. It proceeds in two steps, combining two sets of systematically derived metrics. First, it uses complexity metrics to group the functions in a target application into a set of bins. Then, it uses vulnerability metrics to rank the functions in each bin and identifies the top ones as potentially vulnerable. Our experimental results on 11 real-world projects demonstrate that LEOPARD can cover 74.0% of vulnerable functions by identifying 20% of functions as vulnerable, outperforming machine learning-based and static analysis-based techniques. We further propose three applications of LEOPARD for manual code review and fuzzing, through which we discovered 22 new bugs in real applications such as PHP, radare2, and FFmpeg, eight of which are new vulnerabilities.
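
    The two-step scheme is straightforward to sketch; the metric fields below are placeholders for LEOPARD's systematically derived complexity and vulnerability metrics:

        from collections import defaultdict

        def leopard_rank(functions, n_bins=3, top_per_bin=1):
            """Step 1: bin functions by complexity so simple and complex
            code are never ranked against each other. Step 2: within each
            bin, rank by vulnerability metrics and flag the top ones."""
            lo = min(f["complexity"] for f in functions)
            hi = max(f["complexity"] for f in functions)
            width = (hi - lo) / n_bins or 1.0
            bins = defaultdict(list)
            for f in functions:
                idx = min(int((f["complexity"] - lo) / width), n_bins - 1)
                bins[idx].append(f)
            flagged = []
            for group in bins.values():
                group.sort(key=lambda f: f["vuln"], reverse=True)
                flagged += [f["name"] for f in group[:top_per_bin]]
            return flagged

        funcs = [{"name": "parse_hdr", "complexity": 42, "vuln": 9},
                 {"name": "log_msg",   "complexity": 40, "vuln": 2},
                 {"name": "init_cfg",  "complexity": 5,  "vuln": 1}]
        print(leopard_rank(funcs))  # one candidate per occupied bin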

    Updated: 2020-01-22
  • RESTORE: Retrospective Fault Localization Enhancing Automated Program Repair
    arXiv.cs.SE Pub Date : 2019-06-05
    Tongtong Xu; Liushan Chen; Yu Pei; Tian Zhang; Minxue Pan; Carlo A. Furia

    Fault localization is a crucial step of automated program repair, because accurately identifying the program locations most closely implicated with a fault greatly affects the effectiveness of the patching process. An ideal fault localization technique would provide precise information while requiring moderate computational resources, so as to best support an efficient search for correct fixes. In contrast, most automated program repair tools use standard fault localization techniques, which are not tightly integrated with the overall program repair process and hence deliver only subpar efficiency. In this paper, we present retrospective fault localization: a novel fault localization technique geared to the requirements of automated program repair. A key idea of retrospective fault localization is to reuse the outcome of failed patch validation to support mutation-based dynamic analysis, providing accurate fault localization information without incurring onerous computational costs. We implemented retrospective fault localization in a tool called RESTORE, based on the JAID Java program repair system. Experiments involving faults from the Defects4J standard benchmark indicate that retrospective fault localization can boost automated program repair: RESTORE efficiently explores a large fix space, delivering state-of-the-art effectiveness (41 Defects4J bugs correctly fixed, 8 more than any other automated repair tool for Java) while simultaneously boosting performance (a speedup of over 3x compared to JAID). Retrospective fault localization is applicable to any automated program repair technique that relies on fault localization and dynamic validation of patches.

    Updated: 2020-01-22
  • Microservices Migration in Industry: Intentions, Strategies, and Challenges
    arXiv.cs.SE Pub Date : 2019-06-11
    Jonas Fritzsch; Justus Bogner; Stefan Wagner; Alfred Zimmermann

    To remain competitive in a fast changing environment, many companies started to migrate their legacy applications towards a Microservices architecture. Such extensive migration processes require careful planning and consideration of implications and challenges likewise. In this regard, hands-on experiences from industry practice are still rare. To fill this gap in scientific literature, we contribute a qualitative study on intentions, strategies, and challenges in the context of migrations to Microservices. We investigated the migration process of 14 systems across different domains and sizes by conducting 16 in-depth interviews with software professionals from 10 companies. We present a separate description of each case and summarize the most important findings. As primary migration drivers, maintainability and scalability were identified. Due to the high complexity of their legacy systems, most companies preferred a rewrite using current technologies over splitting up existing code bases. This was often caused by the absence of a suitable decomposition approach. As such, finding the right service cut was a major technical challenge, next to building the necessary expertise with new technologies. Organizational challenges were especially related to large, traditional companies that simultaneously established agile processes. Initiating a mindset change and ensuring smooth collaboration between teams were crucial for them. Future research on the evolution of software systems will in particular profit from the individual cases presented.

    Updated: 2020-01-22
  • On the adoption, usage and evolution of Kotlin Features on Android development
    arXiv.cs.SE Pub Date : 2019-07-21
    Bruno Góis Mateus; Matias Martinez

    Currently, more than 2 million applications are published on Google Play, the official store of Android applications, which makes Android the largest mobile platform. Since 2017, when Google announced Kotlin as an official programming language of the Android platform, developers have had the option of writing applications in Kotlin, which combines object-oriented and functional features. The goal of this paper is to understand the usage of Kotlin's features. In particular, we are interested in four aspects of feature usage: which features are adopted and to what degree, when these features are first added to Android applications, which features are introduced first, and how feature usage evolves along with the applications' evolution. To analyze the usage of Kotlin features, we inspect the source code of Kotlin applications. To study how a feature is used along the life-cycle of a given mobile application, we identify the Kotlin features used in each version of that application and compute the moment each feature is used for the first time. Finally, we identify the evolution trend that best describes the usage of a given feature. Our experiment showed that 15 out of 26 features are used in at least 50% of Android applications written in Kotlin. We also observed that the most used Kotlin features are those first used in Android applications. Finally, we report that the majority of applications tend to increase the number of instances of 24 of the 26 studied features as the applications evolve. Our study investigates Kotlin feature usage and evolution and yields 8 main findings. We present the implications of our findings for developers, researchers, tool builders, and language designers, in order to foster the use of Kotlin features in the context of Android development.

    Updated: 2020-01-22
  • On the k-synchronizability of systems
    arXiv.cs.SE Pub Date : 2019-09-04
    Cinzia Di Giusto; Laetitia Laversa; Etienne Lozes

    In this paper, we work on the notion of k-synchronizability: a system is k-synchronizable if any of its executions, up to reordering causally independent actions, can be divided into a succession of k-bounded interaction phases. We show two results (both for mailbox and peer-to-peer automata): first, the reachability problem is decidable for k-synchronizable systems; second, the membership problem (whether a given system is k-synchronizable) is decidable as well. Our proofs fix several important issues in previous attempts to prove these two results for mailbox automata.

    Updated: 2020-01-22
  • Towards Empirically Validated Remedies for Scrum Retrospective Headaches
    arXiv.cs.SE Pub Date : 2019-10-19
    Christoph Matthies; Franziska Dobrigkeit

    Agile methods, especially Scrum, have become staples of the modern software development industry. Retrospective meetings are Scrum's instrument for process improvement and adaptation, and they are considered one of the most important aspects of the Scrum method and its implementation in organizations. However, Retrospectives face their own challenges. Agile practitioners have highlighted common problems, i.e., headaches, that repeatedly appear in meetings and negatively impact the quality of process improvement efforts. To remedy these headaches, Retrospective activities, which can help teams think together and break the usual routine, have been proposed. In this research, we present case studies of educational and industry teams, investigating the effects of eleven Retrospective activities on five identified headaches. While we find evidence for the claimed benefits of the activities in the majority of studied cases, applying the remedies also led to new headaches arising.

    Updated: 2020-01-22
  • Usability Methods for Designing Programming Languages for Software Engineers
    arXiv.cs.SE Pub Date : 2019-12-10
    Michael Coblenz; Gauri Kambhatla; Paulette Koronkevich; Jenna L. Wise; Celeste Barnaby; Joshua Sunshine; Jonathan Aldrich; Brad A. Myers

    Programming language design requires making many usability-related design decisions. We explored using user-centered methods to make languages more effective for programmers. However, existing HCI methods expect iteration with appropriate users, who must learn to use the language to be evaluated. These methods were impractical to apply to programming languages: they have high iteration costs, programmers require significant learning time, and user performance has high variance. To address these problems, we adapted HCI methods to reduce iteration and training costs and designed tasks and analyses that mitigated the high variance. We evaluated the methods by using them to design two languages for professional developers. Glacier extends Java to enable programmers to express immutability properties effectively and easily. Obsidian is a language for blockchains that includes verification of critical safety properties. Summative usability studies showed that programmers were able to program effectively in both languages after short training periods.

    Updated: 2020-01-22
  • Quantitative Aspects of Programming Languages and Systems over the past $2^4$ years and beyond
    arXiv.cs.PL Pub Date : 2020-01-20
    Alessandro Aldini (University of Urbino)

    Quantitative aspects of computation are related to the use of both physical and mathematical quantities, including time, performance metrics, probability, and measures for reliability and security. They are essential in characterizing the behaviour of many critical systems and in estimating their properties. Hence, they need to be integrated both at the level of system modeling and within verification methodologies and tools. Over the last two decades, a variety of theoretical achievements and automated techniques have helped make quantitative modeling and verification mainstream in the research community. In the same period, they represented the central theme of the series of workshops entitled Quantitative Aspects of Programming Languages and Systems (QAPL), born in 2001. The aim of this survey is to revisit these achievements and results from the standpoint of QAPL and its community.

    Updated: 2020-01-22
  • Probabilistic Output Analyses for Deterministic Programs --- Reusing Existing Non-probabilistic Analyses
    arXiv.cs.PL Pub Date : 2020-01-20
    Maja Hanne Kirkeby (Computer Science, Roskilde University, Denmark)

    We consider reusing established non-probabilistic output analyses (either forward or backward) that yield over-approximations of a program's pre-image or image relation, e.g., interval analyses. We assume a probability measure over the program input and present two techniques (one for forward and one for backward analyses) that derive both upper and lower probability bounds for the output events. We demonstrate the more involved technique, namely the forward one, on two examples and compare its results to a cutting-edge probabilistic output analysis.
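
    The bounding construction can be shown on a toy program. In the sketch below the input measure is uniform and the over-approximate pre-images are written by hand, where the paper would obtain them from an existing backward interval analysis:

        def measure(intervals, lo, hi):
            # Mass of a union of disjoint intervals under Uniform(lo, hi).
            return sum(max(0.0, min(b, hi) - max(a, lo))
                       for a, b in intervals) / (hi - lo)

        # Program: y = x * x, with input x ~ Uniform(-1, 3).
        # Output event E: y in [0, 1]; the exact pre-image is [-1, 1].
        pre_E    = [(-1.0, 1.25)]  # over-approximate pre-image of E
        pre_notE = [(0.75, 3.0)]   # over-approximate pre-image of not-E

        upper = measure(pre_E, -1, 3)         # P(E) <= 0.5625
        lower = 1 - measure(pre_notE, -1, 3)  # P(E) >= 0.4375
        print(lower, upper)                   # the true P(E) = 0.5 lies between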

    Updated: 2020-01-22
  • Streaming Transformations of Infinite Ordered-Data Words
    arXiv.cs.PL Pub Date : 2020-01-20
    Xiaokang Qiu

    In this paper, we define the streaming register transducer (SRT), a one-way, letter-to-letter, transductional machine model for transformations of infinite data words whose data domain forms a linear group. Compared with existing data word transducers, SRT can perform two extra operations on the registers: a linear-order-based comparison and an additive update. We consider the transformations that can be defined by SRT and several subclasses of SRT. We investigate the expressiveness of these languages and several decision problems. Our main results include: 1) SRT are closed under union and intersection, and add-free SRT are also closed under composition; 2) SRT-definable transformations can be defined in monadic second-order (MSO) logic, but are not comparable with first-order (FO) definable transformations; 3) the functionality problem is decidable for add-free SRT, and the reactivity and inclusion problems are decidable for deterministic add-free SRT, but none of these problems is decidable in general for SRT.
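
    A toy rendering of the machine model follows; the transition structure is ad hoc rather than the paper's formal definition. One register holds the previous datum for the linear-order comparison, a second is maintained purely by additive updates, and exactly one output letter is emitted per input letter:

        def srt_run(data_word):
            """Toy streaming register transducer over an integer data word
            (infinite in the paper; truncated here)."""
            last = None   # register updated by plain reassignment
            count = 0     # register updated only additively
            for datum in data_word:
                ascending = last is not None and datum > last  # order comparison
                if ascending:
                    count = count + 1                          # additive update
                yield (datum, ascending, count)                # letter-to-letter
                last = datum

        print(list(srt_run([3, 1, 4, 1, 5, 9])))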

    Updated: 2020-01-22
  • Modular coinduction up-to for higher-order languages via first-order transition systems
    arXiv.cs.PL Pub Date : 2020-01-20
    Jean-Marie Madiot; Damien Pous; Davide Sangiorgi

    The bisimulation proof method can be enhanced by employing `bisimulations up-to' techniques. A comprehensive theory of such enhancements has been developed for first-order (i.e., CCS-like) labelled transition systems (LTSs) and bisimilarity, based on abstract fixed-point theory and compatible functions. We transport this theory onto languages whose bisimilarity and LTS go beyond those of first-order models. The approach consists in exhibiting fully abstract translations of the more sophisticated LTSs and bisimilarities onto the first-order ones. This allows us to reuse directly the large corpus of up-to techniques that are available on first-order LTSs. The only ingredient that has to be manually supplied is the compatibility of basic up-to techniques that are specific to the new languages. We investigate the method on the pi-calculus, the lambda-calculus, and a (call-by-value) lambda-calculus with references.

    Updated: 2020-01-22
  • Profunctor optics, a categorical update
    arXiv.cs.PL Pub Date : 2020-01-21
    Bryce Clarke; Derek Elkins; Jeremy Gibbons; Fosco Loregian; Bartosz Milewski; Emily Pillmore; Mario Román

    Profunctor optics are bidirectional data accessors that capture data transformation patterns such as accessing subfields or iterating over containers. They are modular, meaning that we can construct accessors for complex structures by combining simpler ones. Profunctor optics have been studied only using $\mathbf{Sets}$ as the enriching category and in the non-mixed case. However, functional programming languages are arguably better described by enriched categories and we have found that some structures in the literature are actually mixed optics. Our work generalizes a classic result by Pastro and Street on Tambara theory and uses it to describe mixed V-enriched profunctor optics and to endow them with V-category structure. We provide some original families of optics and derivations, including an elementary one for traversals that solves an open problem posed by Milewski. Finally, we discuss a Haskell implementation.

    Updated: 2020-01-22
  • Classical Control, Quantum Circuits and Linear Logic in Enriched Category Theory
    arXiv.cs.PL Pub Date : 2017-11-14
    Mathys Rennela; Sam Staton

    We describe categorical models of a circuit-based (quantum) functional programming language. We show that enriched categories play a crucial role. Following earlier work on QWire by Paykin et al., we consider both a simple first-order linear language for circuits, and a more powerful host language, such that the circuit language is embedded inside the host language. Our categorical semantics for the host language is standard, and involves cartesian closed categories and monads. We interpret the circuit language not in an ordinary category, but in a category that is enriched in the host category. We show that this structure is also related to linear/non-linear models. As an extended example, we recall an earlier result that the category of W*-algebras is dcpo-enriched, and we use this model to extend the circuit language with some recursive types.

    Updated: 2020-01-22
  • Complexity and Information in Invariant Inference
    arXiv.cs.PL Pub Date : 2019-10-27
    Yotam M. Y. Feldman; Neil Immerman; Mooly Sagiv; Sharon Shoham

    This paper addresses the complexity of SAT-based invariant inference, a prominent approach to safety verification. We consider the problem of inferring an inductive invariant of polynomial length given a transition system and a safety property. We analyze the complexity of this problem in a black-box model, called the Hoare-query model, which is general enough to capture algorithms such as IC3/PDR and its variants. An algorithm in this model learns about the system's reachable states by querying the validity of Hoare triples. We show that in general an algorithm in the Hoare-query model requires an exponential number of queries. Our lower bound is information-theoretic and applies even to computationally unrestricted algorithms, showing that no choice of generalization from the partial information obtained in a polynomial number of Hoare queries can lead to an efficient invariant inference procedure in this class. We then show, for the first time, that by utilizing rich Hoare queries, as done in PDR, inference can be exponentially more efficient than approaches such as ICE learning, which only utilize inductiveness checks of candidates. We do so by constructing a class of transition systems for which a simple version of PDR with a single frame infers invariants in a polynomial number of queries, whereas every algorithm using only inductiveness checks and counterexamples requires an exponential number of queries. Our results also shed light on connections and differences with the classical theory of exact concept learning with queries, and imply that learning from counterexamples to induction is harder than classical exact learning from labeled examples. This demonstrates that the convergence rate of Counterexample-Guided Inductive Synthesis depends on the form of counterexamples.
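
    The Hoare-query model can be made concrete with a tiny oracle. In this illustrative rendering (not the paper's formal definitions), an inference algorithm learns about the transition system only by asking whether a triple {P} Trans {Q} is valid:

        def hoare_query(trans, states, P, Q):
            """Validity of {P} trans {Q}: every transition out of a
            P-state lands in a Q-state. trans: state -> successors."""
            return all(Q(s2) for s in states if P(s) for s2 in trans(s))

        # Toy system: a 3-bit counter that increments and wraps to 0.
        states = range(8)
        trans = lambda s: [(s + 1) % 8]

        inv = lambda s: s < 6
        print(hoare_query(trans, states, inv, inv))  # False: state 5 steps to 6
        print(hoare_query(trans, states, lambda s: True, lambda s: True))  # True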

    Updated: 2020-01-22
  • Practical Sized Typing for Coq
    arXiv.cs.PL Pub Date : 2019-12-11
    Jonathan Chan; William J. Bowman

    Termination of recursive functions and productivity of corecursive functions are important for maintaining logical consistency in proof assistants. However, contemporary proof assistants, such as Coq, rely on syntactic criteria that prevent users from easily writing obviously terminating or productive programs, such as quicksort. This is troublesome, since there exist theories for type-based termination- and productivity-checking. In this paper, we present a design and implementation of sized type checking and inference for Coq. We extend past work on sized types for the Calculus of (Co)Inductive Constructions (CIC) with support for global definitions found in Gallina, and extend the sized-type inference algorithm to support completely unannotated Gallina terms. This allows our design to maintain complete backward compatibility with existing Coq developments. We provide an implementation that extends the Coq kernel with optional support for sized types.

    Updated: 2020-01-22
  • SPARTA: A Divide and Conquer Approach to Address Translation for Accelerators
    arXiv.cs.OS Pub Date : 2020-01-20
    Javier Picorel; Seyed Alireza Sanaee Kohroudi; Zi Yan; Abhishek Bhattacharjee; Babak Falsafi; Djordje Jevdjic

    Virtual memory (VM) is critical to the usability and programmability of hardware accelerators. Unfortunately, implementing accelerator VM efficiently is challenging because the area and power constraints make it difficult to employ the large multi-level TLBs used in general-purpose CPUs. Recent research proposals advocate a number of restrictions on virtual-to-physical address mappings in order to reduce the TLB size or increase its reach. However, such restrictions are unattractive because they forgo many of the original benefits of traditional VM, such as demand paging and copy-on-write. We propose SPARTA, a divide and conquer approach to address translation. SPARTA splits the address translation into accelerator-side and memory-side parts. The accelerator-side translation hardware consists of a tiny TLB covering only the accelerator's cache hierarchy (if any), while the translation for main memory accesses is performed by shared memory-side TLBs. Performing the translation for memory accesses on the memory side allows SPARTA to overlap data fetch with translation, and avoids the replication of TLB entries for data shared among accelerators. To further improve the performance and efficiency of the memory-side translation, SPARTA logically partitions the memory space, delegating translation to small and efficient per-partition translation hardware. Our evaluation on index-traversal accelerators shows that SPARTA virtually eliminates translation overhead, reducing it by over 30x on average (up to 47x) and improving performance by 57%. At the same time, SPARTA requires minimal accelerator-side translation hardware, reduces the total number of TLB entries in the system, gracefully scales with memory size, and preserves all key VM functionalities.

    Updated: 2020-01-22
  • Occlum: Secure and Efficient Multitasking Inside a Single Enclave of Intel SGX
    arXiv.cs.OS Pub Date : 2020-01-21
    Youren Shen; Hongliang Tian; Yu Chen; Kang Chen; Runji Wang; Yi Xu; Yubin Xia

    Intel Software Guard Extensions (SGX) enables user-level code to create private memory regions called enclaves, whose code and data are protected by the CPU from software and hardware attacks outside the enclaves. Recent work introduces library operating systems (LibOSes) to SGX so that legacy applications can run inside enclaves with few or even no modifications. As virtually any non-trivial application demands multiple processes, it is essential for LibOSes to support multitasking. However, none of the existing SGX LibOSes support multitasking both securely and efficiently. This paper presents Occlum, a system that enables secure and efficient multitasking on SGX. We implement the LibOS processes as SFI-Isolated Processes (SIPs). SFI is a software instrumentation technique for sandboxing untrusted modules (called domains). We design a novel SFI scheme named MPX-based, Multi-Domain SFI (MMDSFI) and leverage MMDSFI to enforce the isolation of SIPs. We also design an independent verifier to ensure the security guarantees of MMDSFI. With SIPs safely sharing the single address space of an enclave, the LibOS can implement multitasking efficiently. The Occlum LibOS outperforms the state-of-the-art SGX LibOS on multitasking-heavy workloads by up to 6,600X on micro-benchmarks and up to 500X on application benchmarks.

    Updated: 2020-01-22
  • SGX-LKL: Securing the Host OS Interface for Trusted Execution
    arXiv.cs.OS Pub Date : 2019-08-29
    Christian Priebe; Divya Muthukumaran; Joshua Lind; Huanzhou Zhu; Shujie Cui; Vasily A. Sartakov; Peter Pietzuch

    Hardware support for trusted execution in modern CPUs enables tenants to shield their data processing workloads in otherwise untrusted cloud environments. Runtime systems for the trusted execution must rely on an interface to the untrusted host OS to use external resources such as storage, network, and other functions. Attackers may exploit this interface to leak data or corrupt the computation. We describe SGX-LKL, a system for running Linux binaries inside of Intel SGX enclaves that only exposes a minimal, protected and oblivious host interface: the interface is (i) minimal because SGX-LKL uses a complete library OS inside the enclave, including file system and network stacks, which requires a host interface with only 7 calls; (ii) protected because SGX-LKL transparently encrypts and integrity-protects all data passed via low-level I/O operations; and (iii) oblivious because SGX-LKL performs host operations independently of the application workload. For oblivious disk I/O, SGX-LKL uses an encrypted ext4 file system with shuffled disk blocks. We show that SGX-LKL protects TensorFlow training with a 21% overhead.

    Updated: 2020-01-22
  • Parallel Performance of Algebraic Multigrid Domain Decomposition (AMG-DD)
    arXiv.cs.MS Pub Date : 2019-06-25
    Wayne B. Mitchell; Robert Strzodka; Robert D. Falgout

    Algebraic multigrid (AMG) is a widely used scalable solver and preconditioner for large-scale linear systems resulting from the discretization of a wide class of elliptic PDEs. While AMG has optimal computational complexity, the cost of communication has become a significant bottleneck that limits its scalability as processor counts continue to grow on modern machines. This paper examines the design, implementation, and parallel performance of a novel algorithm, Algebraic Multigrid Domain Decomposition (AMG-DD), designed specifically to limit communication. The goal of AMG-DD is to provide a low-communication alternative to standard AMG V-cycles by trading some additional computational overhead for a significant reduction in communication cost. Numerical results show that AMG-DD achieves superior accuracy per communication cost compared to AMG, and speedup over AMG is demonstrated on a large GPU cluster.

    Updated: 2020-01-22
  • Adaptive Parameterization for Neural Dialogue Generation
    arXiv.cs.IR Pub Date : 2020-01-18
    Hengyi Cai; Hongshen Chen; Cheng Zhang; Yonghao Song; Xiaofang Zhao; Dawei Yin

    Neural conversation systems generate responses based on the sequence-to-sequence (SEQ2SEQ) paradigm. Typically, the model is equipped with a single set of learned parameters to generate responses for given input contexts. When confronting diverse conversations, its adaptability is rather limited and the model is hence prone to generate generic responses. In this work, we propose an Adaptive Neural Dialogue generation model, AdaND, which manages various conversations with conversation-specific parameterization. For each conversation, the model generates parameters of the encoder-decoder by referring to the input context. In particular, we propose two adaptive parameterization mechanisms: a context-aware and a topic-aware parameterization mechanism. The context-aware parameterization directly generates the parameters by capturing local semantics of the given context. The topic-aware parameterization enables parameter sharing among conversations with similar topics by first inferring the latent topics of the given context and then generating the parameters with respect to the distributional topics. Extensive experiments conducted on a large-scale real-world conversational dataset show that our model achieves superior performance in terms of both quantitative metrics and human evaluations.

    Updated: 2020-01-22
  • Stacked Adversarial Network for Zero-Shot Sketch based Image Retrieval
    arXiv.cs.IR Pub Date : 2020-01-18
    Anubha Pandey; Ashish Mishra; Vinay Kumar Verma; Anurag Mittal; Hema A. Murthy

    Conventional approaches to Sketch-Based Image Retrieval (SBIR) assume that the data of all classes are available during training. This assumption may not always be practical, since the data of a few classes may be unavailable, or the classes may not appear at training time. Zero-Shot Sketch-Based Image Retrieval (ZS-SBIR) relaxes this constraint and allows the algorithm to handle previously unseen classes at test time. This paper proposes a generative approach for ZS-SBIR based on a Stacked Adversarial Network (SAN) combined with a Siamese Network (SN). While the SAN generates high-quality samples, the SN learns a better distance metric than that of nearest neighbor search. The capability of the generative model to synthesize image features based on the sketch reduces the SBIR problem to an image-to-image retrieval problem. We evaluate the efficacy of our proposed approach on the TU-Berlin and Sketchy databases in both the standard ZSL and the generalized ZSL settings. The proposed method yields a significant improvement in the standard ZSL setting as well as in the more challenging generalized ZSL setting (GZSL) for SBIR.

    Updated: 2020-01-22
  • Ranking Significant Discrepancies in Clinical Reports
    arXiv.cs.IR Pub Date : 2020-01-18
    Sean MacAvaney; Arman Cohan; Nazli Goharian; Ross Filice

    Medical errors are a major public health concern and a leading cause of death worldwide. Many healthcare centers and hospitals use reporting systems where medical practitioners write a preliminary medical report and the report is later reviewed, revised, and finalized by a more experienced physician. The revisions range from stylistic to corrections of critical errors or misinterpretations of the case. Due to the large quantity of reports written daily, it is often difficult to manually and thoroughly review all the finalized reports to find such errors and learn from them. To address this challenge, we propose a novel ranking approach, consisting of textual and ontological overlaps between the preliminary and final versions of reports. The approach learns to rank the reports based on the degree of discrepancy between the versions. This allows medical practitioners to easily identify and learn from the reports in which their interpretation most substantially differed from that of the attending physician (who finalized the report). This is a crucial step towards uncovering potential errors and helping medical practitioners to learn from such errors, thus improving patient-care in the long run. We evaluate our model on a dataset of radiology reports and show that our approach outperforms both previously-proposed approaches and more recent language models by 4.5% to 15.4%.
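
    The textual-overlap signal at the heart of such a ranker is easy to sketch. The feature below, token-level Jaccard distance between report versions, is an illustrative stand-in for the paper's richer textual and ontological features:

        def discrepancy(preliminary: str, final: str) -> float:
            """1 - Jaccard word overlap: 0 for identical vocabulary, near 1
            when the attending physician heavily rewrote the report."""
            a = set(preliminary.lower().split())
            b = set(final.lower().split())
            return 1 - len(a & b) / len(a | b)

        reports = [
            ("no acute abnormality", "no acute abnormality"),
            ("no acute abnormality",
             "large right pneumothorax requiring urgent attention"),
        ]
        # Surface the most substantially revised reports first.
        for pre, fin in sorted(reports, key=lambda r: -discrepancy(*r)):
            print(round(discrepancy(pre, fin), 2), "|", fin)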

    Updated: 2020-01-22
  • Information Foraging for Enhancing Implicit Feedback in Content-based Image Recommendation
    arXiv.cs.IR Pub Date : 2020-01-19
    Amit Kumar Jaiswal; Haiming Liu; Ingo Frommholz

    User implicit feedback plays an important role in recommender systems. However, finding implicit features is a tedious task. This paper aims to identify users' preferences through implicit behavioural signals for image recommendation, based on the Information Scent Model of Information Foraging Theory. In the first part, we hypothesise that users' perception is improved by visual cues in the images, behavioural signals that provide users with information scent during information seeking. We designed a content-based image recommendation system to explore which image attributes (i.e., visual cues or bookmarks) help users find their desired image. We found that users prefer recommendations predicated on visual cues and therefore consider visual cues good information scent for their information seeking. In the second part, we investigated whether visual cues in the images, together with the images themselves, can be better perceived by users than either on its own. We evaluated the information scent artifacts in image recommendation on the Pinterest image collection and the WikiArt dataset. We find that our proposed image recommendation system supports implicit signals, consistent with the information scent model of Information Foraging Theory.

    Updated: 2020-01-22
  • SlideImages: A Dataset for Educational Image Classification
    arXiv.cs.IR Pub Date : 2020-01-19
    David Morris; Eric Müller-Budack; Ralph Ewerth

    In the past few years, convolutional neural networks (CNNs) have achieved impressive results in computer vision tasks, which, however, mainly focus on photos with natural scene content. At the same time, non-sensor-derived images such as illustrations, data visualizations, and figures are typically used to convey complex information or to explore large datasets, yet this kind of image has received little attention in computer vision. CNNs and similar techniques require large volumes of training data, and many document analysis systems are currently trained in part on scene images due to the lack of large datasets of educational image data. In this paper, we address this issue and present SlideImages, a dataset for the task of classifying educational illustrations. SlideImages contains training data collected from various sources, e.g., Wikimedia Commons and the AI2D dataset, and test data collected from educational slides. We have reserved all the actual educational images as a test dataset in order to ensure that approaches using this dataset generalize well to new educational images, and potentially to other domains. Furthermore, we present a baseline system using a standard deep neural architecture and discuss the challenge of limited training data.

    Updated: 2020-01-22
  • On the Minimum Achievable Age of Information for General Service-Time Distributions
    arXiv.cs.IR Pub Date : 2020-01-19
    Jaya Prakash Champati; Ramana R. Avula; Tobias J. Oechtering; James Gross

    There is growing interest in analysing the freshness of data in networked systems, and Age of Information (AoI) has emerged as a popular metric to quantify this freshness at a given destination. There has been a significant research effort in optimizing this metric in communication and networking systems under different settings. In contrast to previous works, we are interested in a fundamental question: what is the minimum achievable AoI in any single-server-single-source queuing system for a given service-time distribution? To address this question, we study the problem of optimizing AoI under service preemptions. Our main result is a characterization of the minimum achievable average peak AoI (PAoI). We obtain this result by showing that a fixed-threshold policy is optimal within the set of all randomized-threshold causal policies. We use the characterization to provide necessary and sufficient conditions on the service-time distributions under which preemptions are beneficial.
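
    A minimal simulation sketch of a fixed-threshold preemption policy, under assumptions not stated in the abstract: a generate-at-will source, restart of service after each preemption, and exponential service times. It estimates the average peak AoI empirically; the paper's contribution is the analytical characterization, which this does not reproduce.

        import numpy as np

        def mean_peak_aoi(tau, draw_service, n_deliveries=50_000, seed=0):
            # Simulate a fixed-threshold policy: abort any service attempt that
            # exceeds tau and restart with a fresh update.
            rng = np.random.default_rng(seed)
            peaks, prev_age = [], 0.0
            for _ in range(n_deliveries):
                elapsed = 0.0
                while True:
                    s = draw_service(rng)
                    if s <= tau:          # delivery succeeds
                        elapsed += s
                        break
                    elapsed += tau        # preempt after tau and retry
                # Peak AoI just before a delivery = age after the previous
                # delivery plus the inter-delivery time.
                peaks.append(prev_age + elapsed)
                prev_age = s              # delivered update was generated at attempt start
            return float(np.mean(peaks))

        print(mean_peak_aoi(tau=2.0, draw_service=lambda r: r.exponential(1.0)))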

    Updated: 2020-01-22
  • Common Conversational Community Prototype: Scholarly Conversational Assistant
    arXiv.cs.IR Pub Date : 2020-01-19
    Krisztian Balog; Lucie Flekova; Matthias Hagen; Rosie Jones; Martin Potthast; Filip Radlinski; Mark Sanderson; Svitlana Vakulenko; Hamed Zamani

    This paper discusses the potential for creating academic resources (tools, data, and evaluation approaches) to support research in conversational search, by focusing on realistic information needs and conversational interactions. Specifically, we propose to develop and operate a prototype conversational search system for scholarly activities. This Scholarly Conversational Assistant would serve as a useful tool, a means to create datasets, and a platform for running evaluation challenges by groups across the community. This article results from discussions of a working group at Dagstuhl Seminar 19461 on Conversational Search.

    Updated: 2020-01-22
  • Quantum-like Structure in Multidimensional Relevance Judgements
    arXiv.cs.IR Pub Date : 2020-01-20
    Sagar Uprety; Prayag Tiwari; Shahram Dehdashti; Lauren Fell; Dawei Song; Peter Bruza; Massimo Melucci

    A large number of studies in cognitive science have revealed that probabilistic outcomes of certain human decisions do not agree with the axioms of classical probability theory. The field of Quantum Cognition provides an alternative probabilistic model to explain such paradoxical findings. It posits that cognitive systems have an underlying quantum-like structure, especially in decision-making under uncertainty. In this paper, we hypothesise that relevance judgement, being a multidimensional cognitive concept, can be used to probe the quantum-like structure for modelling users' cognitive states in information seeking. Building on an experimental protocol inspired by the Stern-Gerlach experiment in Quantum Physics, we design a crowd-sourced user study to show violation of the Kolmogorovian probability axioms as evidence of the quantum-like structure, and provide a comparison between a quantum probabilistic model and a Bayesian model for predicting relevance.
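
    A toy illustration of the kind of classical constraint such studies test: under Kolmogorovian probability, the order in which two relevance dimensions are judged should not change the joint outcome. The counts below are hypothetical.

        # Hypothetical counts from a crowd-sourced study with two question orders.
        n_ab, k_ab = 400, 212   # order A->B: participants, both-relevant outcomes
        n_ba, k_ba = 400, 168   # order B->A: participants, both-relevant outcomes

        p_ab, p_ba = k_ab / n_ab, k_ba / n_ba
        print(f"P(A then B) = {p_ab:.3f}, P(B then A) = {p_ba:.3f}")
        # Under classical (commutative) probability the two estimates should agree
        # up to sampling noise; a persistent gap is a quantum-like order effect.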

    Updated: 2020-01-22
  • Audio Summarization with Audio Features and Probability Distribution Divergence
    arXiv.cs.IR Pub Date : 2020-01-20
    Carlos-Emiliano González-Gallardo; Romain Deveaud; Eric SanJuan; Juan-Manuel Torres

    The automatic summarization of multimedia sources is an important task that facilitates understanding by condensing the source while maintaining relevant information. In this paper we focus on audio summarization based on audio features and probability distribution divergence. Our method, based on an extractive summarization approach, aims to select the most relevant segments until a time threshold is reached. It takes into account each segment's length, position and informativeness value. The informativeness of each segment is obtained by mapping a set of audio features derived from its Mel-frequency cepstral coefficients to their corresponding Jensen-Shannon divergence score. Results over a multi-evaluator scheme show that our approach produces understandable and informative summaries.
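
    One plausible reading of the informativeness score, sketched in Python with hypothetical parameters: compute a normalized MFCC-based distribution per segment (via librosa), score each segment by its Jensen-Shannon distance from the whole recording's distribution (via scipy), and greedily keep the highest-scoring segments until the time budget is spent. The paper's actual feature mapping may differ.

        import numpy as np
        import librosa
        from scipy.spatial.distance import jensenshannon

        sr, seg_len, budget = 22050, 5.0, 15.0          # all in seconds except sr
        y = np.random.default_rng(0).normal(size=int(sr * 60)).astype(np.float32)

        def mfcc_dist(sig):
            # Normalized distribution over mean (shifted-positive) MFCC bins.
            m = librosa.feature.mfcc(y=sig, sr=sr, n_mfcc=13).mean(axis=1)
            m = m - m.min() + 1e-9
            return m / m.sum()

        global_dist = mfcc_dist(y)
        hop = int(sr * seg_len)
        segments = [(i / sr, y[i:i + hop]) for i in range(0, len(y) - hop, hop)]
        # Score each segment by how far its MFCC distribution is from the global one.
        scored = sorted(segments, key=lambda s: -jensenshannon(mfcc_dist(s[1]), global_dist))
        summary, total = [], 0.0
        for start, seg in scored:
            if total + seg_len > budget:
                break
            summary.append(start)
            total += seg_len
        print(sorted(summary))   # start times (s) of selected segments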

    Updated: 2020-01-22
  • BiOnt: Deep Learning using Multiple Biomedical Ontologies for Relation Extraction
    arXiv.cs.IR Pub Date : 2020-01-20
    Diana Sousa; Francisco M. Couto

    Successful biomedical relation extraction can provide evidence to researchers and clinicians about possible unknown associations between biomedical entities, advancing the current knowledge we have about those entities and their inherent mechanisms. Most biomedical relation extraction systems do not resort to external sources of knowledge, such as domain-specific ontologies. However, using deep learning methods along with biomedical ontologies has recently been shown to effectively advance the biomedical relation extraction field. To perform relation extraction, our deep learning system, BiOnt, employs four types of biomedical ontologies, namely the Gene Ontology, the Human Phenotype Ontology, the Human Disease Ontology, and the Chemical Entities of Biological Interest, covering gene-products, phenotypes, diseases, and chemical compounds, respectively. We tested our system on three data sets that represent three different types of relations between biomedical entities. BiOnt achieved, in F-score, an improvement of 4.93 percentage points for drug-drug interactions (DDI corpus), 4.99 percentage points for phenotype-gene relations (PGR corpus), and 2.21 percentage points for chemical-induced disease relations (BC5CDR corpus), relative to the state of the art. The code supporting this system is available at https://github.com/lasigeBioTM/BiONT.

    Updated: 2020-01-22
  • Finding temporal patterns using algebraic fingerprints
    arXiv.cs.IR Pub Date : 2020-01-20
    Suhas Thejaswi; Aristides Gionis

    In this paper we study a family of pattern-detection problems in vertex-colored temporal graphs. In particular, given a vertex-colored temporal graph and a multi-set of colors as a query, we search for temporal paths in the graph that contain the colors specified in the query. These types of problems have several interesting applications, for example, recommending tours for tourists or searching for abnormal behavior in a network of financial transactions. For the family of pattern-detection problems we define, we establish complexity results and design an algebraic-algorithmic framework based on constrained multilinear sieving. We demonstrate that our solution can scale to massive graphs with up to a hundred million edges, despite the problems being NP-hard. Our implementation, which is publicly available, exhibits practical edge-linear scalability and is highly optimized. For example, in a real-world graph dataset with more than six million edges and a multi-set query with ten colors, we can extract an optimal solution in less than eight minutes on a Haswell desktop with four cores.
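
    For intuition only, here is a brute-force baseline for the query itself: a depth-first search for a time-respecting path whose vertex colors match the query multiset. This naive search is exponential in the worst case; the constrained multilinear sieving framework is precisely what replaces it at scale.

        from collections import Counter

        def find_temporal_path(edges, colors, query):
            # edges: (u, v, t) triples; colors: vertex -> color; query: multiset of colors.
            # Returns one time-respecting path whose vertex colors match the query exactly.
            need = Counter(query)
            adj = {}
            for u, v, t in edges:
                adj.setdefault(u, []).append((v, t))

            def dfs(path, last_t, remaining):
                if not remaining:
                    return path
                for v, t in adj.get(path[-1], []):
                    if t > last_t and v not in path and remaining[colors[v]] > 0:
                        found = dfs(path + [v], t, remaining - Counter([colors[v]]))
                        if found:
                            return found
                return None

            for start in {u for u, _, _ in edges}:
                if need[colors[start]] > 0:
                    hit = dfs([start], -1, need - Counter([colors[start]]))
                    if hit:
                        return hit
            return None

        edges = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5)]
        print(find_temporal_path(edges, {"a": "r", "b": "g", "c": "b"}, ["r", "g", "b"]))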

    Updated: 2020-01-22
  • Random-walk Based Generative Model for Classifying Document Networks
    arXiv.cs.IR Pub Date : 2020-01-21
    Takafumi J. Suzuki

    Document networks are found in various collections of real-world data, such as citation networks, hyperlinked web pages, and online social networks. A large number of generative models have been proposed because they offer intuitive and useful pictures for analyzing document networks. Prominent examples are relational topic models, where documents are linked according to their topic similarities. However, existing generative models do not make full use of network structure because they depend largely on topic modeling of documents. In particular, the centrality of graph nodes is missing from the generative processes of previous models. In this paper, we propose a novel generative model for document networks that introduces random walkers on networks to integrate node centrality into the link generation process. The developed method is evaluated on semi-supervised classification tasks with real-world citation networks. We show that the proposed model outperforms existing probabilistic approaches, especially in detecting communities in connected networks.
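
    The centrality ingredient the abstract highlights can be computed with a standard random-walk stationary distribution (PageRank-style power iteration), sketched below on a toy citation network; how that centrality enters the link generation process is the paper's contribution and is not reproduced here.

        import numpy as np

        def random_walk_centrality(adj, damping=0.85, iters=100):
            # PageRank-style stationary distribution of a random walker with restarts.
            n = adj.shape[0]
            deg = adj.sum(axis=1, keepdims=True)
            # Rows with no out-links fall back to a uniform jump.
            p = np.divide(adj, deg, out=np.full_like(adj, 1.0 / n), where=deg > 0)
            pi = np.full(n, 1.0 / n)
            for _ in range(iters):
                pi = damping * pi @ p + (1 - damping) / n
            return pi / pi.sum()

        # Tiny citation network: document 2 is cited by both others.
        A = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 0]], dtype=float)
        print(random_walk_centrality(A))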

    Updated: 2020-01-22
  • Hybrid Semantic Recommender System for Chemical Compounds
    arXiv.cs.IR Pub Date : 2020-01-21
    Marcia Barros; André Moitinho; Francisco M. Couto

    Recommending chemical compounds of interest to a particular researcher is a poorly explored field. The few existing datasets with information about researchers' preferences use implicit feedback, and the lack of recommender systems in this field presents a challenge for the development of new recommendation models. In this work, we propose a hybrid recommender model for recommending chemical compounds. The model integrates collaborative-filtering algorithms for implicit feedback (Alternating Least Squares (ALS) and Bayesian Personalized Ranking (BPR)) with semantic similarity between the chemical compounds in the ChEBI ontology (ONTO). We evaluated the model on an implicit-feedback dataset of chemical compounds, CheRM. The hybrid model improved on the results of state-of-the-art collaborative-filtering algorithms, especially in Mean Reciprocal Rank, with an increase of 6.7% for the hybrid ALS_ONTO over the collaborative-filtering ALS.
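
    The blending step can be sketched as a linear combination of a collaborative-filtering score with the best ontology similarity to any compound the researcher has already used. The blend weight, the linear form and all inputs below are assumptions for illustration; the actual model builds on ALS/BPR scores and ChEBI-based similarity.

        import numpy as np

        def hybrid_scores(cf_scores, onto_sim, seen_items, alpha=0.7):
            # Linear blend of collaborative-filtering scores with the maximum
            # ontology similarity to any compound the researcher already used.
            sem = onto_sim[:, seen_items].max(axis=1)
            return alpha * cf_scores + (1 - alpha) * sem

        rng = np.random.default_rng(1)
        n_items = 6
        cf = rng.random(n_items)                  # e.g. ALS or BPR scores
        sim = rng.random((n_items, n_items))      # e.g. ChEBI-based similarity
        ranking = np.argsort(-hybrid_scores(cf, sim, seen_items=[0, 2]))
        print(ranking)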

    Updated: 2020-01-22
  • Discovering seminal works with marker papers
    arXiv.cs.IR Pub Date : 2019-01-22
    Robin Haunschild; Werner Marx

    Bibliometric information retrieval in databases can employ different strategies. Commonly, queries are performed by searching in title, abstract and/or author keywords (author vocabulary). More advanced queries employ database keywords to search in a controlled vocabulary. Queries based on search terms can be augmented with their citing papers if a research field cannot be curtailed by the search query alone. Here, we present another strategy to discover the most important papers of a research field. A marker paper is used to reveal the most important works for the relevant community. All papers co-cited with the marker paper are analyzed using reference publication year spectroscopy (RPYS). For demonstration of the marker paper approach, density functional theory (DFT) is used as a research field. Comparisons between a prior RPYS on a publication set compiled using a keyword-based search in a controlled vocabulary and three different co-citation RPYS (RPYS-CO) analyses show very similar results. Similarities and differences are discussed.
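
    A minimal RPYS sketch, assuming the common formulation in which each cited-reference publication year is scored by its deviation from the median count of a surrounding five-year window; years with large positive deviation point to candidate seminal works.

        from collections import Counter
        from statistics import median

        def rpys(cited_years, window=2):
            # Deviation of each year's citation count from the median of the
            # surrounding (2*window+1)-year window; large positive peaks mark
            # candidate seminal publication years.
            counts = Counter(cited_years)
            dev = {}
            for y in range(min(counts), max(counts) + 1):
                neighborhood = [counts.get(y + d, 0) for d in range(-window, window + 1)]
                dev[y] = counts.get(y, 0) - median(neighborhood)
            return dev

        refs = [1964, 1964, 1964, 1965, 1980, 1986, 1986, 1988, 1988, 1988, 1988]
        peaks = {y: d for y, d in rpys(refs).items() if d > 0}
        print(peaks)   # 1964 and 1988 stand out with the largest deviations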

    Updated: 2020-01-22
  • GLEE: Geometric Laplacian Eigenmap Embedding
    arXiv.cs.IR Pub Date : 2019-05-23
    Leo Torres; Kevin S Chan; Tina Eliassi-Rad

    Graph embedding seeks to build a low-dimensional representation of a graph G. This low-dimensional representation is then used for various downstream tasks. One popular approach is Laplacian Eigenmaps, which constructs a graph embedding based on the spectral properties of the Laplacian matrix of G. The intuition behind it, and many other embedding techniques, is that the embedding of a graph must respect node similarity: similar nodes must have embeddings that are close to one another. Here, we dispose of this distance-minimization assumption. Instead, we use the Laplacian matrix to find an embedding with geometric properties instead of spectral ones, by leveraging the so-called simplex geometry of G. We introduce a new approach, Geometric Laplacian Eigenmap Embedding (or GLEE for short), and demonstrate that it outperforms various other techniques (including Laplacian Eigenmaps) in the tasks of graph reconstruction and link prediction.
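
    As we read the construction, GLEE swaps the usual choice of eigenpairs: it factorizes the Laplacian as L = S S^T and keeps the columns of S associated with the largest eigenvalues, rather than the smallest ones used by Laplacian Eigenmaps. A numpy sketch on a toy graph (details such as scaling conventions are assumptions):

        import numpy as np

        def glee_embedding(adj, dim):
            # Factorize the Laplacian L = S S^T and keep the columns of S that
            # correspond to the largest eigenvalues (the geometric, not spectral, end).
            deg = np.diag(adj.sum(axis=1))
            lap = deg - adj
            vals, vecs = np.linalg.eigh(lap)        # ascending eigenvalues
            top = np.argsort(vals)[::-1][:dim]
            return vecs[:, top] * np.sqrt(vals[top])

        # Toy graph: a triangle plus a pendant vertex.
        A = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        S = glee_embedding(A, dim=2)
        # S @ S.T approximates the Laplacian as dim approaches n.
        print(np.round(S @ S.T, 2))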

    Updated: 2020-01-22
  • Privacy Preserving Threat Hunting in Smart Home Environments
    arXiv.cs.IR Pub Date : 2019-11-06
    Ahmed M. Elmisery; Mirela Sertovic

    The recent proliferation of smart home environments offers new and transformative opportunities for various domains, with a commitment to enhancing quality of life and experience. Most of these environments combine gadgets from multiple stakeholders in a dynamic and decentralized manner, which presents new challenges for digital investigation. In addition, a plentiful amount of data records is generated by the day-to-day interactions between these gadgets and homeowners, which makes such data difficult to manage and analyze. Analysts need new digital investigation approaches to overcome the limitations of traditional approaches in these environments. The digital evidence in such environments can be found in the logfiles that record the historical events occurring inside the smart home. Threat hunting can leverage the collective nature of these gadgets to gain deeper insights into the best way of responding to new threats, which in turn can be valuable in reducing the impact of breaches. Nevertheless, this approach depends mainly on the readiness of smart homeowners to share the usage logs extracted from their smart home environments, and they may be disinclined to use such a service given the sensitive nature of the information logged by their personal gateways. In this paper, we present an approach that enables smart homeowners to share their usage logs in a privacy-preserving manner. A distributed threat hunting approach is developed that permits the composition of diverse threat classes without revealing the logged records to the other involved parties. Furthermore, we propose a scenario depicting proactive threat-intelligence sharing for the detection of potential threats in smart home environments, together with some experimental results.

    Updated: 2020-01-22
  • Model-Based Reinforcement Learning with Adversarial Training for Online Recommendation
    arXiv.cs.IR Pub Date : 2019-11-10
    Xueying Bai; Jian Guan; Hongning Wang

    Reinforcement learning is well suited to optimizing policies of recommender systems. Current solutions mostly focus on model-free approaches, which require frequent interactions with the real environment and thus make model learning expensive. Offline evaluation methods, such as importance sampling, can alleviate these limitations, but they usually require a large amount of logged data and do not work well when the action space is large. In this work, we propose a model-based reinforcement learning solution which models user-agent interaction for offline policy learning via a generative adversarial network. To reduce bias in the learned model and policy, we use a discriminator to evaluate the quality of generated data and scale the generated rewards. Our theoretical analysis and empirical evaluations demonstrate the effectiveness of our solution in learning policies from the offline and generated data.

    Updated: 2020-01-22
  • LATTE: Latent Type Modeling for Biomedical Entity Linking
    arXiv.cs.IR Pub Date : 2019-11-21
    Ming Zhu; Busra Celikkaya; Parminder Bhatia; Chandan K. Reddy

    Entity linking is the task of linking mentions of named entities in natural language text to entities in a curated knowledge base. This is of significant importance in the biomedical domain, where it could be used to semantically annotate a large volume of clinical records and biomedical literature with standardized concepts described in an ontology such as the Unified Medical Language System (UMLS). We observe that with precise type information, entity disambiguation becomes a straightforward task. However, fine-grained type information is usually not available in the biomedical domain. Thus, we propose LATTE, a LATent Type Entity Linking model that improves entity linking by modeling latent fine-grained type information about mentions and entities. Unlike previous methods that perform entity linking directly between mentions and entities, LATTE jointly performs entity disambiguation and latent fine-grained type learning, without direct supervision. We evaluate our model on two biomedical datasets: MedMentions, a large-scale public dataset annotated with UMLS concepts, and a de-identified corpus of dictated doctor's notes annotated with ICD concepts. Extensive experimental evaluation shows that our model achieves significant performance improvements over several state-of-the-art techniques.

    Updated: 2020-01-22
  • Solving Cold Start Problem in Recommendation with Attribute Graph Neural Networks
    arXiv.cs.IR Pub Date : 2019-12-28
    Tieyun Qian; Yile Liang; Qing Li

    Matrix completion is a classic problem underlying recommender systems. It is traditionally tackled with matrix factorization. Recently, deep learning based methods, especially graph neural networks, have made impressive progress on this problem. Despite their effectiveness, existing methods focus on modeling the user-item interaction graph. The inherent drawback of such methods is that their performance is bound to the density of the interactions, which is, however, usually highly sparse. More importantly, for a cold-start user or item that has no interactions, such methods cannot learn its preference embedding, since there is no link to it in the graph. In this work, we develop a novel framework, Attribute Graph Neural Networks (AGNN), that exploits the attribute graph rather than the commonly used interaction graph, which makes it possible to learn embeddings for cold-start users and items. AGNN produces the preference embedding for a cold-start user or item by learning on the distribution of attributes with an extended variational auto-encoder structure. Moreover, we propose a new graph neural network variant, gated-GNN, to effectively aggregate various attributes of different modalities in a neighborhood. Empirical results on two real-world datasets demonstrate that our model yields significant improvements for cold-start recommendations and outperforms or matches state-of-the-art performance in the warm-start scenario.

    Updated: 2020-01-22
  • SEPT: Improving Scientific Named Entity Recognition with Span Representation
    arXiv.cs.IR Pub Date : 2019-11-08
    Tan Yan; Heyan Huang; Xian-Ling Mao

    We introduce a new scientific named entity recognizer called SEPT, which stands for Span Extractor with Pre-trained Transformers. In recent papers, span extractors have been demonstrated to be a powerful model compared with sequence labeling models. However, we discover that with the development of pre-trained language models, the performance of span extractors has become similar to that of sequence labeling models. To keep the advantages of span representation, we modify the model by under-sampling to balance the positive and negative samples and reduce the search space. Furthermore, we simplify the original network architecture to combine the span extractor with BERT. Experiments demonstrate that even the simplified architecture achieves the same performance, and SEPT achieves a new state-of-the-art result in scientific named entity recognition, even without relation information.
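
    The under-sampling idea can be sketched directly: enumerate all spans up to a maximum width, keep every gold span as a positive, and sample only a fixed multiple of negatives. The ratio and data below are hypothetical.

        import random

        def candidate_spans(tokens, gold, max_width=4, neg_ratio=2, seed=0):
            # Enumerate spans up to max_width; keep every gold span and
            # under-sample negatives at neg_ratio negatives per positive.
            spans = [(i, j) for i in range(len(tokens))
                     for j in range(i + 1, min(i + 1 + max_width, len(tokens) + 1))]
            pos = [s for s in spans if s in gold]
            neg = [s for s in spans if s not in gold]
            random.Random(seed).shuffle(neg)
            return pos, neg[: neg_ratio * max(len(pos), 1)]

        tokens = "we evaluate BERT on SciERC".split()
        pos, neg = candidate_spans(tokens, gold={(2, 3), (4, 5)})
        print(pos, neg)   # (start, end) half-open spans over tokens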

    Updated: 2020-01-22
  • The effect of national and international multiple affiliations on citation impact
    arXiv.cs.DL Pub Date : 2020-01-19
    Sichao Tong; Ting Yue; Zhesi Shen; Liying Yang

    Researchers affiliated with multiple institutions are increasingly common in the current scientific environment. In this paper we systematically analyze multi-affiliated authorship and its effect on citation impact, focusing on the scientific output of research collaboration. By considering the nationality of each institution, we further differentiate national from international multi-affiliated authorship and reveal their different patterns across disciplines and countries. We observe a large share of publications with multi-affiliated authorship (45.6%) in research collaboration, with a larger share of publications containing national multi-affiliated authorship in medicine- and biology-related disciplines, and a larger share of the international type in Space Science, Physics and Geosciences. From a country-based view, we distinguish between domestic and foreign multi-affiliated authorship with respect to a specific country. Taking the G7 and BRICS countries as samples at different levels of scientific and technological development, we find that domestic national multi-affiliated authorship is associated with higher citation impact in most disciplines for the G7 countries, while domestic international multi-affiliated authorship is more positively influential for most BRICS countries.
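
    The underlying classification step is simple to state: an authorship is nationally multi-affiliated when it lists several institutions from one country, and internationally multi-affiliated when the countries differ. A sketch, with a hypothetical data shape:

        def affiliation_type(institutions):
            # institutions: list of (name, country) pairs for one authorship.
            countries = {c for _, c in institutions}
            if len(institutions) <= 1:
                return "single"
            return "international" if len(countries) > 1 else "national"

        print(affiliation_type([("MPI", "DE")]))                      # single
        print(affiliation_type([("PKU", "CN"), ("CAS", "CN")]))       # national
        print(affiliation_type([("MIT", "US"), ("Oxford", "GB")]))    # international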

    Updated: 2020-01-22
  • Measuring Diversity of Artificial Intelligence Conferences
    arXiv.cs.DL Pub Date : 2020-01-20
    Ana Freire; Lorenzo Porcaro; Emilia Gómez

    The lack of diversity in the Artificial Intelligence (AI) field is nowadays a concern, and several initiatives, such as funding schemes and mentoring programs, have been designed to fight it. However, there is no indication of how these initiatives actually impact AI diversity in the short and long term. This work studies the concept of diversity in this particular context and proposes a small set of diversity indicators (i.e., indexes) for AI scientific events. These indicators are designed to quantify the lack of diversity in the AI field and to monitor its evolution. We consider diversity in terms of gender, geographical location and business (understood as the presence of academia versus industry). We compute these indicators for the different communities of a conference: authors, keynote speakers and the organizing committee. From these components we compute a summarized diversity indicator for each AI event. We evaluate the proposed indexes for a set of recent major AI conferences and discuss their values and limitations.
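
    The abstract does not give the index formulas; one standard choice for turning category counts into a [0, 1] indicator is normalized Shannon entropy, sketched below with hypothetical community data. The paper's actual indexes may be defined differently.

        from collections import Counter
        from math import log

        def diversity_index(labels):
            # Normalized Shannon entropy: 0 = homogeneous, 1 = perfectly balanced.
            counts = Counter(labels)
            n, k = sum(counts.values()), len(counts)
            if k <= 1:
                return 0.0
            h = -sum((c / n) * log(c / n) for c in counts.values())
            return h / log(k)

        keynotes = ["m", "m", "m", "f"]                 # gender of keynote speakers
        authors = ["EU", "NA", "AS", "NA", "NA", "EU"]  # regions of authors
        # Summarized indicator: average over the conference communities considered.
        print(sum(map(diversity_index, [keynotes, authors])) / 2)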

    Updated: 2020-01-22
  • The evolution of knowledge within and across fields in modern physics
    arXiv.cs.DL Pub Date : 2020-01-20
    Ye Sun; Vito Latora

    The exchange of knowledge across different areas and disciplines plays a key role in the process of knowledge creation, and can stimulate innovation and the emergence of new fields. We develop here a quantitative framework to extract significant dependencies among scientific disciplines and turn them into a time-varying network whose nodes are the different fields, while the weighted links represent the flow of knowledge from one field to another in a given period of time. Drawing on a comprehensive data set on scientific production in modern physics and on the patterns of citations between articles published in the various fields in the last thirty years, we are able to map, over time, how the ideas developed in a given field in a certain time period have influenced later discoveries in the same field or in other fields. The analysis of knowledge flows internal to each field displays a remarkable variety of temporal behaviours, with some fields of physics proving to be more self-referential than others. The temporal networks of knowledge exchange across fields reveal cases of one field continuously absorbing knowledge from another over the entire observed period, pairs of fields mutually influencing each other, and also cases of evolution from absorbing to mutual or even to back-nurturing behaviors.
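
    The network construction can be sketched as a counting step: aggregate citations from papers of field i published in year t to earlier papers of field j into weighted, time-stamped edges. Normalization and significance filtering, which the paper applies, are omitted here.

        from collections import defaultdict

        def knowledge_flows(citations, field, year):
            # citations: (citing_id, cited_id) pairs; field/year: paper id -> label.
            # Returns {(citing_year, citing_field, cited_field): citation count}.
            flows = defaultdict(int)
            for src, dst in citations:
                if year[dst] <= year[src]:
                    flows[(year[src], field[src], field[dst])] += 1
            return dict(flows)

        field = {1: "cond-mat", 2: "quant-ph", 3: "cond-mat"}
        year = {1: 1995, 2: 2000, 3: 2005}
        print(knowledge_flows([(3, 1), (3, 2), (2, 1)], field, year))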

    Updated: 2020-01-22
  • The stability of Twitter metrics: A study on unavailable Twitter mentions of scientific publications
    arXiv.cs.DL Pub Date : 2020-01-21
    Zhichao Fang; Jonathan Dudek; Rodrigo Costas

    This paper investigates the stability of Twitter counts of scientific publications over time. For this, we conducted an analysis of the availability statuses of over 2.6 million Twitter mentions received by the 1,154 most tweeted scientific publications recorded by Altmetric.com up to October 2017. Results show that of the Twitter mentions for these highly tweeted publications, about 14.3% have become unavailable by April 2019. Deletion of tweets by users is the main reason for unavailability, followed by suspension and protection of Twitter user accounts. This study proposes two measures for describing the Twitter dissemination structures of publications: Degree of Originality (i.e., the proportion of original tweets received by a paper) and Degree of Concentration (i.e., the degree to which retweets concentrate on a single original tweet). Twitter metrics of publications with relatively low Degree of Originality and relatively high Degree of Concentration are observed to be at greater risk of becoming unstable due to the potential disappearance of their Twitter mentions. In light of these results, we emphasize the importance of paying attention to the potential risk of unstable Twitter counts, and the significance of identifying the different Twitter dissemination structures when studying the Twitter metrics of scientific publications.
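
    Both proposed measures follow directly from their definitions in the abstract; the sketch below computes them from a list of mentions, with the exact operationalization (e.g., how quote tweets are counted) left as an assumption.

        def twitter_structure(mentions):
            # mentions: list of ("original", tweet_id) or ("retweet", original_id).
            originals = [m for kind, m in mentions if kind == "original"]
            retweets = [m for kind, m in mentions if kind == "retweet"]
            # Degree of Originality: proportion of original tweets among all mentions.
            originality = len(originals) / len(mentions)
            # Degree of Concentration: share of retweets pointing at the single
            # most-retweeted original tweet.
            if retweets:
                concentration = max(retweets.count(o) for o in set(retweets)) / len(retweets)
            else:
                concentration = 0.0
            return originality, concentration

        mentions = [("original", 1), ("original", 2),
                    ("retweet", 1), ("retweet", 1), ("retweet", 1), ("retweet", 2)]
        # Low originality plus high concentration flags fragile Twitter counts.
        print(twitter_structure(mentions))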

    Updated: 2020-01-22
Contents have been reproduced by permission of the publishers.