• arXiv.cs.LO Pub Date : 2020-01-24
Rick Erkens; Jurriaan Rot; Bas Luttik

Ever since the introduction of behavioral equivalences on processes, researchers have been searching for efficient proof techniques to accompany those equivalences. Both strong bisimilarity and weak bisimilarity are accompanied by an arsenal of up-to techniques: enhancements of their proof methods. For branching bisimilarity, such results had not yet been established. We show that a powerful proof technique is sound for branching bisimilarity by combining the three techniques of up to union, up to expansion, and up to context for Bloom's BB cool format. We then make an initial proposal for casting the correctness proof of the up-to-context technique in an abstract coalgebraic setting, covering not only branching but also $\eta$, delay, and weak bisimilarity.

Updated: 2020-01-27
• arXiv.cs.LO Pub Date : 2020-01-24
Murat Cubuktepe; Zhe Xu; Ufuk Topcu

We study the synthesis of policies for multi-agent systems to implement spatial-temporal tasks. We formalize the problem as a factored Markov decision process subject to so-called graph temporal logic specifications. The transition function and the spatial-temporal task of each agent depend on the agent itself and its neighboring agents. The structure in the model and the specifications enables us to develop a distributed algorithm that, given a factored Markov decision process and a graph temporal logic formula, decomposes the synthesis problem into a set of smaller synthesis problems, one for each agent. We prove that the algorithm runs in time linear in the total number of agents. The size of the synthesis problem for each agent is exponential only in the number of neighboring agents, which is typically much smaller than the number of agents. We demonstrate the algorithm in case studies on disease control and urban security. The numerical examples show that the algorithm can scale to hundreds of agents.

Updated: 2020-01-27
• arXiv.cs.LO Pub Date : 2018-06-14
Thorsten Wißmann; Ulrich Dorsch; Stefan Milius; Lutz Schröder

We present a generic partition refinement algorithm that quotients coalgebraic systems by behavioural equivalence, an important task in system analysis and verification. Coalgebraic generality allows us to cover not only classical relational systems but also, e.g., various forms of weighted systems, and furthermore to flexibly combine existing system types. Under assumptions on the type functor that allow representing its finite coalgebras in terms of nodes and edges, our algorithm runs in time $\mathcal{O}(m\cdot \log n)$ where $n$ and $m$ are the numbers of nodes and edges, respectively. The generic complexity result and the possibility of combining system types yield a toolbox for efficient partition refinement algorithms. Instances of our generic algorithm match the run-time of the best known algorithms for unlabelled transition systems, Markov chains, deterministic automata (with fixed alphabets), Segala systems, and for color refinement.
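The core idea of partition refinement can be pictured in a few lines. Below is a deliberately naive, quadratic sketch for plain unlabelled transition systems (the paper's coalgebraic algorithm is far more general and achieves the $\mathcal{O}(m\cdot \log n)$ bound); the function name and data representation are illustrative, not the authors' API:

```python
def partition_refine(states, trans):
    # Naive partition refinement: start with one block containing all states
    # and repeatedly split blocks whose states reach different sets of blocks,
    # until the partition is stable (i.e., is the bisimilarity quotient).
    partition = [set(states)]
    changed = True
    while changed:
        changed = False
        block_of = {s: i for i, b in enumerate(partition) for s in b}
        new_partition = []
        for block in partition:
            groups = {}
            for s in block:
                # signature: which blocks this state's successors fall into
                sig = frozenset(block_of[t] for t in trans.get(s, ()))
                groups.setdefault(sig, set()).add(s)
            if len(groups) > 1:
                changed = True
            new_partition.extend(groups.values())
        partition = new_partition
    return partition
```

For instance, with transitions a -> b, b -> a, and c a deadlock state, the stable partition separates {a, b} from {c}.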

Updated: 2020-01-27
• arXiv.cs.LO Pub Date : 2019-03-22
Robert Furber; Radu Mardare; Matteo Mio

We introduce Riesz modal logic, a novel real-valued endogenous logic for expressing properties of probabilistic transition systems. The design of the syntax and semantics of this logic is directly inspired by the theory of Riesz spaces, a mature field of mathematics at the intersection of universal algebra and functional analysis. By using powerful results from this theory, we develop the duality theory of the Riesz modal logic in the form of an algebra-to-coalgebra correspondence. This has a number of consequences, including a sound and complete axiomatization, a proof that the logic characterizes probabilistic bisimulation, and other convenient results such as completion theorems. This work is intended to be the basis for subsequent research on extensions of Riesz modal logic with fixed-point operators.

Updated: 2020-01-27
• arXiv.cs.LO Pub Date : 2019-11-11
Bill Stoddart; Frank Zeyda; Steve Dunne

In his book "A practical theory of programming" Eric Hehner proposes and applies a remarkably radical reformulation of set theory, in which the collection and packaging of elements are seen as separate activities. This provides for unpackaged collections, referred to as "bunches". Bunches allow us to reason about non-determinism at the level of terms and, very remarkably, allow us to reason about the conceptual entity "nothing", which is just an empty bunch (and very different from an empty set). This eliminates mathematical "gaps" caused by undefined terms. We compare the use of bunches with other approaches to this problem, and we illustrate the use of bunch theory in formulating a program semantics that combines non-deterministic, preferential, and probabilistic choice. We show how an existing axiomatisation of set theory can be extended to incorporate bunches, and we provide and validate a model. Standard functions are lifted when applied to a bunch of values, but we also define a wholistic function application which allows whole bunches to be accepted as arguments, and we develop its associated fixed point theory.
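A rough way to experiment with the lifted application of standard functions is to model a bunch as a plain set of values. The toy sketch below is our own illustration, not Hehner's or the authors' formalism; it shows both pointwise lifting and how the empty bunch ("nothing") propagates through an application:

```python
from itertools import product

def lift(f):
    # Lift an ordinary function to bunches (modelled here as frozensets):
    # apply f pointwise to every combination of elements. Applying a lifted
    # function to the empty bunch ("nothing") yields nothing again.
    def lifted(*bunches):
        return frozenset(f(*args) for args in product(*bunches))
    return lifted

add = lift(lambda x, y: x + y)
```

Here `add({1, 2}, {10})` denotes the bunch {11, 12}, while `add({}, {1})` is the empty bunch, mirroring how undefinedness propagates as "nothing" rather than as a type error.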

Updated: 2020-01-27
• arXiv.cs.LO Pub Date : 2019-12-05
Sergey Goncharov; Sergey Ospichev; Denis Ponomaryov; Dmitri Sviridenko

We consider the language of $\Delta_0$-formulas with list terms interpreted over hereditarily finite list superstructures. We study the complexity of reasoning in extensions of the language of $\Delta_0$-formulas with non-standard list terms, which represent bounded list search, bounded iteration, and bounded recursion. We prove a number of results on the complexity of model checking and satisfiability for these formulas. In particular, we show that the set of $\Delta_0$-formulas with bounded recursive terms true in a given list superstructure $HW(\mathcal{M})$ is non-elementary (it contains the class kEXPTIME, for all $k\geqslant 1$). For $\Delta_0$-formulas with restrictions on the usage of iterative and recursive terms, we show that the complexity is lower.

Updated: 2020-01-27
• arXiv.cs.LO Pub Date : 2019-06-24
Jeremy G. Siek

The subtyping relation for intersection type systems traditionally employs a transitivity rule (Barendregt et al. 1983), which means that the subtyping judgment does not enjoy the subformula property. Laurent develops a sequent-style subtyping judgment, without transitivity, and proves transitivity via a sequence of six lemmas that culminate in cut-elimination (2018). This article presents a subtyping judgment, in regular style, that satisfies the subformula property, and presents a direct proof of transitivity. Borrowing from Laurent's system, the rule for function types is essentially the $\beta$-soundness property. The main lemma required for the transitivity proof is one that has been used to prove the inversion principle for subtyping of function types. The choice of induction principle for the proof of transitivity is subtle: we use well-founded induction on the lexicographical ordering of the sum of the depths of the first and last type followed by the sum of the sizes of the middle and last type. The article concludes with a proof that the new subtyping judgment is equivalent to that of Barendregt, Coppo, and Dezani-Ciancaglini.

Updated: 2020-01-27
• arXiv.cs.GT Pub Date : 2020-01-23
Sneihil Gopal; Sanjit K. Kaul; Rakesh Chaturvedi; Sumit Roy

We consider a network of selfish nodes that would like to minimize the age of their updates at the other nodes. The nodes send their updates over a shared spectrum using a CSMA/CA based access mechanism. We model the resulting competition as a non-cooperative one-shot multiple access game and investigate equilibrium strategies for two distinct medium access settings (a) collisions are shorter than successful transmissions and (b) collisions are longer. We investigate competition in a CSMA/CA slot, where a node may choose to transmit or stay idle. We find that medium access settings exert strong incentive effects on the nodes. We show that when collisions are shorter, transmit is a weakly dominant strategy. This leads to all nodes transmitting in the CSMA/CA slot, therefore guaranteeing a collision. In contrast, when collisions are longer, no weakly dominant strategy exists and under certain conditions on the ages at the beginning of the slot, we derive the mixed strategy Nash equilibrium.

Updated: 2020-01-27
• arXiv.cs.GT Pub Date : 2017-05-11
Erel Segal-Halevi

Competitive equilibrium (CE) is a fundamental concept in market economics. Its efficiency and fairness properties make it particularly appealing as a rule for fair allocation of resources among agents with possibly different entitlements. However, when the resources are indivisible, a CE might not exist even when there is one resource and two agents with equal incomes. Recently, Babaioff, Nisan, and Talgam-Cohen (2017) suggested considering the entire space of possible incomes and checking whether there exists a competitive equilibrium for almost all income-vectors, i.e., all of income-space except a subset of measure zero. They proved various existence and non-existence results, but left open the cases of four goods and three or four agents with monotonically-increasing preferences. This paper proves non-existence in both these cases, thus completing the characterization of CE existence for almost all incomes in the domain of monotonically increasing preferences. Additionally, the paper provides a complete characterization of CE existence in the domain of monotonically decreasing preferences, corresponding to allocation of chores. On the positive side, the paper proves that CE exists for almost all incomes when there are four goods and three agents with additive preferences. The proof uses a new tool for describing a CE, as a subgame-perfect equilibrium of a specific sequential game. The same tool also enables substantially simpler proofs for the cases already proved by Babaioff et al. Additionally, this paper proves several strong fairness properties that are satisfied by any CE allocation, illustrating its usefulness for fair allocation among agents with different entitlements.

Updated: 2020-01-27
• arXiv.cs.GT Pub Date : 2018-03-16
Philipp Geiger; Michel Besserve; Justus Winkelmann; Claudius Proissl; Bernhard Schölkopf

We study data-driven assistants that provide congestion forecasts to users of shared facilities (roads, cafeterias, etc.), to support coordination between them, and increase efficiency of such collective systems. Key questions are: (1) when and how much can (accurate) predictions help for coordination, and (2) which assistant algorithms reach optimal predictions? First we lay conceptual ground for this setting where user preferences are a priori unknown and predictions influence outcomes. Addressing (1), we establish conditions under which self-fulfilling prophecies, i.e., "perfect" (probabilistic) predictions of what will happen, solve the coordination problem in the game-theoretic sense of selecting a Bayesian Nash equilibrium (BNE). Next we prove that such prophecies exist even in large-scale settings where only aggregated statistics about users are available. This entails a new (nonatomic) BNE existence result. Addressing (2), we propose two assistant algorithms that sequentially learn from users' reactions, together with optimality/convergence guarantees. We validate one of them in a large real-world experiment.

Updated: 2020-01-27
• arXiv.cs.GT Pub Date : 2019-03-01
Sailik Sengupta; Zahra Zahedi; Subbarao Kambhampati

In scenarios where a robot generates and executes a plan, there may be instances where this generated plan is less costly for the robot to execute but incomprehensible to the human. When the human acts as a supervisor and is held accountable for the robot's plan, the human may be at a higher risk if the incomprehensible behavior is deemed to be infeasible or unsafe. In such cases, the robot, which may be unaware of the human's exact expectations, may choose either to (1) execute the most constrained plan (i.e., one preferred by all possible supervisors), incurring the added cost of executing highly sub-optimal behavior when the human is monitoring it, or (2) deviate to a more optimal plan when the human looks away. While robots do not have human-like ulterior motives (such as being lazy), such behavior may occur because the robot has to cater to the needs of different human supervisors. In such settings, the robot, being a rational agent, should take any chance it gets to deviate to a lower cost plan. On the other hand, continuous monitoring of the robot's behavior is often difficult for humans because it costs them valuable resources (e.g., time, cognitive overload, etc.). Thus, to optimize the cost of monitoring while ensuring the robot follows safe behavior, we model this problem in the game-theoretic framework of trust. In settings where the human does not initially trust the robot, a pure-strategy Nash equilibrium provides a useful policy for the human.

Updated: 2020-01-27
• arXiv.cs.ET Pub Date : 2020-01-23
Johann Knechtel

As with most aspects of electronic systems and integrated circuits, hardware security has traditionally evolved around the dominant CMOS technology. However, with the rise of various emerging technologies, whose main purpose is to overcome the fundamental limitations for scaling and power consumption of CMOS technology, unique opportunities arise also to advance the notion of hardware security. In this paper, I first provide an overview on hardware security in general. Next, I review selected emerging technologies, namely (i) spintronics, (ii) memristors, (iii) carbon nanotubes and related transistors, (iv) nanowires and related transistors, and (v) 3D and 2.5D integration. I then discuss their application to advance hardware security and also outline related challenges.

Updated: 2020-01-27
• arXiv.cs.ET Pub Date : 2020-01-14

We propose a lightweight scheme where the formation of a data block is changed in such a way that it can tolerate soft errors significantly better than the baseline. The key insight behind our work is that CNN weights are normalized between -1 and 1 after each convolutional layer, and this leaves one bit unused in half-precision floating-point representation. By taking advantage of the unused bit, we create a backup for the most significant bit to protect it against the soft errors. Also, considering the fact that in MLC STT-RAMs the cost of memory operations (read and write), and reliability of a cell are content-dependent (some patterns take larger current and longer time, while they are more susceptible to soft error), we rearrange the data block to minimize the number of costly bit patterns. Combining these two techniques provides the same level of accuracy compared to an error-free baseline while improving the read and write energy by 9% and 6%, respectively.
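The unused-bit trick can be made concrete: a weight in [-1, 1] has magnitude below 2, so the most significant exponent bit of its IEEE half-precision encoding is always 0, and that bit can carry a backup copy of the sign bit. The sketch below is our illustrative reading of the idea, not necessarily the paper's exact encoding:

```python
import struct

def fp16_bits(x):
    # raw 16-bit pattern of a half-precision float
    return struct.unpack('<H', struct.pack('<e', x))[0]

def bits_fp16(b):
    # half-precision float from a raw 16-bit pattern
    return struct.unpack('<e', struct.pack('<H', b))[0]

def protect(x):
    # For |x| < 2 the exponent MSB (bit 14) is 0; store a copy of the
    # sign bit (bit 15) there as a backup against soft errors.
    b = fp16_bits(x)
    return b | (((b >> 15) & 1) << 14)

def recover(b):
    # Restore the sign bit from its backup and clear the backup bit.
    sign = (b >> 14) & 1
    return bits_fp16((b & 0x3FFF) | (sign << 15))
```

A single soft error flipping the sign bit of a protected word is then repairable, since the backup copy in bit 14 survives.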

Updated: 2020-01-27
• arXiv.cs.ET Pub Date : 2020-01-24
Beixiong Zheng; Qingqing Wu; Rui Zhang

The integration of intelligent reflecting surfaces (IRS) into multiple access networks is a cost-effective solution for boosting spectrum/energy efficiency and enlarging network coverage/connections. However, due to the new capability of the IRS to reconfigure the wireless propagation channels, it is fundamentally unknown which multiple access scheme is superior in the IRS-assisted wireless network. In this letter, we pursue a theoretical performance comparison between non-orthogonal multiple access (NOMA) and orthogonal multiple access (OMA) in the IRS-assisted downlink communication, for which the transmit power minimization problems are formulated under the discrete unit-modulus reflection constraint on each IRS element. We analyze the minimum transmit powers required by different multiple access schemes and compare them numerically; the results turn out not to fully conform to the conventional superiority of NOMA over OMA in systems without IRS. Moreover, to avoid the exponential complexity of the brute-force search for the optimal discrete IRS phase shifts, we propose a low-complexity solution that achieves near-optimal performance.

Updated: 2020-01-27
• arXiv.cs.ET Pub Date : 2020-01-24
Mushegh Rafayelyan; Jonathan Dong; Yongqi Tan; Florent Krzakala; Sylvain Gigan

Reservoir computing is a relatively recent computational paradigm that originates from recurrent neural networks and is known for its wide range of implementations across different physical technologies. Large reservoirs are very hard to obtain in conventional computers, as both the computational complexity and the memory usage grow quadratically. We propose an optical scheme performing reservoir computing over very large networks of up to $10^6$ fully connected photonic nodes, thanks to its intrinsic parallelism. Our experimental studies confirm that, in contrast to conventional computers, the computation time of our optical scheme depends only linearly on the number of photonic nodes; this residual dependence is due to electronic overheads, while the optical part of the computation remains fully parallel and independent of the reservoir size. To demonstrate the scalability of our optical scheme, we perform the first predictions on large multidimensional chaotic datasets, using the Kuramoto-Sivashinsky equation as an example of a spatiotemporal chaotic system. These tasks are extremely challenging for conventional Turing/von Neumann machines, and our results significantly advance the state of the art of unconventional reservoir computing approaches in general.

Updated: 2020-01-27
• arXiv.cs.CE Pub Date : 2020-01-23
Zenan Huo; Gang Mei; Nengxiong Xu

The Smoothed Finite Element Method (S-FEM) proposed by Liu G.R. can achieve more accurate results than the conventional FEM. Currently, much commercial software and many open-source packages have been developed to analyze various science and engineering problems using the FEM. However, there is little work focusing on designing and developing software or packages for the S-FEM. In this paper, we design and implement an open-source package of the parallel S-FEM for elastic problems, utilizing the Julia language on a multi-core CPU. The Julia language is a fast, easy-to-use, and open-source programming language that was originally designed for high-performance computing. We term our package juSFEM. To the best of the authors' knowledge, juSFEM is the first package of parallel S-FEM developed in the Julia language. To verify the correctness and evaluate the efficiency of juSFEM, two groups of benchmark tests are conducted. The benchmark results show that (1) juSFEM achieves accurate results compared to the commercial FEM software ABAQUS; (2) juSFEM requires only 543 seconds to calculate the displacements of a 3D elastic cantilever beam model composed of approximately 2 million tetrahedral elements, whereas the commercial FEM software needs 930 seconds for the same model; and (3) the parallel juSFEM executed on a 24-core CPU is approximately 20x faster than the corresponding serial version. Moreover, the structure and functions of juSFEM are easily modularized, and the code is clear and readable, which is convenient for further development.

Updated: 2020-01-27
• arXiv.cs.CE Pub Date : 2020-01-24
Dimitri Pinel

This paper investigates the use of clustering in the context of designing the energy system of Zero Emission Neighborhoods (ZENs). ZENs are neighborhoods that aim to have net zero emissions during their lifetime. While previous work has used and studied clustering for designing the energy systems of neighborhoods, no article has dealt with neighborhoods such as ZENs, which have high requirements for the solar irradiance time series, include a CO2 factor time series, and have a zero-emission balance limiting the possibilities. To this end, several methods are used and their results compared: on the one hand, the performance of the clustering itself, and on the other hand, the performance of each method in the optimization model where the data is used. Various aspects of the clustering methods are tested: the goal (clustering to obtain days or hours), the algorithm (k-means or k-medoids), the normalization method (based on the standard deviation or the range of values), and the use of heuristics. The results highlight that k-means offers better results than k-medoids, and that k-means systematically underestimated the objective value while k-medoids consistently overestimated it. When the choice between clustering days and hours is possible, clustering days offers the best precision and solving time; the choice depends on the formulation used for the optimization model and the need to model seasonal storage. The choice of normalization method has the least impact, but the range-of-values method shows some advantages in terms of solving time. When a good representation of the solar irradiance time series is needed, a higher number of days, or using hours, is necessary; the choice depends on what solving time is acceptable.
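As a minimal illustration of the day-clustering step, here is plain Lloyd's k-means over daily profiles (e.g. 24 hourly values per day). This is a generic sketch under our own assumptions, not the paper's implementation or data:

```python
import random

def kmeans(days, k, iters=50, seed=0):
    # days: list of equal-length daily profiles; returns k representative
    # (centroid) days via Lloyd's algorithm with random initial centers.
    rng = random.Random(seed)
    centers = rng.sample(days, k)
    for _ in range(iters):
        # assignment step: each day joins its nearest center
        clusters = [[] for _ in range(k)]
        for d in days:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(d, centers[c])))
            clusters[i].append(d)
        # update step: each center becomes the mean of its cluster
        centers = [[sum(vals) / len(cl) for vals in zip(*cl)]
                   if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers
```

A k-medoids variant would instead pick the actual day closest to each mean, which is what makes the representative days real (rather than averaged) profiles.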

Updated: 2020-01-27
• arXiv.cs.CE Pub Date : 2020-01-23
Yue Guan; Rade Grujicic; Xuechuan Wang; Leiting Dong; Satya N. Atluri

A new and effective computational approach is presented for analyzing transient heat conduction problems. The approach consists of a meshless Fragile Points Method (FPM) for spatial discretization and a Local Variational Iteration Method (LVIM) for time discretization. Anisotropy and nonhomogeneity do not give rise to any difficulties in the present implementation. The meshless FPM is based on a Galerkin weak-form formulation and thus leads to symmetric matrices. Local, very simple, polynomial and discontinuous trial and test functions are employed. In the meshless FPM, Interior Penalty Numerical Fluxes are introduced to ensure the consistency of the method. The LVIM in the time domain is generated as a combination of the Variational Iteration Method (VIM), applied over a large time interval, and numerical algorithms. A set of collocation nodes is employed in each finitely large time interval. The FPM + LVIM approach is capable of solving transient heat transfer problems in complex geometries with mixed boundary conditions, including pre-existing cracks. Numerical examples are presented in 2D and 3D domains. Both functionally graded materials and composite materials are considered. It is shown that, with suitable computational parameters, the FPM + LVIM approach is not only accurate but also efficient, and has reliable stability under relatively large time intervals. The present methodology represents a considerable improvement over the current state of the art in computational transient heat conduction in anisotropic nonhomogeneous media.

Updated: 2020-01-27
• arXiv.cs.CE Pub Date : 2020-01-20
Seiji Takeda; Toshiyuki Hama; Hsiang-Han Hsu; Toshiyuki Yamane; Koji Masuda; Victoria A. Piunova; Dmitry Zubarev; Jed Pitera; Daniel P. Sanders; Daiju Nakano

Designing novel materials that possess desired properties is a central need across many manufacturing industries. Driven by that industrial need, a variety of algorithms and tools have been developed that combine AI (machine learning and analytics) with domain knowledge in physics, chemistry, and materials science. AI-driven materials design can be divided into two main stages: the first is the modeling stage, where the goal is to build an accurate regression or classification model to predict material properties (e.g. glass transition temperature) or attributes (e.g. toxic/non-toxic). The next stage is design, where the goal is to assemble or tune material structures so that they achieve user-demanded target property values based on a prediction model trained in the modeling stage. For maximum benefit, these two stages should be architected to form a coherent workflow. Today there are several emerging services and tools for AI-driven material design; however, most of them provide only partial technical components (e.g. data analyzers, regression models, structure generators) that are useful for specific purposes, while for comprehensive material design those components need to be orchestrated appropriately. Our material design system provides an end-to-end solution to this problem, with a workflow that consists of data input, feature encoding, prediction modeling, solution search, and structure generation. The system builds a regression model to predict properties, solves an inverse problem on the trained model, and generates novel chemical structure candidates that satisfy the target properties. In this paper we introduce the methodology of our system and demonstrate a simple example of inverse design, generating new chemical structures that satisfy targeted physical property values.

Updated: 2020-01-27
• arXiv.cs.CC Pub Date : 2012-05-30
Michel Feldmann

We demonstrate that any logical problem can be solved by Bayesian inference. In this approach, the distinction between complexity classes vanishes. The method is illustrated by solving the 3-SAT problem in polynomial time. Beyond this, Bayesian inference could be the background of artificial neural network theory.

Updated: 2020-01-27
• arXiv.cs.CC Pub Date : 2019-08-07
Saulo Queiroz; Wesley Silva; João P. Vilela; Edmundo Monteiro

In this letter, we demonstrate a mapper that enables all waveforms of OFDM with Index Modulation (OFDM-IM) while preserving polynomial time and space computational complexities. Enabling all OFDM-IM waveforms maximizes the spectral efficiency (SE) gain over classic OFDM but, as far as we know, the computational overhead of the resulting mapper has been conjectured to be prohibitive across the OFDM-IM literature. We show that the largest number of binomial coefficient calculations performed by the original OFDM-IM mapper is polynomial in the number of subcarriers, even under the setup that maximizes the SE gain over OFDM. Also, these coefficients match the entries of the so-called Pascal's triangle (PT). Thus, by assisting the OFDM-IM mapper with a PT table, we show that the maximum SE gain over OFDM can be achieved in polynomial (rather than exponential) time and space.
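The table-assisted mapping described here is, in essence, the combinatorial number system: an incoming index selects one k-subset of the n subcarriers using only table lookups into Pascal's triangle. The following sketch uses our own index and subcarrier conventions, which need not match the letter's exact mapper:

```python
def pascal(n):
    # Precompute Pascal's triangle: C[i][j] = binomial(i, j)
    C = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        C[i][0] = 1
        for j in range(1, i + 1):
            C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C

def index_to_subcarriers(idx, n, k, C):
    # Combinatorial number system: decode idx (0 <= idx < C(n, k))
    # into a unique k-subset of subcarriers {0, ..., n-1}, using
    # only precomputed table entries (no on-the-fly binomials).
    active = []
    for i in range(n - 1, -1, -1):
        if k == 0:
            break
        if idx >= C[i][k]:
            idx -= C[i][k]
            active.append(i)
            k -= 1
    return active
```

With n = 4 subcarriers and k = 2 active ones, the 6 indices 0..5 enumerate all 6 distinct activation patterns, so no waveform is excluded.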

Updated: 2020-01-27
• arXiv.cs.CC Pub Date : 2019-10-26
Duncan Adamson; Argyrios Deligkas; Vladimir Gusev; Igor Potapov

Crystal Structure Prediction (csp) is one of the central and most challenging problems in materials science and computational chemistry. In csp, the goal is to find a configuration of ions in 3D space that yields the lowest potential energy. Finding an efficient procedure to solve this complex optimisation question is a well-known open problem in computational chemistry. Due to the exponentially large search space, the problem has been referred to in several materials-science papers as "NP-Hard and very challenging", though without any formal proof. This paper fills a gap in the literature by providing the first set of formally proven NP-Hardness results for a variant of csp with various realistic constraints. In particular, we focus on the removal problem: the goal is to find a substructure with minimal potential energy by removing a subset of the ions from a given initial structure. Our main contributions are NP-Hardness results for the csp removal problem, new embeddings of combinatorial graph problems into geometrical settings, and a more systematic exploration of the energy function to reveal the complexity of csp. In a wider context, our results contribute to the analysis of computational problems for weighted graphs embedded into three-dimensional Euclidean space.

Updated: 2020-01-27
• J. Supercomput. (IF 2.157) Pub Date : 2020-01-25

An online auction network (OAN) is a community of users who buy or sell items through an auction site. Along with the growing popularity of auction sites, concerns about auction fraud and criminal activity have increased. As a result, fraud detection in OANs has attracted renewed interest from researchers. Since most real OANs are large-scale networks, detecting fraudulent users is usually difficult, especially when multiple users collude with each other and new online auctions are continuously added. Although collusive auction frauds are not as common as other types of auction fraud, they are more harmful and catastrophic because they often cause huge financial losses. To tackle this issue, some techniques have been proposed to detect collusive frauds in OANs. While these techniques have demonstrated promising results, they often suffer from low detection performance or slow convergence, especially in large-scale OANs. In this paper, we overcome these deficiencies by presenting ICAFD, a novel technique that recasts the problem of detecting collusive frauds in large-scale OANs as an incremental semi-supervised anomaly detection problem. In this technique, we propagate reputations from a small set of labeled benign users to unlabeled users along the auction relationships between them, and then incrementally update reputations when a new auction is added to the OAN. This speeds up the convergence of ICAFD and allows it to avoid wasteful recalculation of reputations from scratch. Our experimental results show that ICAFD can successfully detect different types of collusive auction fraud in a reasonable detection time.
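The propagation step can be pictured as iterative averaging over the auction graph, seeded by the labeled benign users. The sketch below is a generic semi-supervised reputation propagation, not ICAFD itself; the damping factor and round count are hypothetical parameters:

```python
def propagate(edges, seeds, rounds=30, alpha=0.8):
    # edges: dict user -> list of auction partners; seeds: users known benign.
    # Each round, an unlabeled user's reputation becomes a damped average of
    # its neighbours' reputations; seed users stay clamped at 1.0, so trust
    # flows outward from the labeled set along auction relationships.
    rep = {u: (1.0 if u in seeds else 0.0) for u in edges}
    for _ in range(rounds):
        new = {}
        for u, nbrs in edges.items():
            if u in seeds:
                new[u] = 1.0
            elif nbrs:
                new[u] = alpha * sum(rep[v] for v in nbrs) / len(nbrs)
            else:
                new[u] = 0.0
        rep = new
    return rep
```

Users in components unreachable from any seed keep reputation 0, which is precisely what flags tightly knit collusive rings that trade only among themselves.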

Updated: 2020-01-26
• J. Supercomput. (IF 2.157) Pub Date : 2020-01-24
Parth Bir; Shylaja Vinaykumar Karatangi; Amrita Rai

Server models used in industry are required to be increasingly elastic: they constantly scale up and scale down the computation power they use depending on conditions. These elastic cloud platforms use accelerators such as DSPs, TPUs, GPUs, FPGAs, and multi-core processors to provide exponential computing power and outsource their services. This process is not only costly and inefficient but also damages the server's hardware architecture. Furthermore, these additions degrade the level of threading and the symmetric parallel processing capability of the architecture. Intel uses hyper-threading technology (HTT) to split the workload between hardware and operating system so as to avoid such additions, but that too is only possible up to a certain limit. This paper presents the design methodology and implementation of an elastic 32-bit RISC pipelined processor, inspired by Intel Xeon and MIPS, to function as a standard integrated platform for server models. It implements the concepts of hyper-threading technology (HTT) and virtualization in hardware, allowing multiple outputs to be derived from hardware units to enhance security and performance without compromising compatibility. The designed elastic core uses a probabilistic node-based closed-queuing network model for server analysis and implementation. Hence, elastic behavior from the individual core microarchitecture up to the server model architecture enables a generic, automated-scaling, self-aware optimization architecture.

Updated: 2020-01-26
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2020-01-25
Daniel Kernberger; Martin Lange

Hybrid branching-time logics are a powerful extension of branching-time logics like CTL, CTL* or even the modal μ-calculus through the addition of binders, jumps and variable tests. Their expressiveness is no longer restricted by bisimulation-invariance. Hence, they do not retain the tree model property, and the finite model property is equally lost. Their satisfiability problems are typically undecidable; their model checking problems (on finite models) are decidable, with complexities ranging from polynomial to non-elementary time. In this paper we study the expressive power of such hybrid branching-time logics. We extend the hierarchy of branching-time logics CTL, CTL+, CTL* and the modal μ-calculus to their hybrid extensions. We show that most separation results can be transferred to the hybrid world, even though the required techniques become more involved. We also present collapse results for linear, tree-shaped and finite models.

Updated: 2020-01-26
• J. Supercomput. (IF 2.157) Pub Date : 2020-01-24
Andrés E. Tomás, Enrique S. Quintana-Ortí

Abstract We present a novel method for the QR factorization of large tall-and-skinny matrices that introduces an approximation technique for computing the Householder vectors. This approach is very competitive on a hybrid platform equipped with a graphics processor, with a performance advantage over the conventional factorization due to the reduced amount of data transfers between the graphics accelerator and the main memory of the host. Our experiments show that, for tall-and-skinny matrices, the new approach outperforms the code in MAGMA by a large margin, while it is very competitive for square matrices when the memory transfers and CPU computations are the bottleneck of the Householder QR factorization.

Updated: 2020-01-24
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2019-10-08
Prithwineel Paul; Kumar Sankar Ray

In this paper, we associate the idea of derivation languages with a restricted variant of flat splicing systems in which, at each step, splicing is done with an element from the initial set of words present in the system. We call such systems restricted flat splicing systems. We show that the families of Szilard languages of labeled restricted flat finite splicing systems of type (m,n) are incomparable with REG, CF and CS. Moreover, any non-empty regular, non-empty context-free, and recursively enumerable language can be obtained as a homomorphic image of the Szilard language of a labeled restricted flat finite splicing system of type (1,2), (2,2) and (5,2), respectively. We also introduce the idea of control languages for labeled restricted flat finite splicing systems and show that any non-empty regular and context-free language can be obtained as a control language of labeled restricted flat finite splicing systems of type (1,2) and (2,2), respectively. Finally, we show that any recursively enumerable language can be obtained as a control language of labeled restricted flat finite splicing systems of type (5,2) when λ-labeled rules are allowed.

Updated: 2020-01-24
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2019-10-04
Linqiang Pan; David Orellana-Martín; Bosheng Song; Mario J. Pérez-Jiménez

P systems with active membranes are a class of computation models in the area of membrane computing, inspired by the mechanism by which chemicals interact and cross cell membranes. In this work, we consider a normal form of P systems with active membranes, called cell-like P systems with polarizations and minimal rules, where rules are minimal in the sense that an object evolves to exactly one object by the application of an evolution rule or a communication rule, or an object evolves to two objects that are assigned to the two newly generated membranes by the application of a division rule. The present work investigates the computational power of P systems with polarizations and minimal rules. Specifically, results about Turing universality and non-universality are obtained for combinations of the number of membranes, the number of polarizations, and the types of rules. We also show that polarizationless P systems with minimal rules are equivalent to Turing machines working in polynomial space; that is, the class of problems that can be solved in polynomial time by polarizationless P systems with minimal rules is equal to PSPACE.

Updated: 2020-01-24
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2019-09-25
Patrick Baillot; Alexis Ghyselen

Several type systems have been proposed to statically control the time complexity of lambda-calculus programs and characterize complexity classes such as FPTIME or FEXPTIME. A first line of research stems from linear logic and restricted versions of its !-modality controlling duplication; an instance of this is light linear logic for polynomial time computation [5]. A second approach relies on tracking the size increase between input and output and, together with a restricted recursion scheme, deducing time complexity bounds; this approach is illustrated for instance by non-size-increasing types [8]. However, both approaches suffer from limitations. The first one, that of linear logic, has a limited intensional expressivity; that is to say, some natural polynomial time programs are not typable. The second approach is essentially linear; more precisely, it does not allow for a non-linear use of functional arguments. In the present work we incorporate both approaches into a common type system, in order to overcome their respective constraints. The source language we consider is a lambda-calculus with data-types and iteration, that is to say a variant of Gödel's system T. Our goal is to design a system for this language that both handles non-linear functional arguments and keeps a good intensional expressivity. We illustrate our methodology by choosing the system of elementary linear logic (ELL) and combining it with a system of linear size types. We discuss the expressivity of this new type system, called sEAL, and prove that it gives a characterization of the complexity classes FPTIME and 2k-FEXPTIME, for k≥0.

Updated: 2020-01-24
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2019-09-24
Cheol Ryu; Thierry Lecroq; Kunsoo Park

In this paper we propose the Maximal Average Shift (MAS) algorithm, which finds a pattern scan order that maximizes the average shift length. We also present two extensions of MAS: one improves the scan speed of MAS by using the scan result of the previous window, and the other improves the running time of MAS by using q-grams. These algorithms achieve better average scan speed than previous string matching algorithms on DNA sequences.
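The window shift these algorithms optimize can be illustrated with a classical fixed scan order. The sketch below is a Boyer-Moore-Horspool matcher (right-to-left scan with bad-character shifts), included only as a baseline for what "shift length" means; it is not the MAS algorithm, which instead chooses the in-window scan order that maximizes the expected shift.

```python
def horspool_search(pattern: str, text: str) -> list:
    """Boyer-Moore-Horspool search: scan each window right-to-left and,
    after a match or mismatch, shift by the bad-character rule.
    Returns all occurrence positions of pattern in text."""
    m, n = len(pattern), len(text)
    # default shift: the full pattern length
    shift = {c: m for c in set(text)}
    # characters occurring in the pattern (except the last) allow shorter shifts
    for i, c in enumerate(pattern[:-1]):
        shift[c] = m - 1 - i
    hits, pos = [], 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            hits.append(pos)
        # shift is determined by the text character aligned with the window end
        pos += shift.get(text[pos + m - 1], m)
    return hits
```

On random DNA text, the average of these shift lengths is the kind of quantity MAS maximizes by reordering the comparisons within the window.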

Updated: 2020-01-24
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2019-09-24
José Fuentes-Sepúlveda; Gonzalo Navarro; Yakov Nekrich

The Burrows-Wheeler Transform (BWT) has become, since its introduction, a key tool for representing large text collections in compressed space while supporting indexed searching: on a text of length n over an alphabet of size σ, it requires O(n lg σ) bits of space, instead of the O(n lg n) bits required by classical indexes. A challenge for its adoption is building it within O(n lg σ) bits as well. There are sequential algorithms that build it within this space, but no such parallel algorithm was known. In this paper we present a PRAM CREW algorithm that builds the BWT using O(n lg n) work, O(lg^3 n / lg σ) depth, and O(n lg σ) bits.
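For reference, the transform itself can be computed naively by sorting all rotations of the text; the sketch below is purely illustrative and makes no attempt at the space or work bounds discussed above.

```python
def bwt(s: str) -> str:
    """Naive Burrows-Wheeler Transform: append a sentinel, sort all
    rotations, and read off the last column. O(n^2 log n) time."""
    s = s + "\0"  # unique sentinel, lexicographically smallest character
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)
```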

Updated: 2020-01-24
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2019-09-24
Ugo Dal Lago; Gabriele Vanoni

In this work we study randomised reduction strategies—a notion already known in the context of abstract reduction systems—for the λ-calculus. We develop a simple framework that allows us to prove a randomised strategy to be positive almost-surely normalising. Then we propose a simple example of a randomised strategy for the λ-calculus that has such a property, and we show why it is non-trivial with respect to classical deterministic strategies such as leftmost-outermost or rightmost-innermost. We conclude by studying this strategy for two sub-λ-calculi, namely those where duplication and erasure are syntactically forbidden, showing some non-trivial properties.

Updated: 2020-01-24
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2019-09-18
Tomasz Jurdzinski; Dariusz R. Kowalski; Michal Rozanski; Grzegorz Stachowiak

Updated: 2020-01-24
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2019-09-16
Valentina Castiglioni; Michele Loreti; Simone Tini

Behavioral equivalences were introduced as a simple and elegant proof methodology for establishing whether the behavior of two processes cannot be distinguished by an external observer. The knowledge of observers usually depends on the observations that they can make on process behavior. Furthermore, the combination of nondeterminism and probability in concurrent systems leads to several interpretations of process behavior. Clearly, different kinds of observations as well as different interpretations lead to different kinds of behavioral relations, such as (bi)simulations, traces and testing. If we restrict our attention to linear properties only, we can identify three main approaches to trace and testing semantics: the trace distributions, the trace-by-trace and the extremal probabilities approaches. In this paper, we propose novel notions of behavioral metrics that are based on the three classic approaches above, and that can be used to measure the disparities in the linear behavior of processes with respect to trace and testing semantics. We study the properties of these metrics, like compositionality (expressed in terms of the non-expansiveness property), and we compare their expressive powers. More precisely, we also compare them to (bi)simulation metrics, thus obtaining the first metric linear time – branching time spectrum.

Updated: 2020-01-24
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2019-08-08
Giulia Bernardini; Nadia Pisanti; Solon P. Pissis; Giovanna Rosone

An elastic-degenerate string is a sequence of n sets of strings of total length N. It has been introduced to represent a multiple alignment of several closely-related sequences (e.g., pan-genome) compactly. In this representation, substrings of these sequences that match exactly are collapsed, while in positions where the sequences differ, all possible variants observed at that location are listed. The natural problem that arises is finding all matches of a deterministic pattern of length m in an elastic-degenerate text. There exists a non-combinatorial O(nm^1.381 + N)-time algorithm to solve this problem on-line [1]. In this paper, we study the same problem under the edit distance model and present an O(k^2 mG + kN)-time and O(m)-space algorithm, where G is the total number of strings in the elastic-degenerate text and k is the maximum edit distance allowed. We also present a simple O(kmG + kN)-time and O(m)-space algorithm for solving the problem under Hamming distance.
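To make the problem statement concrete, the sketch below checks for exact (k = 0) occurrences of a pattern in an elastic-degenerate text, represented (an assumed encoding for illustration) as a list of sets of strings; it tracks which pattern prefixes are alive at each segment boundary. The paper's algorithms handle the far harder approximate cases.

```python
def ed_match(pattern: str, ed_text: list) -> bool:
    """Exact pattern matching in an elastic-degenerate text, given as a
    list of sets of strings (one variant is chosen per set).
    Tracks the pattern prefix lengths alive at each segment boundary."""
    m = len(pattern)
    active = set()  # prefix lengths of the pattern matched so far
    for segment in ed_text:
        nxt = set()
        for v in segment:
            if pattern in v:              # occurrence inside a single variant
                return True
            for p in active:              # continue a partial occurrence
                rest = pattern[p:]
                if v.startswith(rest):    # pattern ends inside v
                    return True
                if rest.startswith(v):    # whole variant consumed, still alive
                    nxt.add(p + len(v))
            # occurrences starting inside v: a suffix of v matching a prefix
            for k in range(1, min(len(v), m - 1) + 1):
                if v[-k:] == pattern[:k]:
                    nxt.add(k)
        active = nxt
    return False
```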

Updated: 2020-01-24
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2019-08-07
Hideo Bannai; Travis Gagie; Tomohiro I

Gagie, Navarro and Prezza's r-index (SODA, 2018) promises to speed up DNA alignment and variation calling by allowing us to index entire genomic databases, provided certain obstacles can be overcome. In this paper we first strengthen and simplify Policriti and Prezza's Toehold Lemma (DCC '16; Algorithmica, 2017), which inspired the r-index and plays an important role in its implementation. We then show how to update the r-index efficiently after adding a new genome to the database, which is likely to be vital in practice. As a by-product of this result, we obtain an online version of Policriti and Prezza's algorithm for constructing the LZ77 parse from a run-length compressed Burrows-Wheeler Transform. Our experiments demonstrate the practicality of all three of these results. Finally, we show how to augment the r-index such that, given a new genome and fast random access to the database, we can quickly compute the matching statistics and maximal exact matches of the new genome with respect to the database.

Updated: 2020-01-24
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2019-07-19
Juan Garay; David Johnson; Aggelos Kiayias; Moti Yung

In this paper, we study the following “balls in buckets” problem. Suppose there is a sequence B1,B2,…,Bn of buckets having integer sizes s1,s2,…,sn, respectively. For a given target fraction α, 0<α<1, our goal is to sequentially place balls in buckets until at least ⌈αn⌉ buckets are full, so as to minimize the number of balls used, which we denote by OPTα(I) for a given instance I. If we knew the size of each bucket, we could obtain an optimal assignment simply by filling the buckets in order of increasing size until the desired number had been filled. Here we consider the case where, although we know n and α, we do not know the specific bucket sizes si, and when we place a ball in bucket Bj, we only learn whether or not Bj is now full. We study what can be done under four variants of incomplete information: 1. we know nothing at all about the bucket sizes; 2. we know the maximum bucket size; 3. we know the sizes s1≤s2≤⋯≤sm that occur in the instance; and 4. we know the profile of the sizes: the size list as above and, for each size si, the number ki of buckets that have that size. In each case we provide both algorithmic performance guarantees and lower bounds on the best that any algorithm can achieve. This game showcases the rich variety of interesting combinatorial and algorithmic questions that the setup gives rise to. It also has applications in an area of cryptography known as secure multi-party computation, where taking over (“corrupting”) a party has a cost for the adversary, and where hidden diversity (corresponding to a lack of information on the amount of computational resources the adversary must invest to corrupt a participant) translates into robustness and efficiency benefits.
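As a concrete baseline for variant 1 (no information about sizes), one natural strategy is round-robin: repeatedly place one ball in every bucket not yet known to be full. The sketch below simulates this; it is an illustrative strategy, not one of the paper's algorithms.

```python
import math

def round_robin_fill(sizes, alpha):
    """Round-robin strategy for the balls-in-buckets game when sizes are
    unknown: each pass places one ball in every bucket not yet full,
    until ceil(alpha * n) buckets are full. Returns the balls used."""
    n = len(sizes)
    target = math.ceil(alpha * n)
    level = [0] * n          # balls placed so far, per bucket
    balls = full = 0
    while full < target:
        for j in range(n):
            if level[j] < sizes[j]:
                level[j] += 1
                balls += 1
                if level[j] == sizes[j]:   # feedback: bucket j is now full
                    full += 1
                    if full == target:
                        return balls
    return balls
```

On sizes (1, 2, 3) with α = 2/3 this uses 4 balls, whereas knowing the sizes one would fill the two smallest buckets with only 3.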

Updated: 2020-01-24
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2019-07-19
Yunong Zhang; Min Yang; Binbin Qiu; Jian Li; Mingjie Zhu

The authors began working on time-varying problem solving (TVPS), including robot problem solving, in 2001, and started to investigate the reasons behind such problem solving across diverse layers. After eight years of thinking, in 2009, the authors began to develop, put forward and apply the idea of “physical equivalency”. After another eight years of practice and experimentation, in 2017, the authors had essentially established the framework of Zhang equivalency. It is now time to state the complete theory in a brief manner. Therefore, concepts of mathematical equivalence (simply termed equivalence) are presented first, including Ma equivalence (especially for robotics), and then concepts of physical equivalency (simply termed equivalency) are proposed. Concepts of Zhang equivalency, as a kind of equivalency, are further proposed, as are concepts of gradient-dynamics equivalency (simply termed gradient equivalency). Furthermore, two specific applications are considered and investigated, which substantiate the efficacy of Zhang equivalency.

Updated: 2020-01-24
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2019-07-08
Bruce M. Kapron; Florian Steinberg

This paper provides an alternate characterization of type-two polynomial-time computability, with the goal of making second-order complexity theory more approachable. We rely on the usual oracle machines to model programs with subroutine calls. In contrast to previous results, the use of higher-order objects as running times is avoided, either explicitly or implicitly. Instead, regular polynomials are used. This is achieved by refining the notion of oracle-polynomial-time introduced by Cook. We impose a further restriction on the oracle interactions to force feasibility. Both the restriction as well as its purpose are very simple: it is well-known that Cook's model allows polynomial depth iteration of functional inputs with no restrictions on size, and thus does not guarantee that polynomial-time computability is preserved. To mend this we restrict the number of lookahead revisions, that is the number of times a query can be asked that is bigger than any of the previous queries. We prove that this leads to a class of feasible functionals and that all feasible problems can be solved within this class if one is allowed to separate a task into efficiently solvable subtasks. Formally put: the closure of our class under lambda-abstraction and application includes all feasible operations. We also revisit the very similar class of strongly polynomial-time computable operators previously introduced by Kawamura and Steinberg. We prove it to be strictly included in our class and, somewhat surprisingly, to have the same closure property. This can be attributed to properties of the limited recursion operator: It is not strongly polynomial-time computable but decomposes into two such operations and lies in our class.

Updated: 2020-01-24
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2019-07-05
Amihood Amir; Gad M. Landau; Shoshana Marcus; Dina Sokol

Maximal repetitions or runs in strings have a wide array of applications and thus have been extensively studied. In this paper, we extend this notion to 2-dimensions, precisely defining a maximal 2D repetition. We provide initial bounds on the number of maximal 2D repetitions that can occur in an n×n array. The main contribution of this paper is the presentation of the first algorithm for locating all maximal 2D repetitions. The algorithm is efficient and straightforward, with runtime O(n2log⁡n+ρ), where n2 is the size of the input array and ρ is the number of maximal 2D repetitions in the output.

Updated: 2020-01-24
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2019-07-05
Gennaro Cordasco; Luisa Gargano; Joseph G. Peters; Adele A. Rescigno; Ugo Vaccaro

A widely studied model of influence diffusion in social networks represents the network as a graph G=(V,E), with an integer influence threshold t(v) for each node, and the diffusion process as follows: Initially the members of a chosen set S⊆V are influenced and, during each subsequent round, the set of influenced nodes is augmented by including every new node v that has at least t(v) previously influenced neighbours. The general problem is to find a small initial set that influences the whole network. In this paper we extend this model by using incentives to reduce the thresholds of some nodes. The goal is to minimize the total amount of the incentive required to ensure that the information diffusion process terminates within a given number of rounds λ. The problem is hard to approximate in general networks. We present optimal polynomial-time algorithms for paths, cycles, trees, and complete networks for any λ. For the special case λ=1, we present a polynomial-time algorithm with a logarithmic approximation guarantee for any network.
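The diffusion process described above is easy to state operationally; the sketch below simulates it on an explicit graph (dictionaries of neighbour sets and thresholds are an assumed representation), optionally capping the number of rounds as the paper's bound λ does.

```python
def diffuse(neighbors: dict, threshold: dict, seed: set, rounds=None) -> set:
    """Threshold influence diffusion: starting from `seed`, each round
    influences every node v with at least threshold[v] influenced
    neighbours. Stops at a fixpoint, or after `rounds` rounds if given."""
    influenced = set(seed)
    done = 0
    while rounds is None or done < rounds:
        new = {v for v in neighbors
               if v not in influenced
               and len(neighbors[v] & influenced) >= threshold[v]}
        if not new:          # fixpoint reached
            break
        influenced |= new
        done += 1
    return influenced
```

Lowering threshold[v] for chosen nodes models the incentives studied in the paper; the goal there is to spend as little total incentive as possible while still influencing everyone within λ rounds.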

Updated: 2020-01-24
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2019-07-05
Paweł Gawrychowski; Seungbum Jo; Shay Mozes; Oren Weimann

Given a string S of n integers in [0,σ), a range minimum query RMQ(i,j) asks for the index of the smallest integer in S[i…j]. It is well known that the problem can be solved with a succinct data structure of size 2n+o(n) and constant query-time. In this paper we show how to preprocess S into a compressed representation that allows fast range minimum queries. This allows for sublinear size data structures with logarithmic query time. The most natural approach is to use string compression and construct a data structure for answering range minimum queries directly on the compressed string. We investigate this approach in the context of grammar compression. We then consider an alternative approach. Instead of compressing S using string compression, we compress the Cartesian tree of S using tree compression. We show that this approach can be exponentially better than the former, is never worse by more than an O(σ) factor (i.e. for constant alphabets it is never asymptotically worse), and can in fact be worse by an Ω(σ) factor.
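As a baseline for comparison, the classical non-compressed solution is a sparse table: O(n log n) space and O(1) query time (the succinct 2n + o(n)-bit structures and the compressed structures studied in the paper improve on its space). A minimal sketch:

```python
class SparseTableRMQ:
    """Classic sparse-table RMQ: O(n log n) space, O(1) query.
    rmq(i, j) returns the index of the minimum of s[i..j], inclusive."""

    def __init__(self, s):
        self.s = s
        n = len(s)
        self.table = [list(range(n))]   # row k stores argmin of blocks of 2^k
        k = 1
        while (1 << k) <= n:
            prev = self.table[-1]
            half = 1 << (k - 1)
            row = [prev[i] if s[prev[i]] <= s[prev[i + half]] else prev[i + half]
                   for i in range(n - (1 << k) + 1)]
            self.table.append(row)
            k += 1

    def rmq(self, i, j):
        # cover [i, j] with two overlapping power-of-two blocks
        k = (j - i + 1).bit_length() - 1
        a = self.table[k][i]
        b = self.table[k][j - (1 << k) + 1]
        return a if self.s[a] <= self.s[b] else b
```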

Updated: 2020-01-24
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2019-07-04
Rani M. R; Subashini R; Mohith Jagalmohanan

A binary matrix M has the consecutive ones property (C1P) for rows (resp. columns) if there exists a permutation of its columns (resp. rows) that arranges the ones consecutively in all the rows (resp. columns). If M has the C1P for rows and the C1P for columns, then M is said to have the simultaneous consecutive ones property (SC1P). In this article, we consider the classical complexity and fixed-parameter tractability of a few variants of decision problems related to the SC1P. Given a binary matrix M and a positive integer d, we focus on problems that decide whether there exists a set of rows, columns, or rows as well as columns, respectively, of size at most d in M whose deletion results in a matrix with the SC1P. We also consider problems that decide whether there exists a set of 0-entries, 1-entries, or 0-entries as well as 1-entries, respectively, of size at most d in M whose flipping results in a matrix with the SC1P. We show that all the above-mentioned problems are NP-complete, and prove that all of them are fixed-parameter tractable with respect to the solution size as parameter, except for two variants (flipping 1-entries and flipping 0/1-entries). We also give improved FPT algorithms for certain problems on restricted binary matrices.

Updated: 2020-01-24
• Inform. Syst. (IF 2.066) Pub Date : 2020-01-24
Amine Abbad Andaloussi; Andrea Burattin; Tijs Slaats; Ekkart Kindler; Barbara Weber

Process modeling plays a central role in the development of today’s process-aware information systems both on the management level (e.g., providing input for requirements elicitation and fostering communication) and on the enactment level (providing a blue-print for process execution and enabling simulation). The literature comprises a variety of process modeling approaches proposing different modeling languages (i.e., imperative and declarative languages) and different types of process artifact support (i.e., process models, textual process descriptions, and guided simulations). However, the use of an individual modeling language or a single type of process artifact is usually not enough to provide a clear and concise understanding of the process. To overcome this limitation, a set of so-called “hybrid” approaches combining languages and artifacts have been proposed, but no common grounds have been set to define and categorize them. This work aims at providing a fundamental understanding of these hybrid approaches by defining a unified terminology, providing a conceptual framework and proposing an overarching overview to identify and analyze them. Since no common terminology has been used in the literature, we combined existing concepts and ontologies to define a “Hybrid Business Process Representation” (HBPR). Afterward, we conducted a Systematic Literature Review (SLR) to identify and investigate the characteristics of HBPRs combining imperative and declarative languages or artifacts. The SLR resulted in 30 articles which were analyzed. The results indicate the presence of two distinct research lines and show common motivations driving the emergence of HBPRs, a limited maturity of existing approaches, and diverse application domains. Moreover, the results are synthesized into a taxonomy classifying different types of representations. Finally, the outcome of the study is used to provide a research agenda delineating the directions for future work.

Updated: 2020-01-24
• arXiv.cs.LO Pub Date : 2020-01-23
Jörg Endrullis; Jan Willem Klop; Roy Overbeek

The recursive path ordering is an established and crucial tool in term rewriting to prove termination. We revisit its presentation by means of some simple rules on trees equipped with a "star" as control symbol, signifying a command to make that tree (or term) smaller in the order being defined. This leads to star games that are very convenient for proving termination of many rewriting tasks. For instance, using already the simplest star game on finite unlabeled trees, we obtain a very direct proof of termination of the famous Hydra battle. We also include an alternative road to setting up the star games, using a proof method of Buchholz, and a quantitative version of the star as control symbol. We conclude with a number of questions and future research directions.

Updated: 2020-01-24
• arXiv.cs.LO Pub Date : 2020-01-23
Kumar Nitesh; Kuzelka Ondrej; De Raedt Luc

Relational autocompletion is the problem of automatically filling out missing fields in a relational database. We tackle this problem within the probabilistic logic programming framework of Distributional Clauses (DC), which supports both discrete and continuous probability distributions. Within this framework, we introduce Dreaml, an approach to learn both the structure and the parameters of DC programs from databases that may contain missing information. To realize this, Dreaml integrates statistical modeling and distributional clauses with rule learning. The distinguishing features of Dreaml are that it 1) tackles relational autocompletion, 2) learns distributional clauses extended with statistical models, 3) deals with both discrete and continuous distributions, 4) can exploit background knowledge, and 5) uses an expectation-maximization-based algorithm to cope with missing data.

Updated: 2020-01-24
• arXiv.cs.LO Pub Date : 2020-01-23
Heng Zhang; Yan Zhang; Guifei Jiang

Existential rules, also known as dependencies in databases and, more recently, as Datalog+/- in knowledge representation and reasoning, are a family of important logical languages widely used in computer science and artificial intelligence. Towards a deep model-theoretic understanding of these languages, we establish model-theoretic characterizations for a number of existential rule languages such as (disjunctive) embedded dependencies, tuple-generating dependencies (TGDs), (frontier-)guarded TGDs and linear TGDs. All these characterizations hold for arbitrary structures, and most of them also work on the class of finite structures. As a natural application of these characterizations, complexity bounds for the rewritability of the above languages are also identified.

Updated: 2020-01-24
• arXiv.cs.GT Pub Date : 2020-01-22
Theodor Cimpeanu; The Anh Han

Social punishment has been suggested as a key approach to ensuring high levels of cooperation and norm compliance in one-shot (i.e. non-repeated) interactions. However, it has been shown that it only works when punishment is highly cost-efficient. On the other hand, signalling retribution hearkens back to medieval sovereignty, insofar as the very word for gallows in French stems from the Latin word for power and serves as a grim symbol of the ruthlessness of high justice. Here we introduce the mechanism of signalling an act of punishment and a special type of defector emerges, one who can recognise this signal and avoid punishment by way of fear. We describe the analytical conditions under which threat signalling can maintain high levels of cooperation. Moreover, we perform extensive agent-based simulations so as to confirm and expand our understanding of the external factors that influence the success of social punishment. We show that our suggested mechanism serves as a catalyst for cooperation, even when signalling is costly or when punishment would be impractical. We observe the preventive nature of advertising retributive acts and we contend that the resulting social prosperity is a desirable outcome in the contexts of AI and multi-agent systems. To conclude, we argue that fear acts as an effective stimulus to pro-social behaviour.

Updated: 2020-01-24
• arXiv.cs.GT Pub Date : 2019-12-24
Wayes Tushar; Tapan K. Saha; Chau Yuen; M. Imran Azim; Thomas Morstyn; H. Vincent Poor; Dustin Niyato; Richard Bean

This paper studies a social-cooperation-backed peer-to-peer energy trading technique by which prosumers can decide how to use their batteries opportunistically for participating in peer-to-peer trading. The objective is to achieve a solution in which the ultimate beneficiaries are the prosumers, i.e., a prosumer-centric solution. To do so, a coalition formation game is designed, which enables a prosumer to compare its benefit of participating in the peer-to-peer trading with and without using its battery and thus allows the prosumer to form suitable social coalition groups with other similar prosumers in the network for conducting peer-to-peer trading. The properties of the formed coalitions are studied, and it is shown that 1) the coalition structure that stems from the social cooperation between participating prosumers at each time slot is both stable and optimal, and 2) the outcomes of the proposed peer-to-peer trading scheme are prosumer-centric. Case studies are conducted based on real household energy usage and solar generation data to highlight how the proposed scheme can benefit prosumers through exhibiting prosumer-centric properties.

Updated: 2020-01-24
• arXiv.cs.GT Pub Date : 2019-08-18
Sizhong Lan

We argue that existing regret matchings for Nash equilibrium approximation conduct "jumpy" strategy updating when the probabilities of future plays are set to be proportional to positive regret measures. We propose a geometrical regret matching which features "smooth" strategy updating. Our approach is simple, intuitive and natural. The analytical and numerical results show that continuously and "smoothly" suppressing "unprofitable" pure strategies is sufficient for the game to evolve towards Nash equilibrium, suggesting that in reality the tendency towards equilibrium could be pervasive and irresistible. Technically, iterative regret matching gives rise to a sequence of adjusted mixed strategies, allowing us to study its approximation to the true equilibrium point. The sequence can be studied in a metric space and visualized as a clear path towards an equilibrium point. Our theory has limitations in optimizing the approximation accuracy.
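For contrast with the proposed geometrical variant, the standard regret matching update (play each pure strategy with probability proportional to its positive cumulative regret) can be sketched as follows; the game, iteration count and tolerance below are illustrative assumptions, not taken from the paper.

```python
import random

def regret_matching(payoff, iters=20000, seed=0):
    """Standard regret matching self-play on a two-player zero-sum game
    (payoff[a][b] = row player's payoff). Returns the row player's
    time-averaged strategy, which approximates a Nash strategy."""
    rng = random.Random(seed)
    n = len(payoff)
    regret = [[0.0] * n, [0.0] * n]
    counts = [[0] * n, [0] * n]

    def pick(reg):
        # sample proportionally to positive regrets (uniform if none)
        pos = [max(r, 0.0) for r in reg]
        total = sum(pos)
        if total <= 0:
            return rng.randrange(n)
        x, acc = rng.random() * total, 0.0
        for a, p in enumerate(pos):
            acc += p
            if x < acc:
                return a
        return n - 1

    for _ in range(iters):
        a, b = pick(regret[0]), pick(regret[1])
        counts[0][a] += 1
        counts[1][b] += 1
        for alt in range(n):
            regret[0][alt] += payoff[alt][b] - payoff[a][b]   # row regrets
            regret[1][alt] += payoff[a][b] - payoff[a][alt]   # column regrets
    return [c / iters for c in counts[0]]
```

On rock-paper-scissors the averaged strategy approaches the uniform Nash equilibrium (1/3, 1/3, 1/3), while the per-round play itself keeps jumping; smoothing that per-round behaviour is exactly what the geometrical variant targets.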

Updated: 2020-01-24
• arXiv.cs.CE Pub Date : 2020-01-22
Hugo Casquero; Carles Bona-Casas; Deepesh Toshniwal; Thomas J. R. Hughes; Hector Gomez; Yongjie Jessica Zhang

We extend the recently introduced divergence-conforming immersed boundary (DCIB) method [1] to fluid-structure interaction (FSI) problems involving closed co-dimension one solids. We focus on capsules and vesicles, whose discretization is particularly challenging due to the higher-order derivatives that appear in their formulations. In two-dimensional settings, we employ cubic B-splines with periodic knot vectors to obtain discretizations of closed curves with C^2 inter-element continuity. In three-dimensional settings, we use analysis-suitable bi-cubic T-splines to obtain discretizations of closed surfaces with at least C^1 inter-element continuity. Large spurious changes of the fluid volume inside closed co-dimension one solids are a well-known issue for IB methods. The DCIB method results in volume changes orders of magnitude lower than conventional IB methods. This is a byproduct of discretizing the velocity-pressure pair with divergence-conforming B-splines, which lead to negligible incompressibility errors at the Eulerian level. The higher inter-element continuity of divergence-conforming B-splines is also crucial to avoid the quadrature/interpolation errors of IB methods becoming the dominant discretization error. Benchmark and application problems of vesicle and capsule dynamics are solved, including mesh-independence studies and comparisons with other numerical methods.

Updated: 2020-01-24
• arXiv.cs.CC Pub Date : 2020-01-23
Yasuhiro Takahashi; Yuki Takeuchi; Seiichiro Tani

We study the effect of noise on the classical simulatability of quantum circuits defined by computationally tractable (CT) states and efficiently computable sparse (ECS) operations. Examples of such circuits, which we call CT-ECS circuits, are IQP, Clifford Magic, and conjugated Clifford circuits. This means that there exist various CT-ECS circuits such that their output probability distributions are anti-concentrated and not classically simulatable in the noise-free setting (under plausible assumptions). First, we consider a noise model where a depolarizing channel with an arbitrarily small constant rate is applied to each qubit at the end of computation. We show that, under this noise model, if an approximate value of the noise rate is known, any CT-ECS circuit with an anti-concentrated output probability distribution is classically simulatable. This indicates that the presence of small noise drastically affects the classical simulatability of CT-ECS circuits. Then, we consider an extension of the noise model where the noise rate can vary with each qubit, and provide a similar sufficient condition for classically simulating CT-ECS circuits with anti-concentrated output probability distributions.

Updated: 2020-01-24
• arXiv.cs.CC Pub Date : 2020-01-23
Jiří Fink; Martin Loebl

Arc-routing problems are notoriously hard. We study a natural arc-routing problem on trees, and more generally on bounded tree-width graphs, and show that, surprisingly, it can be solved in polynomial time. This implies a sub-exponential algorithm for planar graphs with a small number of maintenance cars, which is of practical relevance.
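For intuition only, here is the simplest special case of arc routing on a tree: a closed walk covering every edge of a tree must traverse each edge at least twice, and a depth-first traversal achieves exactly twice. The paper's problem and algorithm are more general; this sketch just illustrates why the tree setting is tractable:

```python
def tree_route(adj, root):
    """Return a closed walk from root that traverses every edge of a
    tree exactly twice (optimal for covering all edges of a tree),
    where adj maps each vertex to its list of neighbors."""
    walk = [root]
    def dfs(v, parent):
        for w in adj.get(v, []):
            if w != parent:
                walk.append(w)   # walk down the edge (v, w)
                dfs(w, v)
                walk.append(v)   # walk back up the same edge
    dfs(root, None)
    return walk
```

On a path-shaped or star-shaped tree the returned walk has length 2·(number of edges), matching the trivial lower bound.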

Updated: 2020-01-24
• arXiv.cs.CC Pub Date : 2020-01-23
Johan Kwisthout; Nils Donselaar

The last decade has seen the rise of neuromorphic architectures based on artificial spiking neural networks, such as the SpiNNaker, TrueNorth, and Loihi systems. The massive parallelism and the co-location of computation and memory in these architectures potentially allow for energy usage that is orders of magnitude lower than that of traditional Von Neumann architectures. However, to date, comparisons with more traditional computational architectures (particularly with respect to energy usage) have been hampered by the lack of a formal machine model and a computational complexity theory for neuromorphic computation. In this paper we take the first steps towards such a theory. We introduce spiking neural networks as a machine model in which---in contrast to the familiar Turing machine---information and the manipulation thereof are co-located in the machine. We introduce canonical problems, define hierarchies of complexity classes, and provide some first completeness results.
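As a toy illustration of the spiking-neuron primitive underlying such machine models, a discrete-time leaky integrate-and-fire neuron can be simulated in a few lines. This is a generic textbook model, not the formal machine model the paper introduces:

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Discrete-time leaky integrate-and-fire neuron: the membrane
    potential decays by a leak factor each step, accumulates the input,
    and emits a spike (resetting to 0) on crossing the threshold.
    Returns the list of time steps at which spikes occur."""
    v, spikes = 0.0, []
    for t, i in enumerate(input_current):
        v = leak * v + i
        if v >= threshold:
            spikes.append(t)
            v = 0.0
    return spikes
```

Note that the spike train is itself both the "output" and the only information that leaves the neuron, which is the co-location of state and computation the abstract refers to.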

Updated: 2020-01-24
• J. Supercomput. (IF 2.157) Pub Date : 2020-01-23
Riman Mandal, Manash Kumar Mondal, Sourav Banerjee, Utpal Biswas

With the rapid demand for service-oriented computing, together with the growth of cloud computing technologies, large-scale virtualized data centers have been established throughout the globe. These huge data centers consume power at a large scale, resulting in high operational costs. The massive carbon footprint of the energy generators is another major issue contributing to global warming. It is therefore essential to lower the rate of carbon emission and energy consumption as much as possible. Live-migration-enabled dynamic virtual machine (VM) consolidation yields substantial energy savings, but it also risks violating the service level agreement (SLA): excessive migration may lead to performance degradation and SLA violations. The process of selecting VMs for migration therefore plays a vital role in the domain of energy-aware cloud computing. This paper proposes a new power-aware VM selection policy for choosing the VMs to migrate, and evaluates it in a trace-based simulation environment.
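A hypothetical power-aware selection rule can be sketched as follows. The metric used here (host power attributable to a VM, per unit of RAM as a proxy for migration cost) and the field names are illustrative assumptions, not the policy proposed in the paper:

```python
def select_vm_for_migration(vms):
    """From the VMs on an overloaded host, pick the one whose migration
    saves the most power per unit of migration cost. Each VM is a dict
    with hypothetical keys 'power_draw' (watts attributed to the VM)
    and 'ram' (GB, standing in for live-migration transfer cost)."""
    return max(vms, key=lambda vm: vm['power_draw'] / vm['ram'])
```

A small VM with a high power share is preferred: it frees the most power while keeping the migration (and hence the SLA-violation risk) cheap.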

Updated: 2020-01-23
• J. Supercomput. (IF 2.157) Pub Date : 2020-01-23
Fernando G. Tinetti, Maximiliano J. Perez, Ariel Fraidenraich, Adolfo E. Altenberg

In this paper, we present several important details of the process of legacy code parallelization, mostly related to the problem of maintaining the numerical output of a legacy code while obtaining a balanced workload for parallel processing. Since we maintained the non-uniform mesh imposed by the original finite element code, we had to develop a specially designed data distribution among processors so that the data restrictions of the finite element method are met. In particular, we introduce a data distribution method that is initially used in shared memory parallel processing and obtains better performance than the previous parallel program version. Moreover, this method can be extended to other parallel platforms such as distributed memory parallel computers. We present results covering several problems related to performance profiling on different (development and production) parallel platforms. The use of new and old parallel computing architectures leads to different behavior of the same code, which in all cases performs better on multiprocessor hardware.
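A common building block for such workload-balanced distributions is a contiguous weighted split of mesh elements across processors. The greedy sketch below is a generic version under that assumption, not the authors' specially designed method:

```python
def distribute(weights, nprocs):
    """Split a sequence of per-element workloads into nprocs contiguous
    chunks (element indices), cutting whenever a chunk reaches the
    average load. Contiguity preserves mesh locality, which matters
    for finite element data restrictions."""
    target = sum(weights) / nprocs
    chunks, current, acc = [], [], 0.0
    for i, w in enumerate(weights):
        current.append(i)
        acc += w
        if acc >= target and len(chunks) < nprocs - 1:
            chunks.append(current)
            current, acc = [], 0.0
    chunks.append(current)  # last processor takes the remainder
    return chunks
```

With a non-uniform mesh the chunks end up with different element counts but similar total work, which is the point of weighting by per-element cost rather than splitting by index.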

Updated: 2020-01-23
• Theor. Comput. Sci. (IF 0.718) Pub Date : 2020-01-23
Longchun Wang; Qingguo Li

In this paper, we build a logic called the N-sequent calculus. Based on this logic, we provide two kinds of logical representations of Lawson compact algebraic L-domains: one in terms of logical algebras and the other in terms of logical syntax. The first representation takes the corresponding logical algebras as its objects of study; the use of prime filters establishes the connection between our logic and Lawson compact algebraic L-domains. This approach is inspired by Abramsky's SFP domain logic and the disjunctive propositional logic on algebraic L-domains introduced by Yixiang Chen and Achim Jung, although there are essential differences in the treatment of morphisms. For the second representation, we directly adopt the N-sequent calculi themselves as objects instead of their logical algebras, and establish that the category of N-sequent calculi with consequence relations is equivalent to the category of Lawson compact algebraic L-domains with Scott continuous maps. This demonstrates the capability of the logic's syntax for representing domains.

Updated: 2020-01-23
• Inform. Syst. (IF 2.066) Pub Date : 2020-01-23
Md Anisur Rahman; Li-Minn Ang; Kah Phooi Seng

Producing high-quality clusters from biomedical and gene expression datasets without requiring any user input is a crucial need for clustering techniques. In this paper we therefore present a clustering technique, KUVClust, that meets this requirement. The KUVClust algorithm combines three concepts: multivariate kernel density estimation, unique closest neighborhood sets, and vein-based clustering. Although these concepts are known in the literature, KUVClust combines them in a novel manner to achieve high-quality clustering results. The performance of KUVClust is compared with that of established clustering techniques on real-world biomedical and gene expression datasets, evaluated in terms of three criteria: purity, entropy, and sum of squared error (SSE). Experimental results demonstrate the superiority of the proposed technique over existing techniques for clustering both the low-dimensional biomedical and high-dimensional gene expression datasets used in the experiments.
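Of the three concepts KUVClust combines, kernel density estimation is the most standard; a minimal one-dimensional Gaussian KDE (illustrative only; the paper uses the multivariate version) looks like:

```python
import math

def kde(data, x, bandwidth=1.0):
    """Gaussian kernel density estimate at point x from 1-D samples:
    the average of Gaussian bumps of width `bandwidth` centered on
    each sample. Density peaks indicate cluster centers."""
    n = len(data)
    return sum(
        math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
        for xi in data
    ) / (n * bandwidth * math.sqrt(2 * math.pi))
```

A density-based clustering method can then, for instance, seed clusters at local maxima of this estimate, so that no user-supplied cluster count is needed.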

Updated: 2020-01-23
• arXiv.cs.LO Pub Date : 2020-01-22
Sahil Verma; Roland H. C. Yap

Symbolic execution is a powerful technique for bug finding and program testing, and it has been successful in finding bugs in real-world code. Its core reasoning techniques are constraint solving, path exploration, and search, which are also the techniques used to solve combinatorial problems such as finite-domain constraint satisfaction problems (CSPs). We propose CSP instances as more challenging benchmarks for evaluating the effectiveness of the core techniques in symbolic execution. We transform CSP benchmarks into C programs suitable for testing the reasoning capabilities of symbolic execution tools; from a single CSP P, the choice of transformation yields different C programs. Preliminary testing with the KLEE, Tracer-X, and LLBMC tools shows substantial runtime differences arising from the transformation and solver choices. Our C benchmarks are effective in exposing the limitations of existing symbolic execution tools. The motivation for this work is our belief that benchmarks of this form can spur the development and engineering of improved core reasoning in symbolic execution engines.
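The general transformation pattern can be sketched as a small generator that emits, for a given CSP, a C function whose "satisfiable" path a symbolic executor must discover. The encoding below is a hypothetical illustration of one such transformation, not the paper's actual one:

```python
def csp_to_c(domains, constraints):
    """Emit C source encoding a finite-domain CSP: domain bounds and
    constraints become guards, and the 'return 1' path is reachable
    iff the CSP is satisfiable. `domains` maps variable names to
    (lo, hi) bounds; `constraints` is a list of C boolean expressions."""
    params = ", ".join(f"int {v}" for v in domains)
    lines = [f"int check({params}) {{"]
    for v, (lo, hi) in domains.items():
        lines.append(f"  if ({v} < {lo} || {v} > {hi}) return 0;")
    for c in constraints:
        lines.append(f"  if (!({c})) return 0;")
    lines.append("  return 1;  /* target path for the symbolic executor */")
    lines.append("}")
    return "\n".join(lines)
```

Marking the parameters of `check` as symbolic inputs (e.g., via a tool's symbolic-input API) turns path exploration into a search for a satisfying assignment; different orderings or encodings of the guards yield the different C programs mentioned above.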

Updated: 2020-01-23
• arXiv.cs.LO Pub Date : 2020-01-15
Luca Bernardinello; Irina Lomazova; Roman Nesterov; Lucia Pomello

In this paper, we propose a compositional approach to construct formal models of complex distributed systems with several synchronously and asynchronously interacting components. A system model is obtained from a composition of individual component models according to requirements on their interaction. We represent component behavior using workflow nets - a class of Petri nets. We propose a general approach to model and compose synchronously and asynchronously interacting workflow nets. Through the use of Petri net morphisms and their properties, we prove that this composition of workflow nets preserves component correctness.
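The Petri net semantics underlying workflow nets can be illustrated with a minimal token-game sketch (the generic firing rule only, not the paper's composition operators or morphisms):

```python
def fire(marking, transition):
    """Fire a Petri net transition given as (pre, post), where pre and
    post map places to token counts. Returns the new marking, or None
    if the transition is not enabled under the current marking."""
    pre, post = transition
    if any(marking.get(p, 0) < n for p, n in pre.items()):
        return None  # not enough tokens in some input place
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m
```

In a workflow net, correctness (soundness) amounts to every firing sequence from the initial marking being able to reach the final marking; the composition results in the paper guarantee that this property survives composition.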

Updated: 2020-01-23
Contents have been reproduced by permission of the publishers.
