Current journal: International Journal of Approximate Reasoning
  • A decoupled design approach for complex systems under lack-of-knowledge uncertainty
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2020-01-17
    Marco Daub; Fabian Duddeck

    This paper introduces an approach for the design of complex systems in early development stages that accounts for lack-of-knowledge uncertainty. As complex systems have to be broken down into their components by a decoupling methodology, the presented work regards so-called box-shaped solution spaces as subsets of the total set of permissible designs that do not violate any design constraints. In this way, the design variables are decoupled and their design intervals are maximized. Each aspect related to the corresponding interval can then be studied independently in a subsequent development step by different stakeholders (design groups or designers). Especially in early design phases, the consideration of uncertainty is crucial; this concerns aleatoric uncertainty less, as probability functions are often unavailable. More important and more difficult to handle is epistemic uncertainty, i.e., lack-of-knowledge uncertainty. Relevant here are uncertainties that arise later in the development, the fact that the current design stage does not yet include smaller design features, and the fact that the available models represent the later designs only coarsely. This paper complements prior work by providing a complete methodology for the relevant uncertainties. This includes uncertainties in controllable design variables as well as in uncontrollable parameters, all captured by interval arithmetic. Furthermore, it extends existing worst-case approaches by best-case approaches. The user can now base design decisions on (a) a deterministic solution space without consideration of lack-of-knowledge uncertainties, (b) the same with consideration of uncertainties in uncontrollable parameters only or controllable variables only, or (c) the most complete approach, where uncertainties in both controllable variables and uncontrollable parameters are considered. The corresponding scenarios are illustrated with examples from automotive engineering (design for crashworthiness).
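    As a rough illustration of the box-shaped solution-space idea (a sketch, not the authors' method), the following checks whether a candidate box of design intervals remains feasible for every value of an interval-valued, uncontrollable parameter; the constraint g and all bounds are hypothetical placeholders.

    ```python
    from itertools import product

    # Hypothetical design constraint g(x1, x2, p) <= 0. It is monotone (linear) in all
    # arguments, so a worst-case check over the interval corners is sufficient here.
    def g(x1, x2, p):
        return x1 + 2.0 * x2 + p - 10.0

    def box_is_feasible(x1_box, x2_box, p_box):
        """The design box is admissible if the constraint holds at every corner of
        (design box) x (uncontrollable parameter interval)."""
        return all(g(x1, x2, p) <= 0.0 for x1, x2, p in product(x1_box, x2_box, p_box))

    # Candidate decoupled design intervals and a lack-of-knowledge parameter interval.
    x1_box = (1.0, 3.0)
    x2_box = (0.5, 2.0)
    p_box = (0.0, 1.5)
    print(box_is_feasible(x1_box, x2_box, p_box))  # True: the box lies in the solution space
    ```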

    Updated: 2020-01-17
  • Partially observable game-theoretic agent programming in Golog
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2020-01-17
    Alberto Finzi; Thomas Lukasiewicz

    In this paper, we present the agent programming language POGTGolog (Partially Observable Game-Theoretic Golog), which integrates explicit agent programming in Golog with game-theoretic multi-agent planning in partially observable stochastic games. In this framework, we assume one team of cooperative agents acting under partial observability, where the agents may also have different initial belief states and not necessarily the same rewards. POGTGolog allows for specifying a partial control program in a high-level logical language, which is then completed by an interpreter in an optimal way. To this end, we define a formal semantics of POGTGolog programs in terms of Nash equilibria, and we then specify a POGTGolog interpreter that computes one of these Nash equilibria.

    Updated: 2020-01-17
  • Non-parametric learning of lifted restricted Boltzmann machines
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2020-01-15
    Navdeep Kaur; Gautam Kunapuli; Sriraam Natarajan

    We consider the problem of discriminatively learning restricted Boltzmann machines in the presence of relational data. Unlike previous approaches that employ a rule learner (for structure learning) and a weight learner (for parameter learning) sequentially, we develop a gradient-boosted approach that performs both simultaneously. Our approach learns a set of weak relational regression trees, whose paths from root to leaf are conjunctive clauses and represent the structure, and whose leaf values represent the parameters. When the learned relational regression trees are transformed into a lifted RBM, its hidden nodes are precisely the conjunctive clauses derived from the relational regression trees. This leads to a more interpretable and explainable model. Our empirical evaluations clearly demonstrate this aspect, while displaying no loss in effectiveness of the learned models.

    Updated: 2020-01-15
  • On properties of a new decomposable entropy of Dempster-Shafer belief functions
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2020-01-13
    Radim Jiroušek; Prakash P. Shenoy

    We define entropy of belief functions in the Dempster-Shafer (D-S) theory that satisfies a compound distributions property that is analogous to the property that characterizes Shannon's definitions of entropy and conditional entropy for probability mass functions. None of the existing definitions of entropy for belief functions in the D-S theory satisfy this property. We describe some important properties of our definition, and discuss its semantics as a measure of dissonance and not uncertainty. Finally, we compare our definition of entropy with some other definitions that are similar to ours in the sense that these definitions measure dissonance and not uncertainty.
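    For reference, the compound-distribution (chain-rule) property of Shannon entropy for probability mass functions, which the proposed belief-function entropy is designed to mirror, reads:

    ```latex
    % The entropy of a joint distribution decomposes into a marginal and a conditional part.
    H(P_{X,Y}) \;=\; H(P_X) + H(P_{Y\mid X}),
    \qquad
    H(P_{Y\mid X}) \;=\; \sum_{x} P_X(x)\, H(P_{Y\mid X=x}).
    ```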

    Updated: 2020-01-14
  • A Rule-based Framework for Risk Assessment in the Health Domain
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2020-01-10
    Luca Cattelani; Federico Chesani; Luca Palmerini; Pierpaolo Palumbo; Lorenzo Chiari; Stefania Bandinelli

    Risk assessment is an important decision support task in many domains, including health, engineering, process management, and economics. There is a growing interest in automated methods for risk assessment; such methods should be able to process information efficiently and with little user involvement. The scientific literature in the health domain offers evidence-based knowledge about specific risk factors. On the other hand, there is no automatic procedure that exploits this available knowledge to create a general risk assessment tool able to combine the available quantitative data about risk factors and their impact on the corresponding risk. We present a Framework for the Assessment of Risk of adverse Events (FARE) and its first concrete applications, FRAT-up and DRAT-up, which were used for fall and depression risk assessment in older persons and validated on four and three European epidemiological datasets, respectively. FARE consists of (i) a novel formal ontology called On2Risk and (ii) a logical and probabilistic rule-based model. The ontology was designed to represent qualitative and quantitative data about risks in a general, structured, and machine-readable manner so that these data can be directly exploited by risk assessment algorithms. We describe the structure of the FARE model in the form of logic and probabilistic rules. We show how, starting from machine-readable data about risk factors such as those contained in On2Risk, an instance of the algorithm can be automatically constructed and used to estimate the risk of an adverse event.
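    Purely as an illustration (the abstract does not state FARE's exact combination rule), probabilistic rule-based risk models often aggregate independent risk factors in a noisy-OR fashion; the factor probabilities below are invented.

    ```python
    def combined_risk(factor_probs):
        """Noisy-OR-style aggregation: the adverse event is avoided only if every
        active risk factor independently fails to trigger it."""
        p_no_event = 1.0
        for p in factor_probs:
            p_no_event *= (1.0 - p)
        return 1.0 - p_no_event

    # Hypothetical per-factor contributions to a one-year fall risk.
    print(combined_risk([0.10, 0.05, 0.20]))  # 0.316
    ```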

    Updated: 2020-01-11
  • Accelerator for supervised neighborhood based attribute reduction
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2020-01-09
    Zehua Jiang; Keyu Liu; Xibei Yang; Hualong Yu; Hamido Fujita; Yuhua Qian

    In neighborhood rough sets, the radius is a key factor: different radii may generate different neighborhood relations for discriminating samples. Unfortunately, two samples with different labels may still be regarded as indistinguishable, mainly because the neighborhood relation does not always provide satisfactory discriminating performance. Moreover, obtaining reducts for multiple different radii is very time-consuming, because different radii imply different reducts and each of those reducts has to be searched separately. To address these problems, a supervised neighborhood relation is proposed to obtain better discriminating performance, and an accelerator is designed to speed up the process of obtaining reducts. Firstly, both an intra-class radius and an inter-class radius are proposed to distinguish samples. Different from previous approaches, the labels of samples are taken into account, which is why our approach is referred to as the supervised neighborhood based strategy. Secondly, from the viewpoint of the variation of the radius, an accelerator is designed to quickly obtain reducts for multiple radii. This mechanism is based on the consideration that the reduct for the previous radius may guide the search for the reduct for the current radius. Experimental results over 12 UCI data sets show the following: 1) compared with traditional and pseudo-label neighborhood based reducts, our supervised neighborhood based reducts provide higher classification accuracies; 2) our accelerator significantly reduces the elapsed time for obtaining reducts. This study suggests new directions for research on neighborhood rough sets.
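    The sketch below shows one plausible reading of a label-aware (supervised) neighborhood, with a wider radius for same-label samples than for differently labeled ones; the data, radii, and exact rule are illustrative assumptions, not the paper's definitions.

    ```python
    import numpy as np

    def supervised_neighborhood(X, y, i, delta_intra, delta_inter):
        """Neighbors of sample i: same-label samples within delta_intra,
        differently labeled samples only within the smaller delta_inter."""
        dists = np.linalg.norm(X - X[i], axis=1)
        same = (y == y[i]) & (dists <= delta_intra)
        diff = (y != y[i]) & (dists <= delta_inter)
        return np.where(same | diff)[0]

    X = np.array([[0.0, 0.0], [0.1, 0.0], [0.3, 0.1], [0.9, 0.9]])
    y = np.array([0, 0, 1, 1])
    print(supervised_neighborhood(X, y, 0, delta_intra=0.4, delta_inter=0.2))  # [0 1]
    ```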

    Updated: 2020-01-11
  • Data meaning and knowledge discovery: Semantical aspects of information systems
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2020-01-09
    Marcin Wolski; Anna Gomolińska

    Data tables provide the standard means of representation of qualitative or quantitative information about objects of interest. They also form a starting point for the task of information processing: an operation of passing from raw data or information to semantically processed knowledge. The fundamental issue here is the question about the meaning of data: What do entries in the data table actually tell us about objects? It entails another question: How should the meaning be further processed? The primary aim of the article is an attempt to answer these two questions. To this end we are going to employ conceptual scales from formal concept analysis, an important theory of data processing introduced by Rudolf Wille, and apply them to data tables so as to obtain multivalued information systems, which were introduced and developed within the conceptual framework of rough set theory by Zdzisław Pawlak and Ewa Orłowska. Our main idea is to regard multivalued information systems as the semantics or meaning of the original tables. This idea allows us to describe and combine classical rough set theory, dominance-based rough set approach, and formal concept analysis within the single framework of multivalued information systems, which is rich enough to cope with a number of semantical nuances that may occur during the process of data analysis.

    Updated: 2020-01-09
  • A multiple attribute decision making three-way model for intuitionistic fuzzy numbers
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2020-01-09
    Peide Liu; Yumei Wang; Fan Jia; Hamido Fujita

    In order to use three-way decision (TWD) to solve multiple attribute decision making (MADM) problems, in this article, a new TWD model with intuitionistic fuzzy numbers (IFNs) is proposed. First of all, we define the relative loss functions to demonstrate some features of loss functions in TWDs, which is the basis for future research. Then, based on the correlation between the loss functions and the IFNs, we get the relative loss functions based on IFNs. At the same time, the classification rules of the TWDs are discussed from different viewpoints, including the thresholds and their properties. Aiming at MADM problems with unreasonable values, a new integrated method of relative loss functions is established to obtain a fairer loss integration result of alternatives. In addition, considering that there are no decision attributes and only condition attributes in MADM, we use grey relational degree to calculate the condition probability. In the end, a novel TWD model is proposed to solve MADM problems with IFNs, and a practical example on selecting suppliers is used to demonstrate its effectiveness and practicability.
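    For context, in classical decision-theoretic three-way decisions the acceptance and rejection thresholds follow from the loss functions as shown below (the standard formulas; the paper's relative-loss and intuitionistic-fuzzy constructions refine this scheme):

    ```latex
    % lambda_{PP}, lambda_{BP}, lambda_{NP}: losses for accept / defer / reject when the object belongs to the concept;
    % lambda_{PN}, lambda_{BN}, lambda_{NN}: the corresponding losses when it does not.
    \alpha = \frac{\lambda_{PN}-\lambda_{BN}}{(\lambda_{PN}-\lambda_{BN})+(\lambda_{BP}-\lambda_{PP})},
    \qquad
    \beta  = \frac{\lambda_{BN}-\lambda_{NN}}{(\lambda_{BN}-\lambda_{NN})+(\lambda_{NP}-\lambda_{BP})}.
    ```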

    Updated: 2020-01-09
  • Belief function of Pythagorean fuzzy rough approximation space and its applications
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2020-01-08
    Shao-Pu Zhang; Pin Sun; Ju-Sheng Mi; Tao Feng

    Rough set theory and evidence theory are two approaches to handling decision making and reduction problems with imprecise and uncertain knowledge. The work is motivated by the fact that Pythagorean fuzzy sets excel at describing situations in which the sum of the membership and non-membership degrees is greater than 1, and may therefore have wider applications than intuitionistic fuzzy sets. In this paper we study probability measures of Pythagorean fuzzy sets and the belief structure of Pythagorean fuzzy information systems based on rough set theory, and discuss the reduction of Pythagorean fuzzy information systems. First, we review the properties of Pythagorean fuzzy sets and of the upper and lower Pythagorean fuzzy rough approximation operators on the level sets. Using these properties, a probability measure of Pythagorean fuzzy sets is constructed, and the belief and plausibility functions are studied by means of the Pythagorean fuzzy rough upper and lower approximation operators. Finally, we apply the belief function to construct an attribute reduction algorithm, and an example is employed to illustrate the feasibility and validity of the algorithm.
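    The constraint that distinguishes Pythagorean fuzzy sets from intuitionistic fuzzy sets, alluded to in the abstract, is:

    ```latex
    % Intuitionistic fuzzy set: membership \mu and non-membership \nu satisfy
    \mu(x) + \nu(x) \le 1 ;
    % Pythagorean fuzzy set: only the squares are bounded, so e.g. (\mu,\nu) = (0.8, 0.6) is allowed
    \mu(x)^2 + \nu(x)^2 \le 1 .
    ```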

    Updated: 2020-01-09
  • Modeling agent's conditional preferences under objective ambiguity in Dempster-Shafer theory
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2020-01-08
    Davide Petturiti; Barbara Vantaggi

    We manage decisions under “objective” ambiguity by considering generalized Anscombe-Aumann acts, mapping states of the world to generalized lotteries on a set of consequences. A generalized lottery is modeled through a belief function on consequences, interpreted as a partially specified randomizing device. Preference relations on these acts are given by a decision maker focusing on different scenarios (conditioning events). We provide a system of axioms which are necessary and sufficient for the representability of these “conditional preferences” through a conditional functional parametrized by a unique full conditional probability P on the algebra of events and a cardinal utility function u on consequences. The model is able to manage also “unexpected” (i.e., “null”) conditioning events and distinguishes between a systematically pessimistic or optimistic behavior, either referring to “objective” belief functions or their dual plausibility functions. Finally, an elicitation procedure is provided, reducing to a Quadratically Constrained Linear Program (QCLP).

    Updated: 2020-01-09
  • The dynamic update method of attribute-induced three-way granular concept in formal contexts
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2020-01-07
    Binghan Long; Weihua Xu; Xiaoyan Zhang; Lei Yang

    Granular computing is a very active research field that has received extensive attention in recent years. It helps us to analyze and solve problems by dividing complex problems into several simpler ones. The three-way granular concept is an important notion obtained by combining granular computing, formal concept analysis, and three-way decision. With traditional updating methods for three-way granular concepts, considerable time and space resources are needed when multiple attributes or objects are deleted from a formal context. In order to improve the efficiency and flexibility of obtaining three-way concepts, this paper discusses a novel dynamic update method for three-way granular concepts. We first introduce the necessary background on three-way granular concepts. Secondly, the update rules for the extension and intension of attribute-induced three-way granular concepts are discussed in the dynamic formal context in order to construct three-way granular concepts. Moreover, we develop a method for establishing attribute-induced three-way granular concepts under dynamic changes in which multiple objects and attributes are deleted from the formal context. Furthermore, we design four algorithms to compare the proposed approaches with traditional updating methods for three-way granular concepts. Finally, the validity of the dynamic update method for attribute-induced three-way granular concepts is verified through an experimental evaluation using six datasets from the University of California-Irvine (UCI) repository.

    Updated: 2020-01-07
  • Multilevel surrogate modeling approach for optimization problems with polymorphic uncertain parameters
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2020-01-06
    Steffen Freitag; Philipp Edler; Katharina Kremer; Günther Meschke

    The solution of optimization problems with polymorphic uncertain data requires combining stochastic and non-stochastic approaches. The concept of uncertain a priori parameters and uncertain design parameters quantified by random variables and intervals is presented in this paper. Multiple runs of the nonlinear finite element model solving the structural mechanics with varying a priori and design parameters are needed to obtain a solution by means of iterative optimization algorithms (e.g. particle swarm optimization). The combination of interval analysis and Monte Carlo simulation is required for each design to be optimized. This can only be realized by substituting the nonlinear finite element model by numerically efficient surrogate models. In this paper, a multilevel strategy for neural network based surrogate modeling is presented. The deterministic finite element simulation, the stochastic analysis as well as the interval analysis are approximated by sequentially trained artificial neural networks. The approach is verified and applied to optimize the concrete cover of a reinforced concrete structure, taking the variability of material parameters and the structural load as well as construction imprecision into account.
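    A minimal sketch of the surrogate-in-the-loop idea (not the authors' multilevel neural-network setup): a cheap surrogate stands in for the finite element model inside a combined Monte Carlo / interval evaluation of one candidate design. The surrogate, distribution, and interval below are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def surrogate(design, random_param, interval_param):
        """Placeholder surrogate for an expensive nonlinear finite element response."""
        return 0.5 * design + 0.3 * random_param + 0.2 * interval_param

    def evaluate_design(design, n_mc=1000):
        """Monte Carlo loop over the stochastic a-priori parameter with an inner
        worst-case (interval) evaluation of the epistemic parameter."""
        random_samples = rng.normal(loc=1.0, scale=0.2, size=n_mc)
        interval_bounds = (0.8, 1.2)  # the surrogate is monotone, so checking the bounds suffices
        worst_case = [max(surrogate(design, r, b) for b in interval_bounds)
                      for r in random_samples]
        return float(np.mean(worst_case))  # objective handed to the optimizer (e.g. PSO)

    print(evaluate_design(design=2.0))
    ```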

    Updated: 2020-01-06
  • Discernibility matrix based incremental feature selection on fused decision tables
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2019-12-03
    Ye Liu; Lidi Zheng; Yeliang Xiu; Hong Yin; Suyun Zhao; Xizhao Wang; Hong Chen; Cuiping Li

    In rough set philosophy, each set of data can be seen as a fuzzy decision table. Since decision tables grow dynamically in time and space, they are integrated into a new one called a fused decision table. In this paper, we focus on designing an incremental feature selection method on fused decision tables by optimizing the space needed to store the discernibility matrix. The discernibility matrix is a well-known way of measuring discernibility information in rough set theory. This paper applies quasi/pseudo values of the discernibility matrix, rather than its true values, to design an incremental mechanism. Unlike discernibility matrix based non-incremental algorithms, the improved algorithm need not keep the whole discernibility matrix in main memory, which is desirable for large data sets. More importantly, as decision tables grow, the discernibility matrix based feature selection algorithm can constrain the computational cost by applying efficient information updating techniques—quasi/pseudo approximation operators. Finally, our experiments reveal that the proposed algorithm needs less computational cost, and especially less space, at the price of only a limited loss in accuracy.
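    For readers unfamiliar with discernibility matrices, the sketch below builds the classical (non-incremental) matrix of a small decision table: the entry for a pair of objects with different decisions is the set of condition attributes on which they differ. The toy table is invented; the paper's quasi/pseudo variants change how this information is stored and updated.

    ```python
    from itertools import combinations

    # Toy decision table: each row is (condition attribute values, decision).
    objects = [
        ((0, 1, 0), 'yes'),
        ((1, 1, 0), 'no'),
        ((0, 0, 1), 'no'),
    ]
    attributes = ['a1', 'a2', 'a3']

    def discernibility_matrix(objs):
        """Entry (i, j): attributes distinguishing objects i and j, recorded only
        when their decision values differ."""
        matrix = {}
        for (i, (xi, di)), (j, (xj, dj)) in combinations(enumerate(objs), 2):
            if di != dj:
                matrix[(i, j)] = {attributes[k] for k in range(len(attributes)) if xi[k] != xj[k]}
        return matrix

    print(discernibility_matrix(objects))  # {(0, 1): {'a1'}, (0, 2): {'a2', 'a3'}}
    ```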

    Updated: 2020-01-04
  • Constructing copulas from shock models with imprecise distributions
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2019-11-26
    Matjaž Omladič; Damjan Škulj

    The omnipotence of copulas for modeling dependence given marginal distributions in a multivariate stochastic situation is assured by Sklar's theorem. Montes et al. (2015) suggest the notion of what they call an imprecise copula, which brings some of this power to the imprecise setting in the bivariate case. When there is imprecision about the marginals, one can model the available information by means of p-boxes, i.e. pairs of ordered distribution functions. By analogy, they introduce pairs of bivariate functions satisfying certain conditions. In this paper we introduce imprecise versions of some classes of copulas emerging from shock models that are important in applications. The pairs of functions obtained in this way are not only imprecise copulas but satisfy an even stronger condition; the fact that this condition really is stronger is shown in Omladič and Stopar (2019), which raises the importance of our results. The main technical difficulty in developing our imprecise copulas lies in introducing an appropriate stochastic order on these bivariate objects.
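    Sklar's theorem, invoked at the beginning of the abstract, states that every bivariate joint distribution factors through a copula applied to its marginals:

    ```latex
    % H is the joint distribution function, F and G its marginals, and C a copula
    % (unique on Ran F x Ran G, hence unique whenever F and G are continuous).
    H(x, y) \;=\; C\bigl(F(x),\, G(y)\bigr).
    ```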

    Updated: 2020-01-04
  • Variance based three-way clustering approaches for handling overlapping clustering
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2019-12-03
    Mohammad Khan Afridi; Nouman Azam; JingTao Yao

    Conventional clustering approaches are not very effective in dealing with clusters that have overlapping regions. Three-way clustering (3WC) is an effective and promising approach in this regard. A key issue in 3WC is the determination of thresholds, which play a crucial role in accurately estimating the overlapping region. In this article, we propose different variance based criteria for determining the thresholds. In particular, we examine the variance, or spread, of the evaluation function values of the objects contained in the three regions obtained with 3WC. An algorithm called 3WC-OR is introduced that determines effective thresholds by optimizing the proposed criteria with approaches such as genetic algorithms and game-theoretic rough sets. Experimental results on five UCI datasets indicate that the proposed algorithm significantly improves results on datasets with overlapping clusters and provides comparable results on datasets with non-overlapping clusters.

    Updated: 2020-01-04
  • Finding strongly connected components of simple digraphs based on granulation strategy
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2019-12-09
    Taihua Xu; Guoyin Wang; Jie Yang

    Strongly connected components (SCCs) are an important kind of subgraph in digraphs and can be viewed as a kind of knowledge from the viewpoint of knowledge discovery. In our previous work, a knowledge discovery algorithm called RSCC was proposed for finding SCCs of simple digraphs based on two operators of rough set theory (RST), the k-step R-related set and the k-step upper approximation. The RSCC algorithm can find SCCs more efficiently than Tarjan's algorithm of linear complexity. However, on the one hand, the theoretical relationships between RST and graph theory investigated in previous work, which form the basis for applying RST to SCC discovery in digraphs, only include four equivalences between fundamental RST and graph concepts related to SCCs; the reasonability of using the two RST operators to find SCCs still needs to be investigated. On the other hand, after analyzing SCCs with three RST concepts (the R-related set and the lower and upper approximation sets), we find three SCC correlations between vertices. The RSCC algorithm ignores these correlations, which negatively affects its efficiency. To address these two issues, we first explore the equivalence between the two RST operators and Breadth-First Search (BFS), one of the most basic graph search algorithms and the most direct way to find SCCs. These equivalences explain the reasonability of using the two RST operators to find SCCs and enrich the theoretical relationships between RST and graph theory. Secondly, we design a granulation strategy according to the three SCC correlations, and then propose an algorithm called GRSCC for finding SCCs of simple digraphs based on this granulation strategy. Experimental results show that GRSCC performs better than RSCC.
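    As a baseline illustration of the BFS view discussed in the abstract (not the GRSCC algorithm itself), the SCC containing a vertex is the intersection of the vertices reachable from it and the vertices that can reach it; the small digraph below is made up.

    ```python
    from collections import deque

    def reachable(graph, source):
        """Vertices reachable from source by breadth-first search."""
        seen, queue = {source}, deque([source])
        while queue:
            u = queue.popleft()
            for v in graph.get(u, []):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return seen

    def scc_of(graph, v):
        reverse = {}
        for u, nbrs in graph.items():
            for w in nbrs:
                reverse.setdefault(w, []).append(u)
        return reachable(graph, v) & reachable(reverse, v)

    g = {1: [2], 2: [3], 3: [1, 4], 4: []}
    print(scc_of(g, 1))  # {1, 2, 3}
    ```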

    Updated: 2020-01-04
  • New transformations of aggregation functions based on monotone systems of functions
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2019-12-16
    LeSheng Jin; Radko Mesiar; Martin Kalina; Jana Špirková; Surajit Borkotokey

    The paper introduces a Generalized-Convex-Sum-Transformation of aggregation functions. It has the form of a transformation of aggregation functions by monotone systems of functions. A special case of the proposed Generalized-Convex-Sum-Transformation is the well-known ⁎-product, also called the Darsow product of copulas. Similarly, our approach covers Choquet integrals with respect to capacities induced by the considered aggregation function. The paper offers basic definitions and some properties of the mentioned transformation. Various examples illustrating the transformation are presented. The paper also gives two alternative transformations of aggregation functions under which the dimension of the transformed aggregation functions is higher than that of the original one. Interestingly, if a copula is transformed, under some conditions put on the monotone systems of functions, the transformed aggregation function is again a copula.
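    The ⁎-product (Darsow product) of two copulas A and B, mentioned as a special case of the proposed transformation, is defined by:

    ```latex
    % D_2 A is the partial derivative of A with respect to its second argument,
    % D_1 B the partial derivative of B with respect to its first argument.
    (A \ast B)(x, y) \;=\; \int_0^1 D_2 A(x, t)\, D_1 B(t, y)\, \mathrm{d}t .
    ```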

    Updated: 2020-01-04
  • A new rule reduction and training method for extended belief rule base based on DBSCAN algorithm
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2020-01-03
    An Zhang; Fei Gao; Mi Yang; Wenhao Bi

    Rule reduction is one of the focuses of research on belief-rule-based systems; in some cases, too many redundant rules may be a concern for such systems. Although rule reduction methods have been widely used in belief-rule-based systems, the extended belief-rule-based system, an expansion of the belief-rule-based system, still lacks methods to reduce and train the rules in the extended belief rule base (EBRB). To this end, this paper proposes an EBRB reduction and training method. Based on the density-based spatial clustering of applications with noise (DBSCAN) algorithm, a new EBRB reduction method is proposed in which all rules in the EBRB are visited and rules within the fusion-threshold distance of one another are fused. Moreover, an EBRB training method based on parameter learning, which uses a set of training data to train the parameters of the EBRB, is also proposed to improve the accuracy of the EBRB system. Two case studies, one on regression and one on classification, are used to illustrate the feasibility and efficiency of the proposed EBRB reduction and training method. Comparison results show that the proposed method can effectively downsize the EBRB and increase the accuracy of the EBRB system.
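    A rough sketch of the clustering step (the fusion rule is a simplifying assumption, not the paper's exact procedure): rule antecedents are clustered with scikit-learn's DBSCAN, and rules that share a cluster are merged by averaging.

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    # Each row: the antecedent (attribute reference values) of one extended belief rule.
    antecedents = np.array([
        [0.10, 0.90],
        [0.12, 0.88],
        [0.50, 0.40],
        [0.52, 0.41],
        [0.95, 0.05],
    ])

    labels = DBSCAN(eps=0.05, min_samples=1).fit_predict(antecedents)

    # Fuse rules with the same cluster label by averaging their antecedents
    # (a real EBRB fusion would also merge belief degrees and rule weights).
    fused = [antecedents[labels == c].mean(axis=0) for c in np.unique(labels)]
    print(labels)  # [0 0 1 1 2] -> five rules reduced to three
    ```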

    Updated: 2020-01-04
  • Testing the degree of overlap for the expected value of random intervals
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2020-01-03
    Ana Belén Ramos-Guajardo; Gil González-Rodríguez; Ana Colubi

    Some hypothesis tests for analyzing the degree of overlap between the expected values of random intervals are provided. For this purpose, a suitable measure to quantify the degree of overlap between intervals is considered, based on the Szymkiewicz-Simpson coefficient defined for general sets. It can be seen as a kind of likeness index measuring the mutual information between two intervals. On the one hand, an estimator of the proposed degree of overlap between intervals is provided and its strong consistency is analyzed. On the other hand, two tests are proposed in this framework: a one-sample test to examine the degree of overlap between the expected value of a random interval and a given interval, and a two-sample test to check the degree of overlap between the expected values of two random intervals. To solve such hypothesis tests, two statistics are suggested and their limit distributions are studied by considering both asymptotic and bootstrap techniques. Their power is also explored by means of local alternatives. In addition, some simulation studies are carried out to investigate the behavior of the proposed approaches. Finally, the performance of the tests is also reported in a real-life application.
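    The Szymkiewicz-Simpson (overlap) coefficient on which the tests are built is, for general sets and, in particular, for two non-degenerate compact intervals (with Lebesgue measure λ):

    ```latex
    \mathrm{ov}(A, B) \;=\; \frac{|A \cap B|}{\min\{|A|,\, |B|\}},
    \qquad
    \mathrm{ov}\bigl([a_1,a_2],[b_1,b_2]\bigr)
    \;=\; \frac{\lambda\bigl([a_1,a_2]\cap[b_1,b_2]\bigr)}{\min\{a_2-a_1,\; b_2-b_1\}} .
    ```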

    Updated: 2020-01-04
  • Extending characteristic relations on an incomplete data set by the three-way decision theory
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2020-01-02
    Yingxiao Chen; Ping Zhu

    The methods of mining incomplete data based on characteristic sets or characteristic relation have been intensively studied in recent years. With the development of related research, many modifications of the definition of characteristic relation have been proposed. However, few of them can be used for decision rule induction due to the so-called definability problem. In this paper, by using the wide sense of the three-way decision theory, we extend the notions of characteristic relation and characteristic set to the systems with four types of characteristic relations and characteristic sets, respectively. Then we study the probabilistic approximations based on the extended characteristic set system. Moreover, we extend the maximal characteristic neighborhood system into four types of maximal characteristic neighborhood systems, and investigate the probabilistic approximations based on them. Finally, we generalize the definition of local definability for incomplete data processing. In particular, several existing methods based on characteristic sets are integrated into our model, and the new types of characteristic relations proposed by us seem more practical than the classical one in some aspects. In this way, our model illustrates the effectiveness and comprehensiveness of thinking in threes.

    Updated: 2020-01-04
  • Effectiveness assessment of Cyber-Physical Systems
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2019-12-19
    Gérald Rocher; Jean-Yves Tigli; Stéphane Lavirotte; Nhan Le Thanh

    By achieving their purposes through interactions with the physical world, Cyber-Physical Systems (CPS) pose new challenges in terms of dependability. Indeed, the evolution of the physical systems they control with transducers can be affected by surrounding physical processes over which they have no control and which may potentially hamper the achievement of their purposes. While it is illusory to hope for a comprehensive model of the physical environment at design time to anticipate and remove faults that may occur once these systems are deployed, it becomes necessary to evaluate their degree of effectiveness in vivo. In this paper, the degree of effectiveness is formally defined and generalized in the context of measure theory. The measure is developed in the context of the Transferable Belief Model (TBM), an elaboration on the Dempster-Shafer Theory (DST) of evidence, so as to handle the epistemic and aleatory uncertainties pertaining, respectively, to the users' expectations and to the natural variability of the physical environment. The TBM is used in conjunction with an Input/Output Hidden Markov Modeling framework, denoted Ev-IOHMM, to specify the expected evolution of the physical system controlled by the CPS and the tolerances towards uncertainties. The measure of effectiveness is then obtained from the forward algorithm, leveraging the conflict entailed by the successive combinations of the beliefs obtained from observations of the physical system and the beliefs corresponding to its expected evolution. The proposed approach is applied to autonomous vehicles and shows how the degree of effectiveness can be used for benchmarking their controllers relative to highway code speed limitations and passengers' well-being constraints, both modeled through an Ev-IOHMM.

    Updated: 2020-01-04
  • A correctness result for synthesizing plans with loops in stochastic domains
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2019-12-19
    Laszlo Treszkai; Vaishak Belle

    Finite-state controllers (FSCs), such as plans with loops, are powerful and compact representations of action selection widely used in robotics, video games and logistics. There has been steady progress on synthesizing FSCs in deterministic environments, but the algorithmic machinery needed for lifting such techniques to stochastic environments is not yet fully understood. While the derivation of FSCs has received some attention in the context of discounted expected reward measures, they are often solved approximately and/or without correctness guarantees. In essence, that makes it difficult to analyze fundamental concerns such as: do all paths terminate, and do the majority of paths reach a goal state? In this paper, we present new theoretical results on a generic technique for synthesizing FSCs in stochastic environments, allowing for highly granular specifications on termination and goal satisfaction.

    Updated: 2020-01-04
  • Incremental concept-cognitive learning based on attribute topology
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2019-12-19
    Tao Zhang; He-he Li; Meng-qi Liu; Mei Rong

    Incremental learning is an alternative approach for maintaining knowledge by utilizing previous computational results in dynamic data contexts. As a new and important part of incremental learning, incremental concept-cognitive learning (CCL) is an emerging field concerned with the evolution of object or attribute sets and with dynamic knowledge processing in dynamic big data. However, existing incremental CCL algorithms still face challenges: the generalization ability of new concepts should be improved, and previously acquired knowledge should be utilized efficiently to reduce the computational complexity of the algorithm. At the same time, formal concept analysis has become a potential direction for cognitive computing, since it can describe the processes of CCL. Attribute topology (AT), a new representation of formal concepts, can clearly display the relationship between new data and original data and thereby reduce the complexity of the CCL process; therefore, we present an incremental concept-cognitive algorithm based on AT for incremental concept calculation, whose result is expressed by a concept tree. First, a relationship between the new object and some of the original objects is established. Then, on the basis of this relationship, we propose an algorithm for updating the concepts and presenting them through a concept tree; the algorithm determines the position and subtree of the new object from the relationship between the object and the original objects. Finally, an example demonstrates that the concept update algorithm based on AT is feasible and effective, and that different orders of increments result in different concept tree structures.

    Updated: 2020-01-04
  • The linear algebra of pairwise comparisons
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2019-12-18
    Michele Fedrizzi; Matteo Brunelli; Alexandra Caprila

    In this paper, we start from the premise that pairwise comparisons between alternatives can be modeled by means of the additive representation of preferences. In this setting we study some algebraic properties of three sets: the set of pairwise comparison matrices, its subset of consistent ones and the orthogonal complement of the latter. The three sets are all vector spaces and we propose and interpret simple bases for each one. We prove that a convenient inner product can be found in the three cases such that the corresponding basis is orthonormal with respect to the considered inner product. In addition (i) we prove that the well-known method of the logarithmic least squares used to estimate the weight vector can be reinterpreted by referring to a basis for the set of consistent preferences and (ii) we interpret a transformation recently proposed by Csató.
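    Under the additive representation of preferences assumed in the paper, consistency and the least-squares weight estimate take the form below; applying the same fit to the logarithms of a multiplicative comparison matrix gives the logarithmic least squares method mentioned in the abstract.

    ```latex
    % Additive consistency and the least-squares estimate of the weight (priority) vector w.
    a_{ik} = a_{ij} + a_{jk} \ \ \forall i,j,k
    \quad\Longleftrightarrow\quad
    a_{ij} = w_i - w_j \ \text{ for some } w ,
    \qquad
    \hat{w} \in \arg\min_{w} \sum_{i<j} \bigl(a_{ij} - (w_i - w_j)\bigr)^2 .
    ```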

    Updated: 2020-01-04
  • Indices for rough set approximation and the application to confusion matrices
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2019-12-18
    Ivo Düntsch; Günther Gediga

    Confusion matrices and their associated statistics are a well established tool in machine learning to evaluate the accuracy of a classifier. In the present study, we define a rough confusion matrix based on a very general classifier, and derive various statistics from it which are related to common rough set estimators. In other words, we perform a rough set–like analysis on a confusion matrix, which is the converse of the usual procedure; in particular, we consider upper approximations. A suitable index for measuring the tightness of the upper bound uses a ratio of odds. Odds ratios offer a symmetric interpretation of lower and upper precision, and remove the bias in the upper approximation. We investigate rough odds ratios of the parameters obtained from the confusion matrix; to guard against undue random influences, we also approximate their standard errors.
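    For a 2x2 confusion matrix with cells TP, FP, FN, TN, the sample odds ratio and the usual standard error of its logarithm (which the rough indices build on and adapt to lower and upper approximations) are:

    ```latex
    \mathrm{OR} \;=\; \frac{TP \cdot TN}{FP \cdot FN},
    \qquad
    \operatorname{se}\bigl(\ln \mathrm{OR}\bigr)
    \;=\; \sqrt{\tfrac{1}{TP} + \tfrac{1}{FP} + \tfrac{1}{FN} + \tfrac{1}{TN}} .
    ```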

    Updated: 2020-01-04
  • On the construction of uninorms by paving
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2019-12-18
    Wenwen Zong; Yong Su; Hua-Wen Liu; Bernard De Baets

    Inspired by the paving construction method, we construct a new operation on the unit interval from a given operation defined on the unit interval and a discrete operation defined on an index set. In particular, we construct a new t-norm from a t-norm on the unit interval and a discrete t-norm; a t-conorm from a t-norm and a discrete t-superconorm; a proper uninorm from a t-norm and a discrete t-conorm and a uninorm from a t-norm and a discrete uninorm. Dually, we also construct some uninorms (including t-norms and t-conorms) from a t-conorm and a discrete t-conorm, t-subnorm, t-norm or uninorm.

    Updated: 2020-01-04
  • Probabilistic abstract argumentation frameworks, a possible world view
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2019-12-18
    Theofrastos Mantadelis; Stefano Bistarelli

    After Dung's founding work on Abstract Argumentation Frameworks, there has been growing interest in extending Dung's semantics in order to describe more complex or real-life situations. Several of these approaches take the direction of weighted or probabilistic extensions. One of the most prominent probabilistic approaches is the constellation approach to Probabilistic Abstract Argumentation Frameworks. In this paper, we first make the connection between possible worlds and the constellation semantics; we then introduce the probabilistic attack normal form for the constellation semantics; we furthermore prove that the probabilistic attack normal form is sufficient to represent any Probabilistic Abstract Argumentation Framework under the constellation semantics; we then illustrate its connection with Probabilistic Logic Programming and briefly present an existing implementation. The paper also discusses the probabilistic argument normal form for the constellation semantics and proves its equivalent properties. Finally, the paper introduces a new probabilistic structure for the constellation semantics, namely probabilistic cliques.
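    A small illustration of the possible-world reading of the constellation semantics (assuming, for simplicity, independent probabilistic attacks and the grounded semantics; the framework is invented): enumerate the induced frameworks and sum the probabilities of the worlds in which an argument is accepted.

    ```python
    from itertools import product

    arguments = {'a', 'b', 'c'}
    # Probabilistic attacks: (attacker, target) -> probability of being present.
    attacks = {('b', 'a'): 0.6, ('c', 'b'): 0.5}

    def grounded(args, atts):
        """Grounded extension: repeatedly accept unattacked arguments and discard
        everything they attack."""
        accepted, remaining = set(), set(args)
        atts = set(atts)
        while True:
            unattacked = {x for x in remaining if all(t != x for _, t in atts)}
            if not unattacked:
                return accepted
            accepted |= unattacked
            attacked = {t for s, t in atts if s in unattacked}
            remaining -= unattacked | attacked
            atts = {(s, t) for s, t in atts if s in remaining and t in remaining}

    def prob_accepted(argument):
        total = 0.0
        for world in product([True, False], repeat=len(attacks)):
            p, present = 1.0, set()
            for (edge, prob), keep in zip(attacks.items(), world):
                p *= prob if keep else 1.0 - prob
                if keep:
                    present.add(edge)
            if argument in grounded(arguments, present):
                total += p
        return total

    print(prob_accepted('a'))  # 0.7: 'a' is out only when b->a is present and c->b is absent
    ```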

    Updated: 2020-01-04
  • Complexity results for probabilistic answer set programming
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2019-12-16
    Denis Deratani Mauá; Fabio Gagliardi Cozman

    We analyze the computational complexity of probabilistic logic programming with constraints, disjunctive heads, and aggregates such as sum and max. We consider propositional programs and relational programs with bounded-arity predicates, and look at cautious reasoning (i.e., computing the smallest probability of an atom over all probability models), cautious explanation (i.e., finding an interpretation that maximizes the lower probability of evidence) and cautious maximum-a-posteriori (i.e., finding a partial interpretation for a set of atoms that maximizes their lower probability conditional on evidence) under Lukasiewicz's credal semantics.

    Updated: 2020-01-04
  • A Bayesian Network Interpretation of the Cox's Proportional Hazard Model.
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2019-05-28
    Jidapa Kraisangka; Marek J. Druzdzel

    Cox's proportional hazards (CPH) model is quite likely the most popular modeling technique in survival analysis. While the CPH model is able to represent a relationship between a collection of risks and their common effect, Bayesian networks have become an attractive alternative with an increased modeling power and far broader applications. Our paper focuses on a Bayesian network interpretation of the CPH model (BN-Cox). We provide a method of encoding knowledge from existing CPH models in the process of knowledge engineering for Bayesian networks. This is important because in practice we often have CPH models available in the literature and no access to the original data from which they have been derived. We compare the accuracy of the resulting BN-Cox model to the original CPH model, Kaplan-Meier estimate, and Bayesian networks learned from data, including Naive Bayes, Tree Augmented Naive Bayes, Noisy-Max, and parameter learning by means of the EM algorithm. BN-Cox model came out as the most accurate of all BN approaches and very close to the original CPH model. We study two approaches for simplifying the BN-Cox model for the sake of representational and computational efficiency: (1) parent divorcing and (2) removing less important risk factors. We show that removing less important risk factors leads to smaller loss of accuracy.
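    The CPH model that BN-Cox re-encodes has the familiar form, with baseline hazard h_0 (baseline survival S_0) and covariate vector x:

    ```latex
    h(t \mid \mathbf{x}) \;=\; h_0(t)\, \exp\bigl(\beta_1 x_1 + \dots + \beta_k x_k\bigr),
    \qquad
    S(t \mid \mathbf{x}) \;=\; S_0(t)^{\exp(\boldsymbol{\beta}^{\top}\mathbf{x})} .
    ```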

    Updated: 2019-11-01
  • Machine learning-based receiver operating characteristic (ROC) curves for crisp and fuzzy classification of DNA microarrays in cancer research.
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2008-12-17
    Leif E. Peterson; Matthew A. Coleman

    Receiver operating characteristic (ROC) curves were generated to obtain classification area under the curve (AUC) as a function of feature standardization, fuzzification, and sample size from nine large sets of cancer-related DNA microarrays. Classifiers used included k nearest neighbor (kNN), näive Bayes classifier (NBC), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), learning vector quantization (LVQ1), logistic regression (LOG), polytomous logistic regression (PLOG), artificial neural networks (ANN), particle swarm optimization (PSO), constricted particle swarm optimization (CPSO), kernel regression (RBF), radial basis function networks (RBFN), gradient descent support vector machines (SVMGD), and least squares support vector machines (SVMLS). For each data set, AUC was determined for a number of combinations of sample size, total sum[-log(p)] of feature t-tests, with and without feature standardization and with (fuzzy) and without (crisp) fuzzification of features. Altogether, a total of 2,123,530 classification runs were made. At the greatest level of sample size, ANN resulted in a fitted AUC of 90%, while PSO resulted in the lowest fitted AUC of 72.1%. AUC values derived from 4NN were the most dependent on sample size, while PSO was the least. ANN depended the most on total statistical significance of features used based on sum[-log(p)], whereas PSO was the least dependent. Standardization of features increased AUC by 8.1% for PSO and -0.2% for QDA, while fuzzification increased AUC by 9.4% for PSO and reduced AUC by 3.8% for QDA. AUC determination in planned microarray experiments without standardization and fuzzification of features will benefit the most if CPSO is used for lower levels of feature significance (i.e., sum[-log(p)] ~ 50) and ANN is used for greater levels of significance (i.e., sum[-log(p)] ~ 500). When only standardization of features is performed, studies are likely to benefit most by using CPSO for low levels of feature statistical significance and LVQ1 for greater levels of significance. Studies involving only fuzzification of features should employ LVQ1 because of the substantial gain in AUC observed and low expense of LVQ1. Lastly, PSO resulted in significantly greater levels of AUC (89.5% average) when feature standardization and fuzzification were performed. In consideration of the data sets used and factors influencing AUC which were investigated, if low-expense computation is desired then LVQ1 is recommended. However, if computational expense is of less concern, then PSO or CPSO is recommended.
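    To reproduce the basic ingredient of the study, the sketch below computes ROC/AUC for one train/test split with scikit-learn; the synthetic data, the 4NN classifier, and all settings are placeholders rather than the paper's microarray setup.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import roc_curve, roc_auc_score

    # Synthetic stand-in for a two-class gene expression data set.
    X, y = make_classification(n_samples=200, n_features=50, n_informative=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = KNeighborsClassifier(n_neighbors=4).fit(X_train, y_train)  # a kNN (here 4NN) classifier
    scores = clf.predict_proba(X_test)[:, 1]                         # class-1 membership scores

    fpr, tpr, _ = roc_curve(y_test, scores)
    print("AUC =", roc_auc_score(y_test, scores))
    ```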

    Updated: 2019-11-01
  • Bayesian network models in brain functional connectivity analysis.
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2013-12-10
    Jaime S. Ide; Sheng Zhang; Chiang-Shan R. Li

    Much effort has been made to better understand the complex integration of distinct parts of the human brain using functional magnetic resonance imaging (fMRI). Altered functional connectivity between brain regions is associated with many neurological and mental illnesses, such as Alzheimer's and Parkinson's diseases, addiction, and depression. In computational science, Bayesian networks (BN) have been used in a broad range of studies to model complex data sets in the presence of uncertainty and when expert prior knowledge is needed. However, little has been done to explore the use of BNs in connectivity analysis of fMRI data. In this paper, we present an up-to-date literature review and methodological details of connectivity analyses using BNs, while highlighting caveats in a real-world application. We present a BN model of an fMRI dataset obtained from sixty healthy subjects performing the stop-signal task (SST), a paradigm widely used to investigate response inhibition. Connectivity results are validated against the extant literature, including our previous studies. By exploring the link strengths of the learned BNs and correlating them to behavioral performance measures, this novel use of BNs in connectivity analysis provides new insights into the functional neural pathways underlying response inhibition.

    Updated: 2019-11-01
  • A Constraint Optimization Approach to Causal Discovery from Subsampled Time Series Data.
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2018-05-15
    Antti Hyttinen; Sergey Plis; Matti Järvisalo; Frederick Eberhardt; David Danks

    We consider causal structure estimation from time series data in which measurements are obtained at a coarser timescale than the causal timescale of the underlying system. Previous work has shown that such subsampling can lead to significant errors about the system's causal structure if not properly taken into account. In this paper, we first consider the search for system timescale causal structures that correspond to a given measurement timescale structure. We provide a constraint satisfaction procedure whose computational performance is several orders of magnitude better than previous approaches. We then consider finite-sample data as input, and propose the first constraint optimization approach for recovering system timescale causal structure. This algorithm optimally recovers from possible conflicts due to statistical errors. We then apply the method to real-world data, investigate the robustness and scalability of our method, consider further approaches to reduce underdetermination in the output, and perform an extensive comparison between different solvers on this inference problem. Overall, these advances build towards a full understanding of non-parametric estimation of system timescale causal structures from sub-sampled time series data.

    Updated: 2019-11-01
  • Estimating bounds on causal effects in high-dimensional and possibly confounded systems.
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2017-12-06
    Daniel Malinsky; Peter Spirtes

    We present an algorithm for estimating bounds on causal effects from observational data which combines graphical model search with simple linear regression. We assume that the underlying system can be represented by a linear structural equation model with no feedback, and we allow for the possibility of latent confounders. Under assumptions standard in the causal search literature, we use conditional independence constraints to search for an equivalence class of ancestral graphs. Then, for each model in the equivalence class, we perform the appropriate regression (using causal structure information to determine which covariates to adjust for) to estimate a set of possible causal effects. Our approach is based on the IDA procedure of Maathuis et al. (2009), which assumes that all relevant variables have been measured (i.e., no latent confounders). We generalize their work by relaxing this assumption, which is often violated in applied contexts. We validate the performance of our algorithm in simulation experiments.

    Updated: 2019-11-01
  • Particle MCMC algorithms and architectures for accelerating inference in state-space models.
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2017-04-05
    Grigorios Mingas; Leonardo Bottolo; Christos-Savvas Bouganis

    Particle Markov Chain Monte Carlo (pMCMC) is a stochastic algorithm designed to generate samples from a probability distribution, when the density of the distribution does not admit a closed form expression. pMCMC is most commonly used to sample from the Bayesian posterior distribution in State-Space Models (SSMs), a class of probabilistic models used in numerous scientific applications. Nevertheless, this task is prohibitive when dealing with complex SSMs with massive data, due to the high computational cost of pMCMC and its poor performance when the posterior exhibits multi-modality. This paper aims to address both issues by: 1) Proposing a novel pMCMC algorithm (denoted ppMCMC), which uses multiple Markov chains (instead of the one used by pMCMC) to improve sampling efficiency for multi-modal posteriors, 2) Introducing custom, parallel hardware architectures, which are tailored for pMCMC and ppMCMC. The architectures are implemented on Field Programmable Gate Arrays (FPGAs), a type of hardware accelerator with massive parallelization capabilities. The new algorithm and the two FPGA architectures are evaluated using a large-scale case study from genetics. Results indicate that ppMCMC achieves 1.96x higher sampling efficiency than pMCMC when using sequential CPU implementations. The FPGA architecture of pMCMC is 12.1x and 10.1x faster than state-of-the-art, parallel CPU and GPU implementations of pMCMC and up to 53x more energy efficient; the FPGA architecture of ppMCMC increases these speedups to 34.9x and 41.8x respectively and is 173x more power efficient, bringing previously intractable SSM-based data analyses within reach.

    Updated: 2019-11-01
  • Modeling Women's Menstrual Cycles using PICI Gates in Bayesian Network.
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2016-02-03
    Adam Zagorecki; Anna Łupińska-Dubicka; Mark Voortman; Marek J. Druzdzel

    A major difficulty in building Bayesian network (BN) models is the size of conditional probability tables, which grow exponentially in the number of parents. One way of dealing with this problem is through parametric conditional probability distributions that usually require only a number of parameters that is linear in the number of parents. In this paper, we introduce a new class of parametric models, the Probabilistic Independence of Causal Influences (PICI) models, that aim at lowering the number of parameters required to specify local probability distributions, but are still capable of efficiently modeling a variety of interactions. A subset of PICI models is decomposable and this leads to significantly faster inference as compared to models that cannot be decomposed. We present an application of the proposed method to learning dynamic BNs for modeling a woman's menstrual cycle. We show that PICI models are especially useful for parameter learning from small data sets and lead to higher parameter accuracy than when learning CPTs.
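    The best-known causal-independence gate of this kind is the noisy-OR, whose conditional distribution over binary parents x_i needs only one parameter p_i per parent; as the name suggests, the PICI family builds on and generalizes this type of construction:

    ```latex
    P\bigl(Y = 1 \mid x_1, \dots, x_n\bigr) \;=\; 1 \;-\; \prod_{i=1}^{n} \bigl(1 - p_i\bigr)^{x_i} .
    ```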

    Updated: 2019-11-01
  • Mathematical Foundations for a Theory of Confidence Structures.
    Int. J. Approx. Reason. (IF 1.982) Pub Date : 2012-10-01
    Michael Scott Balch

    This paper introduces a new mathematical object: the confidence structure. A confidence structure represents inferential uncertainty in an unknown parameter by defining a belief function whose output is commensurate with Neyman-Pearson confidence. Confidence structures on a group of input variables can be propagated through a function to obtain a valid confidence structure on the output of that function. The theory of confidence structures is created by enhancing the extant theory of confidence distributions with the mathematical generality of Dempster-Shafer evidence theory. Mathematical proofs grounded in random set theory demonstrate the operative properties of confidence structures. The result is a new theory which achieves the holistic goals of Bayesian inference while maintaining the empirical rigor of frequentist inference.

    Updated: 2019-11-01
Contents have been reproduced by permission of the publishers.