Current Journal: Neural Networks
  • Exponential and adaptive synchronization of inertial complex-valued neural networks: A non-reduced order and non-separation approach
    Neural Netw. (IF 5.785) Pub Date : 2020-01-17
    Juan Yu; Cheng Hu; Haijun Jiang; Leimin Wang

    This paper mainly deals with the problem of exponential and adaptive synchronization for a type of inertial complex-valued neural networks by directly constructing Lyapunov functionals, without utilizing the standard reduced-order transformation for inertial neural systems or the common separation approach for complex-valued systems. At first, a complex-valued feedback control scheme is designed and a nontrivial Lyapunov functional, composed of the complex-valued state variables and their derivatives, is proposed to analyze exponential synchronization. Some criteria involving multiple parameters are derived, and a feasible method is provided to determine these parameters so as to clearly show how to choose control gains in practice. In addition, an adaptive control strategy in the complex domain is developed to adjust control gains, and asymptotic synchronization is ensured by applying the method of undetermined coefficients in the construction of the Lyapunov functional and utilizing Barbalat's Lemma. Lastly, a numerical example along with simulation results is provided to support the theoretical work.

    Updated: 2020-01-17
  • Robust adaptation regularization based on within-class scatter for domain adaptation
    Neural Netw. (IF 5.785) Pub Date : 2020-01-17
    Liran Yang; Ping Zhong

    In many practical applications, the assumption that the distributions of the data employed for training and test are identical is rarely valid, which would result in a rapid decline in performance. To address this problem, the domain adaptation strategy has been developed in recent years. In this paper, we propose a novel unsupervised domain adaptation method, referred to as Robust Adaptation Regularization based on Within-Class Scatter (WCS-RAR), to simultaneously optimize the regularized loss, the within-class scatter, the joint distribution between domains, and the manifold consistency. On the one hand, to make the model robust against outliers, we adopt an l2,1-norm based loss function in virtue of its row sparsity, instead of the widely-used l2-norm based squared loss or hinge loss function to determine the residual. On the other hand, to well preserve the structure knowledge of the source data within the same class and strengthen the discriminant ability of the classifier, we incorporate the minimum within-class scatter into the process of domain adaptation. Lastly, to efficiently solve the resulting optimization problem, we extend the form of the Representer Theorem through the kernel trick, and thus derive an elegant solution for the proposed model. The extensive comparison experiments with the state-of-the-art methods on multiple benchmark data sets demonstrate the superiority of the proposed method.
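
    As a point of reference only (not the authors' released code), the following NumPy sketch shows the row-wise l2,1-norm residual loss mentioned above and why it is more robust to outlier samples than the squared l2 loss; the data are synthetic.

```python
import numpy as np

def l21_norm(residual):
    # Row-wise l2,1-norm: sum over samples of the l2 norm of each residual row.
    # An outlier row contributes linearly rather than quadratically, giving robustness.
    return np.sum(np.linalg.norm(residual, axis=1))

rng = np.random.default_rng(0)
R = rng.normal(size=(100, 5))
R[0] *= 50.0  # a single outlier sample
print("l2,1 loss :", l21_norm(R))
print("squared l2:", np.sum(R ** 2))  # dominated by the outlier
```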

    Updated: 2020-01-17
  • A 3D deep supervised densely network for small organs of human temporal bone segmentation in CT images
    Neural Netw. (IF 5.785) Pub Date : 2020-01-15
    Xiaoguang Li; Zhaopeng Gong; Hongxia Yin; Hui Zhang; Zhenchang Wang; Li Zhuo

    Computed Tomography (CT) has become an important way of examining the critical anatomical organs of the human temporal bone in the diagnosis and treatment of ear diseases. Segmentation of these critical anatomical organs is an important fundamental step for the computer-assisted analysis of human temporal bone CT images. However, it is challenging to segment sophisticated and small organs. To deal with this issue, a novel 3D Deep Supervised Densely Network (3D-DSD Net) is proposed in this paper. The network adopts a dense connection design and a 3D multi-pooling feature fusion strategy in the encoding stage of the 3D-Unet, and a 3D deep supervised mechanism is employed in the decoding stage. The experimental results show that our method achieves competitive performance in the CT data segmentation task for the small organs of the temporal bone.

    Updated: 2020-01-15
  • Training high-performance and large-scale deep neural networks with full 8-bit integers
    Neural Netw. (IF 5.785) Pub Date : 2020-01-15
    Yukuan Yang; Lei Deng; Shuang Wu; Tianyi Yan; Yuan Xie; Guoqi Li

    Deep neural network (DNN) quantization, which converts floating-point (FP) data in the network to integers (INT), is an effective way to shrink the model size for memory saving and to simplify the operations for compute acceleration. Recently, research on DNN quantization has developed from inference to training, laying a foundation for online training on accelerators. However, existing schemes that leave batch normalization (BN) untouched during training are mostly incomplete quantizations that still adopt high-precision FP in some parts of the data paths. Currently, there is no solution that uses only low-bit-width INT data during the whole training process of large-scale DNNs with acceptable accuracy. In this work, by decomposing all the computation steps in DNNs and fusing three special quantization functions to satisfy the different precision requirements, we propose a unified complete quantization framework termed "WAGEUBN" to quantize DNNs involving all data paths, including W (Weights), A (Activation), G (Gradient), E (Error), U (Update), and BN. Moreover, the Momentum optimizer is also quantized to realize a completely quantized framework. Experiments on ResNet18/34/50 models demonstrate that WAGEUBN can achieve competitive accuracy on the ImageNet dataset. For the first time, the study of quantization in large-scale DNNs is advanced to the full 8-bit INT level. In this way, all the operations in training and inference can be bit-wise operations, pushing towards faster processing speed, decreased memory cost, and higher energy efficiency. Our thorough quantization framework has great potential for future efficient portable devices with online learning ability.
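
    For orientation, a minimal sketch of uniform symmetric 8-bit integer quantization; WAGEUBN defines its own quantization functions for W, A, G, E, U and BN, so this is only an illustration of the general idea, not the paper's scheme.

```python
import numpy as np

def quantize_int8(x):
    # Uniform symmetric quantization of a float tensor to signed 8-bit integers.
    scale = np.max(np.abs(x)) / 127.0 + 1e-12
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print("max abs quantization error:", np.max(np.abs(w - dequantize(q, s))))
```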

    Updated: 2020-01-15
  • Performance boost of time-delay reservoir computing by non-resonant clock cycle
    Neural Netw. (IF 5.785) Pub Date : 2020-01-15
    Florian Stelzer; André Röhm; Kathy Lüdge; Serhiy Yanchuk

    The time-delay-based reservoir computing setup has seen tremendous success in both experiment and simulation. It allows for the construction of large neuromorphic computing systems with only a few components. However, until now the interplay of the different timescales has not been investigated thoroughly. In this manuscript, we investigate the effects of a mismatch between the time delay and the clock cycle for a general model. Typically, these two time scales are considered to be equal. Here we show that the case of equal or resonant time delay and clock cycle can be actively detrimental, leading to an increase of the approximation error of the reservoir. In particular, we show that non-resonant ratios of these time scales yield maximal memory capacities. We achieve this by translating the periodically driven delay-dynamical system into an equivalent network. Networks that originate from a system with resonant delay times and clock cycles fail to utilize all of their degrees of freedom, which causes the degradation of their performance.
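
    The following toy simulation (an assumed discrete-time setup, not the authors' model) illustrates the two timescales discussed above: a clock cycle of N virtual nodes and a feedback delay expressed in virtual-node steps, chosen here to be non-resonant with the clock cycle.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50                         # virtual nodes per clock cycle (clock cycle T)
delay = 55                     # feedback delay in virtual-node steps; delay != N -> non-resonant
cycles = 200
mask = rng.uniform(-1, 1, N)   # input mask repeated every clock cycle
u = rng.uniform(-1, 1, cycles) # one scalar input per clock cycle
eta, gamma = 0.9, 0.05

x = np.zeros(cycles * N)
for t in range(cycles * N):
    drive = gamma * mask[t % N] * u[t // N]
    fb = x[t - delay] if t >= delay else 0.0   # delayed self-feedback
    x[t] = np.tanh(eta * fb + drive)

states = x.reshape(cycles, N)  # one reservoir state vector per clock cycle
print(states.shape)            # (200, 50)
```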

    Updated: 2020-01-15
  • Differential-game for resource aware approximate optimal control of large-scale nonlinear systems with multiple players
    Neural Netw. (IF 5.785) Pub Date : 2020-01-14
    Avimanyu Sahoo; Vignesh Narayanan

    In this paper, we propose a novel differential-game-based neural network (NN) control architecture to solve an optimal control problem for a class of large-scale nonlinear systems involving N players. We focus on simultaneously optimizing the usage of computational resources and the system performance. In particular, the N players' control policies are designed cooperatively to optimize the large-scale system performance, and the sampling intervals for each player are designed to reduce the frequency of feedback execution. To develop a unified design framework that achieves both of these objectives, we formulate an optimal control problem by integrating both design requirements, leading to a multi-player differential game. A solution to this problem is numerically obtained by solving the associated Hamilton-Jacobi (HJ) equation using event-driven approximate dynamic programming (E-ADP) and artificial NNs online and forward-in-time. We employ critic neural networks to approximate the solution to the HJ equation, i.e., the optimal value function, with aperiodically available feedback information. Using the NN-approximated value function, we design the control policies and the sampling schemes. Finally, the event-driven N-player system is remodeled as a hybrid dynamical system with impulsive weight update rules for analyzing its stability and convergence properties. The closed-loop practical stability of the system and the Zeno-free behavior of the sampling scheme are demonstrated using the Lyapunov method. Simulation results using an academic example are also included to substantiate the analytical results.

    Updated: 2020-01-14
  • Adaptive neural tree exploiting expert nodes to classify high-dimensional data
    Neural Netw. (IF 5.785) Pub Date : 2020-01-10
    Shadi Abpeikar; Mehdi Ghatee; Gian Luca Foresti; Christian Micheloni

    Classification of high-dimensional data suffers from the curse of dimensionality and over-fitting. The neural tree is a powerful method that combines local feature selection and recursive partitioning to solve these problems, but it leads to very deep trees when classifying high-dimensional data. On the other hand, if shallower trees are used, the classification accuracy decreases or over-fitting increases. This paper introduces a novel Neural Tree exploiting Expert Nodes (NTEN) to classify high-dimensional data. It is based on a decision tree structure whose internal nodes are expert nodes performing multi-dimensional splitting. Each expert node has three decision-making abilities. Firstly, it can select the most eligible neural network with respect to the data complexity. Secondly, it evaluates the over-fitting. Thirdly, it can cluster the features to jointly minimize redundancy and overlapping. To this aim, metaheuristic optimization algorithms including GA, NSGA-II, PSO and ACO are applied. Based on these concepts, each expert node splits a class when the over-fitting is low, and clusters the features when the over-fitting is high. Some theoretical results on NTEN are derived, and experiments on 35 standard datasets show that NTEN achieves good classification results and reduces tree depth without over-fitting or degrading accuracy.

    Updated: 2020-01-11
  • Adaptive tracking synchronization for coupled reaction-diffusion neural networks with parameter mismatches
    Neural Netw. (IF 5.785) Pub Date : 2020-01-09
    Hao Zhang; Zhixia Ding; Zhigang Zeng

    In this paper, tracking synchronization for coupled reaction–diffusion neural networks with parameter mismatches is investigated. For such a networked control system, only local neighbor information is used to compensate the mismatch characteristic termed as parameter mismatch, uncertainty or external disturbance. Different from the general boundedness hypothesis, the parameter mismatches are permitted to be unbounded. For the known parameter mismatches, parameter-dependent controller and parameter-independent adaptive controller are respectively designed. While for fully unknown network parameters and parameter mismatches, a distributed adaptive controller is proposed. By means of partial differential equation theories and differential inequality techniques, the tracking synchronization errors driven by these nonlinear controllers are proved to be uniformly ultimately bounded and exponentially convergent to some adjustable bounded domains. Finally, three numerical examples are given to test the effectiveness of the proposed controllers.

    Updated: 2020-01-09
  • Transductive LSTM for time-series prediction: An application to weather forecasting
    Neural Netw. (IF 5.785) Pub Date : 2020-01-08
    Zahra Karevan; Johan A.K. Suykens

    Long Short-Term Memory (LSTM) has shown strong performance on many real-world applications due to its ability to capture long-term dependencies. In this paper, we utilize LSTM to obtain a data-driven forecasting model for an application of weather forecasting. Moreover, we propose Transductive LSTM (T-LSTM), which exploits local information in time-series prediction. In transductive learning, the samples in the vicinity of the test point are considered to have a higher impact on fitting the model. In this study, a quadratic cost function is considered for the regression problem. The objective function is localized by considering a weighted quadratic cost function in which the samples in the neighborhood of the test point have larger weights. We investigate two weighting schemes based on the cosine similarity between the training samples and the test point. In order to assess the performance of the proposed method in different weather conditions, the experiments are conducted on two different time periods of a year. The results show that T-LSTM yields better performance in the prediction task.
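
    A minimal sketch (with assumed function names) of one plausible cosine-similarity weighting for the localized quadratic cost described above; the paper investigates two such schemes, whose exact definitions are not reproduced here.

```python
import numpy as np

def cosine_weights(X_train, x_test, eps=1e-12):
    # Weight each training sample by its cosine similarity to the test point,
    # clipped at zero so that dissimilar samples get (near-)zero weight.
    num = X_train @ x_test
    den = np.linalg.norm(X_train, axis=1) * np.linalg.norm(x_test) + eps
    return np.clip(num / den, 0.0, None)

def weighted_quadratic_loss(y_pred, y_true, w):
    # Localized quadratic cost: neighbors of the test point dominate the fit.
    return np.sum(w * (y_pred - y_true) ** 2)
```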

    Updated: 2020-01-08
  • Cluster stochastic synchronization of complex dynamical networks via fixed-time control scheme
    Neural Netw. (IF 5.785) Pub Date : 2020-01-07
    Wanli Zhang; Chuandong Li; Hongfei Li; Xinsong Yang

    By means of the fixed-time (FDT) control technique, cluster stochastic synchronization of complex networks (CNs) is investigated. A quantized controller is designed to realize the synchronization of CNs within a settling time. FDT synchronization criteria are established with the help of Lyapunov functional and comparison system methods. It should be noted that the convergence of synchronization is further improved compared with existing FDT synchronization results. Numerical simulations are given to illustrate our results.

    Updated: 2020-01-07
  • Bayesian deep matrix factorization network for multiple images denoising
    Neural Netw. (IF 5.785) Pub Date : 2020-01-07
    Shuang Xu; Chunxia Zhang; Jiangshe Zhang

    This paper aims at proposing a robust and fast low-rank matrix factorization model for multiple-image denoising. To this end, a novel model, the Bayesian deep matrix factorization network (BDMF), is presented, where a deep neural network (DNN) is designed to model the low-rank components and the model is optimized via stochastic gradient variational Bayes. By virtue of deep learning and Bayesian modeling, BDMF makes significant improvements on synthetic experiments and real-world tasks (including shadow removal and hyperspectral image denoising), compared with existing state-of-the-art models.

    Updated: 2020-01-07
  • Attention-guided CNN for image denoising
    Neural Netw. (IF 5.785) Pub Date : 2020-01-07
    Chunwei Tian; Yong Xu; Zuoyong Li; Wangmeng Zuo; Lunke Fei; Hong Liu

    Deep convolutional neural networks (CNNs) have attracted considerable interest in low-level computer vision. Studies are usually devoted to improving performance via very deep CNNs. However, as the depth increases, the influence of the shallow layers on the deep layers is weakened. Inspired by this fact, we propose an attention-guided denoising convolutional neural network (ADNet), mainly including a sparse block (SB), a feature enhancement block (FEB), an attention block (AB) and a reconstruction block (RB), for image denoising. Specifically, the SB makes a tradeoff between performance and efficiency by using dilated and common convolutions to remove the noise. The FEB integrates global and local feature information via a long path to enhance the expressive ability of the denoising model. The AB is used to finely extract the noise information hidden in the complex background, which is very effective for complex noisy images, especially real noisy images and blind denoising. Also, the FEB is integrated with the AB to improve the efficiency and reduce the complexity of training a denoising model. Finally, the RB aims to construct the clean image through the obtained noise mapping and the given noisy image. Additionally, comprehensive experiments show that the proposed ADNet performs very well on three tasks (i.e., synthetic and real noisy images, and blind denoising) in terms of both quantitative and qualitative evaluations. The code of ADNet is accessible at http://www.yongxu.org/lunwen.html.
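
    A small PyTorch sketch of mixing dilated and common convolutions, in the spirit of the sparse block (SB) described above; the layer count, channel width and ordering are assumptions, not the published ADNet architecture.

```python
import torch
import torch.nn as nn

class SparseBlockSketch(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            # dilated 3x3 convolution: larger receptive field at the same parameter cost
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            # common 3x3 convolution
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

x = torch.randn(1, 64, 40, 40)
print(SparseBlockSketch()(x).shape)  # torch.Size([1, 64, 40, 40])
```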

    Updated: 2020-01-07
  • Multi-task learning for the prediction of wind power ramp events with deep neural networks
    Neural Netw. (IF 5.785) Pub Date : 2020-01-07
    M. Dorado-Moreno; N. Navarin; P.A. Gutiérrez; L. Prieto; A. Sperduti; S. Salcedo-Sanz; C. Hervás-Martínez

    In Machine Learning, the most common way to address a given problem is to optimize an error measure by training a single model to solve the desired task. However, sometimes it is possible to exploit latent information from other related tasks to improve the performance of the main one, resulting in a learning paradigm known as Multi-Task Learning (MTL). In this context, the high computational capacity of deep neural networks (DNN) can be combined with the improved generalization performance of MTL by designing independent output layers for every task and including a shared representation for them. In this paper we exploit this theoretical framework on a problem related to the prediction of Wind Power Ramp Events (WPREs) in wind farms. Wind energy is one of the fastest growing industries in the world, with potential global spread and deep penetration in developed and developing countries. One of the main issues with the majority of renewable energy resources is their intrinsic intermittency, which makes it difficult to increase the penetration of these technologies into the energy mix. In this case, we focus on the specific problem of WPRE prediction; these events deeply affect wind speed and power prediction and are also related to various kinds of turbine damage. Specifically, we exploit the fact that WPREs are spatially related events, in such a way that predicting the occurrence of WPREs in different wind farms can be taken as related tasks, even when the wind farms are far away from each other. We propose a DNN-MTL architecture that receives inputs from all the wind farms at the same time to predict WPREs simultaneously at each of the farm locations. The architecture includes some shared layers to learn a common representation for the information from all the wind farms, and it also includes some specification layers, which refine the representation to match the specific characteristics of each location. Finally, we modify the Adam optimization algorithm to deal with imbalanced data, adding costs that are updated dynamically depending on the worst-classified class. We compare the proposal against a baseline approach based on building three different independent models (one for each wind farm considered), and against a state-of-the-art reservoir computing approach. The DNN-MTL proposal achieves very good performance in WPRE prediction, obtaining a good balance across all the classes included in the problem (negative ramp, no ramp and positive ramp).
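
    Purely as an illustration of the cost-updating idea described above (the exact rule coupled to the modified Adam optimizer is not given in the abstract), a hypothetical dynamic class-weight update that boosts the worst-classified ramp class:

```python
import numpy as np

class_weights = np.ones(3)  # negative ramp, no ramp, positive ramp

def update_weights(per_class_recall, step=0.1):
    # Increase the loss weight of the class that is currently classified worst,
    # then renormalize so the average weight stays at 1.
    worst = int(np.argmin(per_class_recall))
    class_weights[worst] += step
    return class_weights / class_weights.mean()

print(update_weights(np.array([0.90, 0.95, 0.40])))  # boosts the third class
```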

    Updated: 2020-01-07
  • A new fixed-time stability theorem and its application to the fixed-time synchronization of neural networks
    Neural Netw. (IF 5.785) Pub Date : 2020-01-07
    Chuan Chen; Lixiang Li; Haipeng Peng; Yixian Yang; Ling Mi; Hui Zhao

    In this paper, we derive a new fixed-time stability theorem based on definite integral, variable substitution and some inequality techniques. The fixed-time stability criterion and the upper bound estimate formula for the settling time are different from those in the existing fixed-time stability theorems. Based on the new fixed-time stability theorem, the fixed-time synchronization of neural networks is investigated by designing feedback controller, and sufficient conditions are derived to guarantee the fixed-time synchronization of neural networks. To show the usability and superiority of the obtained theoretical results, we propose a secure communication scheme based on the fixed-time synchronization of neural networks. Numerical simulations illustrate that the new upper bound estimate formula for the settling time is much tighter than those in the existing fixed-time stability theorems. What is more, the plaintext signals can be recovered according to the new fixed-time stability theorem, while the plaintext signals can not be recovered according to the existing fixed-time stability theorems.

    Updated: 2020-01-07
  • Spacial sampled-data control for H∞ output synchronization of directed coupled reaction–diffusion neural networks with mixed delays
    Neural Netw. (IF 5.785) Pub Date : 2020-01-07
    Binglong Lu; Haijun Jiang; Cheng Hu; Abdujelil Abdurahman

    This work investigates the H∞ output synchronization (HOS) of directed coupled reaction–diffusion (R-D) neural networks (NNs) with mixed delays. Firstly, a model of the directed state-coupled R-D NNs is introduced, which not only contains discrete and distributed time delays but also obeys a mixed Dirichlet–Neumann boundary condition. Secondly, a spacial sampled-data controller is proposed to achieve the HOS of the considered networks. This type of controller can reduce the update rate in the control process by measuring the state of the networks at a number of fixed sampling points in the spatial region. Moreover, some criteria for the HOS are established by designing an appropriate Lyapunov functional, and some quantitative relations between the diffusion coefficients, mixed delays, coupling strength and control parameters are given accurately by these criteria. Thirdly, the case of directed spatial diffusion coupled networks is also studied, and the following finding is obtained: the spatial diffusion coupling can suppress the HOS while the state coupling can promote it. Finally, one example is simulated as a verification of the theoretical results.

    Updated: 2020-01-07
  • A causal discovery algorithm based on the prior selection of leaf nodes
    Neural Netw. (IF 5.785) Pub Date : 2020-01-07
    Yan Zeng; Zhifeng Hao; Ruichu Cai; Feng Xie; Liang Ou; Ruihui Huang

    In recent years, the Linear Non-Gaussian Acyclic Model (LiNGAM) has been widely used for the discovery of causal networks. However, solutions based on LiNGAM usually suffer from high computational complexity as well as unsatisfactory accuracy when the data are high-dimensional or the sample size is too small. Such complexity or accuracy problems often originate from the prior selection of root nodes when estimating a causal ordering. Thus, a causal discovery algorithm termed the GPL algorithm (the LiNGAM algorithm of Giving Priority to Leaf-nodes) under a mild assumption is proposed in this paper. It assigns priority to leaf nodes rather than root nodes. Since leaf nodes do not affect others in a structure, we can directly estimate a causal ordering in a bottom-up way without performing additional operations such as the data updating process. Corresponding proofs of both feasibility and superiority are offered based on the properties of leaf nodes. Aside from theoretical analyses, practical experiments are conducted on both synthetic and real-world data, which confirm that the GPL algorithm outperforms the other two state-of-the-art algorithms in computational complexity and accuracy, especially when dealing with high-dimensional data (up to 200 dimensions) or small sample sizes (down to 100 samples for a dimension of 70).

    Updated: 2020-01-07
  • Abstractive summarization of long texts by representing multiple compositionalities with temporal hierarchical pointer generator network
    Neural Netw. (IF 5.785) Pub Date : 2019-12-31
    Dennis Singh Moirangthem; Minho Lee

    In order to tackle the problem of abstractive summarization of long multi-sentence texts, it is critical to construct an efficient model that can learn and represent multiple compositionalities better. In this paper, we introduce a temporal hierarchical pointer generator network that can represent multiple compositionalities in order to handle longer sequences of texts with a deep structure. We demonstrate how a multilayer gated recurrent neural network organizes itself with the help of an adaptive timescale in order to represent the compositions. The temporal hierarchical network is implemented with a multiple-timescale architecture where the timescale of each layer is also learned during the training process through error backpropagation through time. We evaluate our proposed model using an Introduction-Abstract summarization dataset from scientific articles and the CNN/Daily Mail summarization benchmark dataset. The results illustrate that we successfully implement a summary generation system for long texts by using the multiple-timescale-with-adaptation concept. We also show that our proposed model improves the summary generation system on the benchmark dataset.
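
    A one-step sketch of a multiple-timescale (leaky-integrator) recurrent update, which is the general mechanism referred to above; here the timescale tau is fixed, whereas in the paper the timescales are learned through backpropagation through time, and the pointer-generator machinery is omitted.

```python
import numpy as np

def multi_timescale_step(h, x, W, U, b, tau):
    # Leaky-integrator recurrent update: a larger tau yields slower layer dynamics.
    h_target = np.tanh(W @ x + U @ h + b)
    return (1.0 - 1.0 / tau) * h + (1.0 / tau) * h_target
```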

    Updated: 2019-12-31
  • A neurodynamic approach to nonsmooth constrained pseudoconvex optimization problem
    Neural Netw. (IF 5.785) Pub Date : 2019-12-30
    Chen Xu; Yiyuan Chai; Sitian Qin; Zhenkun Wang; Jiqiang Feng

    This paper presents a new neurodynamic approach for solving the constrained pseudoconvex optimization problem based on more general assumptions. The proposed neural network is equipped with a hard comparator function and a piecewise linear function, which make the state solution not only stay in the feasible region, but also converge to an optimal solution of the constrained pseudoconvex optimization problem. Compared with other related existing conclusions, the neurodynamic approach here enjoys global convergence and a lower dimension of the solution space. Moreover, the neurodynamic approach does not depend on additional assumptions such as the feasible region being bounded, the objective function being lower bounded over the feasible region, or the objective function being coercive. Finally, both numerical illustrations and simulation results on a support vector regression problem demonstrate the good performance and viability of the proposed neurodynamic approach.

    Updated: 2019-12-30
  • The role of coupling connections in a model of the cortico-basal ganglia-thalamocortical neural loop for the generation of beta oscillations
    Neural Netw. (IF 5.785) Pub Date : 2019-12-30
    Chen Liu; Changsong Zhou; Jiang Wang; Chris Fietkiewicz; Kenneth A. Loparo

    Excessive neural synchronization in the cortico-basal ganglia-thalamocortical circuits in the beta (β) frequency range (12–35 Hz) is closely associated with dopamine depletion in Parkinson’s disease (PD) and correlated with movement impairments, but the neural basis remains unclear. In this work, we establish a double-oscillator neural mass model for the cortico-basal ganglia-thalamocortical closed-loop system and explore the impacts of dopamine depletion induced changes in coupling connections within or between the two oscillators on neural activities within the loop. Spectral analysis of the neural mass activities revealed that the power and frequency of their principal components are greatly dependent on the coupling strengths between nuclei. We found that the increased intra-coupling in the basal ganglia-thalamic (BG-Th) oscillator contributes to increased oscillations in the lower β frequency band (12–25 Hz), while increased intra-coupling in the cortical oscillator mainly contributes to increased oscillations in the upper β frequency band (26–35 Hz). Interestingly, pathological upper β oscillations in the cortical oscillator may be another origin of the lower β oscillations in the BG-Th oscillator, in addition to increased intra-coupling strength within the BG-Th network. Lower β oscillations in the BG-Th oscillator can also change the dominant oscillation frequency of a cortical nucleus from the upper to the lower β band. Thus, this work may pave the way towards revealing a possible neural basis underlying the Parkinsonian state.

    Updated: 2019-12-30
  • Global collaboration through local interaction in competitive learning
    Neural Netw. (IF 5.785) Pub Date : 2019-12-30
    Abbas Siddiqui; Dionysios Georgiadis

    Feature maps that preserve the global topology of arbitrary datasets can be formed by self-organizing competing agents. So far, it has been presumed that global interaction of agents is necessary for this process. We establish that this is not the case, and that global topology can be uncovered through strictly local interactions. Enforcing uniformity of map quality across all agents results in an algorithm that is able to consistently uncover the global topology of diversely challenging datasets. The applicability and scalability of this approach is further tested on a large point cloud dataset, revealing a linear relation between map training time and size. The presented work not only reduces algorithmic complexity but also constitutes a first step towards a distributed self-organizing map.
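
    For context, a classic self-organizing-map update on a 1-D grid of agents with a global winner search; the paper's contribution is precisely to replace this global step with strictly local interactions, so the sketch below shows the conventional baseline rather than the proposed algorithm.

```python
import numpy as np

def som_update(agents, x, lr=0.1, sigma=1.0):
    dists = np.linalg.norm(agents - x, axis=1)
    winner = int(np.argmin(dists))                          # global winner-take-all
    grid = np.arange(len(agents))
    h = np.exp(-((grid - winner) ** 2) / (2 * sigma ** 2))  # neighborhood on the grid
    return agents + lr * h[:, None] * (x - agents)          # pull agents towards x

agents = np.random.rand(10, 2)
agents = som_update(agents, np.array([0.5, 0.5]))
```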

    Updated: 2019-12-30
  • Global synchronization of coupled delayed memristive reaction-diffusion neural networks
    Neural Netw. (IF 5.785) Pub Date : 2019-12-28
    Shiqin Wang; Zhenyuan Guo; Shiping Wen; Tingwen Huang

    This paper focuses on the global exponential synchronization of multiple memristive reaction–diffusion neural networks (MRDNNs) with time delay. Due to introducing the influences of space as well as time on state variables and replacing resistors with memristors in circuit realization, the state-dependent partial differential mathematical model of MRDNN is more general and realistic than traditional neural network model. Based on Lyapunov functional theory, Divergence theorem and inequality techniques, global exponential synchronization criteria of coupled delayed MRDNNs are derived via directed and undirected nonlinear coupling. Finally, three numerical simulation examples are presented to verify the feasibility of our main results.

    Updated: 2019-12-29
  • On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces
    Neural Netw. (IF 5.785) Pub Date : 2019-12-23
    Satoshi Hayakawa; Taiji Suzuki

    Deep learning has been applied to various tasks in the field of machine learning and has shown superiority to other common procedures such as kernel methods. To provide a better theoretical understanding of the reasons for its success, we discuss the performance of deep learning and other methods on a nonparametric regression problem with a Gaussian noise. Whereas existing theoretical studies of deep learning have been based mainly on mathematical theories of well-known function classes such as Hölder and Besov classes, we focus on function classes with discontinuity and sparsity, which are those naturally assumed in practice. To highlight the effectiveness of deep learning, we compare deep learning with a class of linear estimators representative of a class of shallow estimators. It is shown that the minimax risk of a linear estimator on the convex hull of a target function class does not differ from that of the original target function class. This results in the suboptimality of linear methods over a simple but non-convex function class, on which deep learning can attain nearly the minimax-optimal rate. In addition to this extreme case, we consider function classes with sparse wavelet coefficients. On these function classes, deep learning also attains the minimax rate up to log factors of the sample size, and linear methods are still suboptimal if the assumed sparsity is strong. We also point out that the parameter sharing of deep neural networks can remarkably reduce the complexity of the model in our setting.

    Updated: 2019-12-23
  • New H∞ state estimation criteria of delayed static neural networks via the Lyapunov–Krasovskii functional with negative definite terms
    Neural Netw. (IF 5.785) Pub Date : 2019-12-20
    Jing He; Yan Liang; Feisheng Yang; Feng Yang

    In the estimation problem for delayed static neural networks (SNNs), constructing a proper Lyapunov–Krasovskii functional (LKF) is crucial for deriving less conservative estimation criteria. In this paper, a delay-product-type LKF with negative definite terms is proposed. Based on the third-order Bessel–Legendre (B-L) integral inequality and mixed convex combination approaches, a less conservative estimator design criterion is derived. Furthermore, the desired estimator gain matrices and the H∞ performance index are obtained by solving a set of linear matrix inequalities (LMIs). Finally, a numerical example is given to demonstrate the effectiveness of the proposed method.

    Updated: 2019-12-20
  • ELM embedded discriminative dictionary learning for image classification
    Neural Netw. (IF 5.785) Pub Date : 2019-12-20
    Yijie Zeng; Yue Li; Jichao Chen; Xiaofan Jia; Guang-Bin Huang

    Dictionary learning is a widely adopted approach for image classification. Existing methods focus either on finding a dictionary that produces discriminative sparse representation, or on enforcing priors that best describe the dataset distribution. In many cases, the dataset size is often small with large intra-class variability and nondiscriminative feature space. In this work we propose a simple and effective framework called ELM-DDL to address these issues. Specifically, we represent input features with Extreme Learning Machine (ELM) with orthogonal output projection, which enables diverse representation on nonlinear hidden space and task specific feature learning on output space. The embeddings are further regularized via a maximum margin criterion (MMC) to maximize the inter-class variance and minimize intra-class variance. For dictionary learning, we design a novel weighted class specific ℓ1,2 norm to regularize the sparse coding vectors, which promotes uniformity of the sparse patterns of samples belonging to the same class and suppresses support overlaps of different classes. We show that such regularization is robust, discriminative and easy to optimize. The proposed method is combined with a sparse representation classifier (SRC) to evaluate on benchmark datasets. Results show that our approach achieves state-of-the-art performance compared to other dictionary learning methods.
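
    A minimal sketch of the ELM-style random-hidden-layer embedding that the framework above builds on; the orthogonal output projection, MMC regularization and weighted class-specific ℓ1,2 dictionary term from the paper are not reproduced here.

```python
import numpy as np

def elm_features(X, n_hidden=256, seed=0):
    # Extreme Learning Machine style embedding: random, untrained input weights
    # followed by a nonlinear activation.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    return np.tanh(X @ W + b)

H = elm_features(np.random.rand(20, 32))
print(H.shape)  # (20, 256)
```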

    Updated: 2019-12-20
  • Robust face alignment by cascaded regression and de-occlusion
    Neural Netw. (IF 5.785) Pub Date : 2019-12-20
    Jun Wan; Jing Li; Zhihui Lai; Bo Du; Lefei Zhang

    Face alignment is a typical facial behavior analysis task in computer vision. However, the performance of face alignment is degraded greatly when the face image is partially occluded. In order to achieve better mapping between facial appearance features and shape increments, we propose a robust and occlusion-free face alignment algorithm in which a face de-occlusion module and a deep regression module are integrated into a cascaded deep generative regression model. The face de-occlusion module is a disentangled representation learning Generative Adversarial Networks (GANs) which aims to locate occlusions and recover the genuine appearance from partially occluded face image. The deep regression module can enhance facial appearance representation by utilizing the recovered faces to obtain more accurate regressors. Then, by the cascaded deep generative regression model, we recover the partially occluded face image and achieve accurate locating of landmarks gradually. It is interesting to show that the cascaded deep generative regression model can effectively locate occlusions and recover more genuine faces, which can be further used to improve the performance of face alignment. Experimental results conducted on four challenging occluded face datasets demonstrate that our method outperforms state-of-the-art methods.

    Updated: 2019-12-20
  • Extreme learning machine for a new hybrid morphological/linear perceptron
    Neural Netw. (IF 5.785) Pub Date : 2019-12-19
    Peter Sussner; Israel Campiotti

    Morphological neural networks (MNNs) can be characterized as a class of artificial neural networks that perform operations of mathematical morphology at every node, possibly followed by the application of an activation function. Morphological perceptrons (MPs) and (gray-scale) morphological associative memories are among the most widely known MNN models. Since their neuronal aggregation functions are not differentiable, classical methods of non-linear optimization cannot in principle be directly applied in order to train these networks. The same observation holds true for hybrid morphological/linear perceptrons and other related models. Circumventing these problems of non-differentiability, this paper introduces an extreme learning machine approach for training a hybrid morphological/linear perceptron, whose morphological components are drawn from previous MP models. We apply the resulting model to a number of well-known classification problems from the literature and compare the performance of our model with those of several related models, including some recent MNNs and hybrid morphological/linear neural networks.
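
    To make the non-differentiability concrete, a toy max-plus (dilation-like) neuron and a hypothetical convex mix of morphological and linear responses; the actual hybrid units and the ELM training procedure of the paper differ in detail.

```python
import numpy as np

def dilation_neuron(x, w):
    # Morphological (max-plus) aggregation: y = max_j (x_j + w_j).
    # The max makes the node non-differentiable in the classical sense.
    return np.max(x + w)

def hybrid_unit(x, w_morph, w_lin, b, alpha=0.5):
    # Hypothetical hybrid morphological/linear response.
    return alpha * dilation_neuron(x, w_morph) + (1 - alpha) * (x @ w_lin + b)
```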

    Updated: 2019-12-19
  • Synchronization of Hindmarsh Rose neurons
    Neural Netw. (IF 5.785) Pub Date : 2019-12-18
    Malik S.A.; Mir A.H.

    Modeling and implementation of biological neurons are key to the fundamental understanding of neural network architectures in the brain and its cognitive behavior. Synchronization of neuronal models plays a significant role in neural signal processing, as it is very difficult to identify the actual interaction between neurons in the living brain. Therefore, the synchronization study of these neuronal architectures has received extensive attention from researchers. Higher biological accuracy of these neuronal units demands more computational overhead and requires more hardware resources for implementation. This paper presents a hardware implementation of two coupled Hindmarsh-Rose (HR) neuron models, a mathematically simple model that nevertheless mimics several behaviors of a real biological neuron. The neurons are synchronized using an exponential coupling function. The coupled system shows several behaviors depending upon the parameters of the HR model and the coupling function. An approximation of the coupling function is also provided to reduce the hardware cost. Both simulations and low-cost hardware implementations of the exponential synaptic coupling function and its approximation are carried out for comparison. Hardware implementation on a field programmable gate array (FPGA) of the approximated coupling function shows that the coupled network produces different dynamical behaviors with acceptable error, and that the approximated coupling function has a significantly lower implementation cost. A spiking neural network based on the HR neuron is also shown as a practical application of the coupled HR neural network. The spiking network successfully encodes and decodes a time-varying input.
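
    For reference, the standard three-variable Hindmarsh-Rose equations with textbook parameter values, integrated with a plain Euler step; the FPGA implementation and the exponential coupling function studied in the paper are not shown.

```python
import numpy as np

def hr_step(state, I_ext, dt=0.01,
            a=1.0, b=3.0, c=1.0, d=5.0, r=0.006, s=4.0, x_r=-1.6):
    # Standard Hindmarsh-Rose model: x is the membrane potential,
    # y a fast recovery variable, z a slow adaptation current.
    x, y, z = state
    dx = y + b * x**2 - a * x**3 - z + I_ext
    dy = c - d * x**2 - y
    dz = r * (s * (x - x_r) - z)
    return np.array([x + dt * dx, y + dt * dy, z + dt * dz])

state = np.array([-1.6, 0.0, 0.0])
xs = []
for _ in range(20000):
    state = hr_step(state, I_ext=3.2)  # I_ext around 3.2 typically gives bursting
    xs.append(state[0])
```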

    Updated: 2019-12-19
  • A counterexample regarding “New study on neural networks: The essential order of approximation”
    Neural Netw. (IF 5.785) Pub Date : 2019-12-18
    Steffen Goebbels

    The paper “New study on neural networks: the essential order of approximation” by Jianjun Wang and Zongben Xu, which appeared in Neural Networks 23 (2010), deals with upper and lower estimates for the error of best approximation with sums of nearly exponential type activation functions in terms of moduli of smoothness. In particular, the presented lower bound is astonishingly good. However, the proof is incorrect and the bound is wrong.

    Updated: 2019-12-18
  • Existence and finite-time stability of discrete fractional-order complex-valued neural networks with time delays
    Neural Netw. (IF 5.785) Pub Date : 2019-12-17
    Xingxing You; Qiankun Song; Zhenjiang Zhao

    Without decomposing complex-valued systems into real-valued systems, the existence and finite-time stability of solutions for discrete fractional-order complex-valued neural networks with time delays are discussed in this paper. First of all, in order to obtain the main results, a new discrete Caputo fractional difference equation is proposed in the complex field based on the theory of discrete fractional calculus, which generalizes fractional-order neural networks in the real domain. Additionally, by utilizing Arzela–Ascoli's theorem, inequality scaling skills and a fixed point theorem, some delay-dependent sufficient criteria are deduced to ensure the existence and finite-time stability of solutions for the proposed networks. Finally, the validity and feasibility of the derived theoretical results are illustrated by two numerical examples with simulations. Furthermore, we draw the following conclusion: with a lower order, the discrete fractional-order complex-valued neural networks achieve finite-time stability more easily.

    Updated: 2019-12-18
  • CS-MRI reconstruction based on analysis dictionary learning and manifold structure regularization
    Neural Netw. (IF 5.785) Pub Date : 2019-12-17
    Jianxin Cao; Shujun Liu; Hongqing Liu; Hongwei Lu

    Compressed sensing (CS) significantly accelerates magnetic resonance imaging (MRI) by allowing the exact reconstruction of images from highly undersampled k-space data. In this process, the high sparsity obtained by the learned dictionary and the exploitation of correlation among patches are essential to the reconstructed image quality. In this paper, making use of these two aspects, we propose a novel CS-MRI model based on analysis dictionary learning and manifold structure regularization (ADMS). Furthermore, a proper tight frame constraint is used to obtain an effective overcomplete analysis dictionary with a high sparsifying capacity. The constructed manifold structure regularization nonuniformly enforces the correlation of each group formed by similar patches, which is more consistent with the diverse nonlocal similarity in realistic images. The proposed model is efficiently solved by the alternating direction method of multipliers (ADMM), in which the fast algorithm for each sub-problem is separately developed. The experimental results demonstrate that the main components of the proposed method contribute to the final reconstruction performance and show the effectiveness of the proposed model.

    Updated: 2019-12-18
  • Finite-time nonfragile time-varying proportional retarded synchronization for Markovian Inertial Memristive NNs with reaction-diffusion items
    Neural Netw. (IF 5.785) Pub Date : 2019-12-17
    Xiaona Song; Jingtao Man; Shuai Song; Zhen Wang

    The issue of synchronization for a class of inertial memristive neural networks over a finite-time interval is investigated in this paper. Specifically, the reaction–diffusion items and Markovian jump parameters are both considered in the system model, meanwhile, a novel nonfragile time-varying proportional retarded control scheme is proposed. First, a befitting variable substitution is invoked to transform the original second-order differential system into a first-order differential form, such that the corresponding first-order synchronization error system is established. Second, by utilizing integral inequality technique, reciprocally convex combination approach and free-weighting matrix method, less conservative synchronization criterion in terms of linear matrix inequality are obtained. Finally, three simulations are exploited to illustrate the feasibility, practicability and superiority of the designed controller so that the acquired theoretical results are supported.

    Updated: 2019-12-18
  • Efficient network architecture search via multiobjective particle swarm optimization based on decomposition
    Neural Netw. (IF 5.785) Pub Date : 2019-12-16
    Jing Jiang; Fei Han; Qinghua Ling; Jie Wang; Tiange Li; Henry Han

    The efforts devoted to manually increasing the width and depth of convolutional neural network (CNN) usually require a large amount of time and expertise. It has stimulated a rising demand of neural architecture search (NAS) over these years. However, most popular NAS approaches solely optimize for a low predictive error without penalizing a high structural complexity. To this end, this paper proposes MOPSO/D-Net, a CNN architecture search method with multiobjective particle swarm optimization based on decomposition (MOPSO/D). The main goal is to reformulate NAS as a multiobjective evolutionary optimization problem, where the optimal architecture is learnt by minimizing two conflicting objectives namely the error rate of classification and params of network. Along with the hybrid binary encoding and adaptive penalty-based boundary intersection, an improved MOPSO/D is further proposed to solve the formulated multiobjective NAS and provide diverse tradeoff solutions. Experimental studies verify the effectiveness of MOPSO/D-Net compared with current manual and automated CNN generation methods. The proposed algorithm achieves impressive classification performance with a small number of parameters on each of two benchmark datasets, particularly, 0.4% error rate with 0.16 M params on MNIST and 5.88% error rate with 8.1 M params on CIFAR-10, respectively.

    Updated: 2019-12-17
  • Discriminative structure learning of sum-product networks for data stream classification
    Neural Netw. (IF 5.785) Pub Date : 2019-12-16
    Zhengya Sun; Cheng-Lin Liu; Jinghao Niu; Wensheng Zhang

    Sum–product network (SPN) is a deep probabilistic representation that allows for exact and tractable inference. There has been a trend of online SPN structure learning from massive and continuous data streams. However, online structure learning of SPNs has been introduced only for the generative settings so far. In this paper, we present an online discriminative approach for SPNs for learning both the structure and parameters. The basic idea is to keep track of informative and representative examples to capture the trend of time-changing class distributions. Specifically, by estimating the goodness of model fitting of data points and dynamically maintaining a certain amount of informative examples over time, we generate new sub-SPNs in a recursive and top-down manner. Meanwhile, an outlier-robust margin-based log-likelihood loss is applied locally to each data point and the parameters of SPN are updated continuously using most probable explanation (MPE) inference. This leads to a fast yet powerful optimization procedure and improved discrimination capability between the genuine class and rival classes. Empirical results show that the proposed approach achieves better prediction performance than the state-of-the-art online structure learner for SPNs, while promising order-of-magnitude speedup. Comparison with state-of-the-art stream classifiers further prove the superiority of our approach.

    Updated: 2019-12-17
  • A novel multi-modal machine learning based approach for automatic classification of EEG recordings in dementia
    Neural Netw. (IF 5.785) Pub Date : 2019-12-14
    Cosimo Ieracitano; Nadia Mammone; Amir Hussain; Francesco C. Morabito

    Electroencephalographic (EEG) recordings generate an electrical map of the human brain that is useful for clinical inspection of patients and in biomedical smart Internet-of-Things (IoT) and Brain-Computer Interface (BCI) applications. From a signal processing perspective, EEGs yield a nonlinear and nonstationary, multivariate representation of the underlying neural circuitry interactions. In this paper, a novel multi-modal Machine Learning (ML) based approach is proposed to integrate EEG engineered features for automatic classification of brain states. EEGs are acquired from neurological patients with Mild Cognitive Impairment (MCI) or Alzheimer's disease (AD), and the aim is to discriminate Healthy Control (HC) subjects from patients. Specifically, in order to effectively cope with nonstationarities, 19-channel EEG signals are projected into the time-frequency (TF) domain by means of the Continuous Wavelet Transform (CWT), and a set of appropriate features (denoted as CWT features) is extracted from the δ, θ, α1, α2, β EEG sub-bands. Furthermore, to exploit nonlinear phase-coupling information of EEG signals, higher order statistics (HOS) are extracted from the bispectrum (BiS) representation. BiS generates a second set of features (denoted as BiS features), which are also evaluated in the five EEG sub-bands. The CWT and BiS features are fed into a number of ML classifiers to perform both 2-way (AD vs. HC, AD vs. MCI, MCI vs. HC) and 3-way (AD vs. MCI vs. HC) classifications. As an experimental benchmark, a balanced EEG dataset that includes 63 AD, 63 MCI and 63 HC subjects is analyzed. Comparative results show that when the concatenation of CWT and BiS features (denoted as multi-modal (CWT+BiS) features) is used as input, the Multi-Layer Perceptron (MLP) classifier outperforms all other models, specifically the Autoencoder (AE), Logistic Regression (LR) and Support Vector Machine (SVM). Consequently, our proposed multi-modal ML scheme can be considered a viable alternative to state-of-the-art computationally intensive deep learning approaches.
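
    As a simple stand-in for the engineered CWT/bispectrum features, the sketch below computes mean band power per channel in the five sub-bands named above; the band limits are commonly used values and may differ from the ones adopted in the paper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Commonly used sub-band limits in Hz (assumed, not taken from the paper).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha1": (8, 10.5),
         "alpha2": (10.5, 13), "beta": (13, 30)}

def band_power_features(eeg, fs=256):
    # eeg: array of shape (channels, samples); returns 5 * channels features.
    feats = []
    for lo, hi in BANDS.values():
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        filtered = sosfiltfilt(sos, eeg, axis=-1)
        feats.append(np.mean(filtered ** 2, axis=-1))  # mean power in this band
    return np.concatenate(feats)

print(band_power_features(np.random.randn(19, 4 * 256)).shape)  # (95,) = 19 channels x 5 bands
```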

    Updated: 2019-12-17
  • Evolving artificial neural networks with feedback
    Neural Netw. (IF 5.785) Pub Date : 2019-12-14
    Sebastian Herzog; Christian Tetzlaff; Florentin Wörgötter

    Neural networks in the brain are dominated by sometimes more than 60% feedback connections, which most often have small synaptic weights. In contrast, little is known about how to introduce feedback into artificial neural networks. Here we use transfer entropy in the feed-forward paths of deep networks to identify feedback candidates between the convolutional layers and determine their final synaptic weights using genetic programming. This adds about 70% more connections to these layers, all with very small weights. Nonetheless, performance improves substantially on different standard benchmark tasks and in different networks. To verify that this effect is generic, we use 36000 configurations of small (2-10 hidden layer) conventional neural networks in a non-linear classification task and select the best performing feed-forward nets. Then we show that feedback reduces total entropy in these networks, always leading to a performance increase. This method may, thus, supplement standard techniques (e.g. error backprop), adding a new quality to network learning.

    Updated: 2019-12-17
  • Minimum variance-embedded deep kernel regularized least squares method for one-class classification and its applications to biomedical data
    Neural Netw. (IF 5.785) Pub Date : 2019-12-12
    Chandan Gautam; Pratik K. Mishra; Aruna Tiwari; Bharat Richhariya; Hari Mohan Pandey; Shuihua Wang; M. Tanveer

    Deep kernel learning has been well explored for multi-class classification tasks; however, relatively little work has been done for one-class classification (OCC). OCC needs samples from only one class to train the model. Most recently, a kernel regularized least squares (KRL) method-based deep architecture was developed for the OCC task. This paper introduces a novel extension of this method by embedding minimum variance information within this architecture. This embedding improves the generalization capability of the classifier by reducing the intra-class variance. In contrast to traditional deep learning methods, this method can effectively work with small-size datasets. We conduct a comprehensive set of experiments on 18 benchmark datasets (13 biomedical and 5 other datasets) to demonstrate the performance of the proposed classifier. We compare the results with 16 state-of-the-art one-class classifiers. Further, we also test our method on 2 real-world biomedical datasets, viz., detection of Alzheimer's disease from structural magnetic resonance imaging data and detection of breast cancer from histopathological images. The proposed method achieves an F1 score more than 5% higher than existing state-of-the-art methods on various biomedical benchmark datasets. This makes it viable for application in biomedical fields where a relatively small amount of data is available.

    Updated: 2019-12-13
  • Cuneate spiking neural network learning to classify naturalistic texture stimuli under varying sensing conditions
    Neural Netw. (IF 5.785) Pub Date : 2019-12-09
    Udaya B. Rongala, Alberto Mazzoni, Anton Spanne, Henrik Jörntell, Calogero M. Oddo

    We implemented a functional neuronal network that was able to learn and discriminate haptic features from biomimetic tactile sensor inputs using a two-layer spiking neuron model and a homeostatic synaptic learning mechanism. The first-order neuron model was used to emulate biological tactile afferents, and the second-order neuron model was used to emulate biological cuneate neurons. We evaluated 10 naturalistic textures using a passive touch protocol under varying sensing conditions. Tactile sensor data acquired with five textures under five sensing conditions were used for a synaptic learning process to tune the synaptic weights between tactile afferents and cuneate neurons. Using post-learning synaptic weights, we evaluated the individual and population cuneate neuron responses by decoding across 10 stimuli under varying sensing conditions. This resulted in a high decoding performance. We further validated the decoding performance across stimuli, irrespective of sensing velocities, using a set of 25 cuneate neuron responses. This resulted in a median decoding performance of 96% across the set of cuneate neurons. The ability to learn and perform generalized discrimination across tactile stimuli makes this functional spiking tactile system effective and suitable for further robotic applications.

    Updated: 2019-12-11
  • Exploiting the stimuli encoding scheme of evolving Spiking Neural Networks for stream learning
    Neural Netw. (IF 5.785) Pub Date : 2019-12-06
    Jesus L. Lobo, Izaskun Oregi, Albert Bifet, Javier Del Ser

    Stream data processing has lately gained momentum with the arrival of new Big Data scenarios and applications dealing with continuously produced information flows. Unfortunately, traditional machine learning algorithms are not prepared to tackle the specific challenges imposed by data stream processing, such as the need to learn incrementally, limited memory and processing time requirements, and adaptation to non-stationary data, among others. To face these paradigms, Spiking Neural Networks have emerged as one of the most promising stream learning techniques, with variants such as Evolving Spiking Neural Networks capable of efficiently addressing many of these challenges. Interestingly, these networks resort to a particular population encoding scheme, Gaussian Receptive Fields, to transform the incoming stimuli into temporal spikes. The study presented in this manuscript sheds light on the predictive potential of this encoding scheme, focusing on how it can be applied as a computationally lightweight, model-agnostic preprocessing step for data stream learning. We provide informed intuition to unveil under which circumstances the aforementioned population encoding method yields effective prediction gains in data stream classification with respect to the case where no preprocessing is performed. Results obtained for a variety of stream learning models and both synthetic and real stream datasets are discussed to empirically buttress the capability of Gaussian Receptive Fields to boost the predictive performance of stream learning methods, motivating further research towards extrapolating our findings to other machine learning problems.
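
    A compact sketch of the Gaussian Receptive Field population encoding referred to above, following the commonly used formulation (centers and widths spread over the value range, earlier spikes for stronger activation); constants such as beta are conventional choices rather than values from the paper.

```python
import numpy as np

def grf_encode(value, n_neurons=10, v_min=0.0, v_max=1.0, beta=1.5, t_max=1.0):
    i = np.arange(1, n_neurons + 1)
    centers = v_min + (2 * i - 3) / 2.0 * (v_max - v_min) / (n_neurons - 2)
    width = (v_max - v_min) / (beta * (n_neurons - 2))
    activation = np.exp(-0.5 * ((value - centers) / width) ** 2)  # in (0, 1]
    return t_max * (1.0 - activation)  # high activation -> early spike time

print(np.round(grf_encode(0.3), 3))  # ten spike times, earliest for centers near 0.3
```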

    Updated: 2019-12-07
  • Structured pruning of recurrent neural networks through neuron selection
    Neural Netw. (IF 5.785) Pub Date : 2019-12-05
    Liangjian Wen, Xuanyang Zhang, Haoli Bai, Zenglin Xu

    Recurrent neural networks (RNNs) have recently achieved remarkable successes in a number of applications. However, the huge size and computational burden of these models make it difficult to deploy them on edge devices. A practically effective approach is to reduce the overall storage and computation costs of RNNs by network pruning techniques. Despite their successful applications, pruning methods based on Lasso produce irregular sparse patterns in weight matrices, which is not helpful for practical speedup. To address this issue, we propose a structured pruning method through neuron selection which can remove independent neurons of RNNs. More specifically, we introduce two sets of binary random variables, which can be interpreted as gates or switches to the input neurons and the hidden neurons, respectively. We demonstrate that the corresponding optimization problem can be addressed by minimizing the L0 norm of the weight matrix. Finally, experimental results on language modeling and machine reading comprehension tasks indicate the advantages of the proposed method in comparison with state-of-the-art pruning competitors. In particular, nearly 20× practical speedup during inference was achieved without losing performance for the language model on the Penn TreeBank dataset, indicating the promising performance of the proposed method.
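
    A toy PyTorch sketch of gating input neurons with learnable gates and an L0-style penalty on the expected number of open gates; the paper uses stochastic binary gates on both the input and hidden neurons of an RNN, so this deterministic sigmoid relaxation on a single linear layer is only meant to convey the idea.

```python
import torch
import torch.nn as nn

class GatedLinear(nn.Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.linear = nn.Linear(n_in, n_out)
        self.gate_logits = nn.Parameter(torch.zeros(n_in))  # one gate per input neuron

    def forward(self, x):
        gates = torch.sigmoid(self.gate_logits)  # relaxed (soft) gate in (0, 1)
        return self.linear(x * gates)

    def sparsity_penalty(self):
        # Sum of gate activations: a smooth surrogate for the L0 norm of the gates.
        return torch.sigmoid(self.gate_logits).sum()

layer = GatedLinear(128, 64)
out = layer(torch.randn(32, 128))
loss = out.pow(2).mean() + 1e-3 * layer.sparsity_penalty()
loss.backward()
```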

    Updated: 2019-12-05
  • Discriminative margin-sensitive autoencoder for collective multi-view disease analysis
    Neural Netw. (IF 5.785) Pub Date : 2019-12-02
    Zheng Zhang, Qi Zhu, Guo-Sen Xie, Yi Chen, Zhengming Li, Shuihua Wang

    Medical prediction is always collectively determined based on bioimages collected from different sources or various clinical characterizations described from multiple physiological features. Notably, learning intrinsic structures from multiple heterogeneous features is significant but challenging in multi-view disease understanding. Different from existing methods that separately deal with each single view, this paper proposes a discriminative Margin-Sensitive Autoencoder (MSAE) framework for automated Alzheimer’s disease (AD) diagnosis and accurate protein fold recognition. Generally, our MSAE aims to collaboratively explore the complementary properties of multi-view bioimage features in a semantic-sensitive encoder–decoder paradigm, where the discriminative semantic space is explicitly constructed in a margin-scalable regression model. Specifically, we develop a semantic-sensitive autoencoder, where an encoder projects multi-view visual features into the common semantic-aware latent space, and a decoder is exerted as an additional constraint to reconstruct the respective visual features. In particular, the importance of different views is adaptively weighted by a self-adjusting learning scheme, such that their underlying correlations and complementary characteristics across multiple views are simultaneously preserved in the latent common representations. Moreover, a flexible semantic space is formulated by a margin-scalable support vector machine to improve the discriminability of the learning model. Importantly, a correntropy-induced metric is exploited as a robust regularization measurement to better control outliers for effective classification. A half-quadratic minimization and alternating learning strategy is devised to optimize the resulting framework such that each subproblem admits a closed-form solution in each iterative minimization phase. Extensive experimental results on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) datasets show that our MSAE achieves superior performance for both binary and multi-class classification in AD diagnosis, and evaluations on protein folds demonstrate that our method achieves very encouraging performance on protein structure recognition, outperforming the state-of-the-art methods.

    Updated: 2019-12-02
  • Simplified calcium signaling cascade for synaptic plasticity
    Neural Netw. (IF 5.785) Pub Date : 2019-12-02
    Vladimir Kornijcuk, Dohun Kim, Guhyun Kim, Doo Seok Jeong

    We propose a model for synaptic plasticity based on a calcium signaling cascade. The model simplifies the full signaling pathways from a calcium influx to the phosphorylation (potentiation) and dephosphorylation (depression) of glutamate receptors that are gated by fictive C1 and C2 catalysts, respectively. This model is based on tangible chemical reactions, including fictive catalysts, for long-term plasticity rather than the conceptual theories commonplace in various models, such as preset thresholds of calcium concentration. Our simplified model successfully reproduced the experimental synaptic plasticity induced by different protocols such as (i) a synchronous pairing protocol and (ii) correlated presynaptic and postsynaptic action potentials (APs). Further, ocular dominance plasticity (the experimental verification of the celebrated Bienenstock-Cooper-Munro theory) was reproduced by two model synapses that compete by means of back-propagating APs (bAPs). The key to this competition is synapse-specific bAPs, with reference to bAP boosting on physiological grounds.
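
    The toy sketch below conveys only the general flavor of calcium-driven potentiation and depression competing through two catalyst-like pathways; the rate functions are invented Hill-type curves, not the paper's reaction scheme.

```python
import numpy as np

def simulate_plasticity(ca_trace, dt=1e-3, k_pot=4.0, k_dep=1.5):
    """Toy calcium-driven plasticity: receptor phosphorylation level w in [0, 1].

    The potentiation rate rises steeply with calcium (C1-like pathway) while the
    depression rate rises more gently (C2-like pathway); both are made-up Hill functions.
    """
    w = 0.5
    for ca in ca_trace:
        pot = k_pot * ca**4 / (ca**4 + 1.0)      # steep, high-calcium pathway
        dep = k_dep * ca**2 / (ca**2 + 0.5)      # shallow, moderate-calcium pathway
        w += dt * (pot * (1.0 - w) - dep * w)
    return w

low_ca = 0.4 * np.ones(2000)                     # sustained moderate calcium -> drifts down
high_ca = 1.5 * np.ones(2000)                    # sustained high calcium -> drifts up
print(simulate_plasticity(low_ca), simulate_plasticity(high_ca))
```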

    Updated: 2019-12-02
  • Neonatal seizure detection from raw multi-channel EEG using a fully convolutional architecture
    Neural Netw. (IF 5.785) Pub Date : 2019-11-30
    Alison O’Shea, Gordon Lightbody, Geraldine Boylan, Andriy Temko

    A deep learning classifier for detecting seizures in neonates is proposed. This architecture is designed to detect seizure events from raw electroencephalogram (EEG) signals, as opposed to the state-of-the-art hand-engineered feature-based representation employed in traditional machine learning based solutions. The seizure detection system utilises only convolutional layers in order to process the multichannel time domain signal and is designed to exploit the large amount of weakly labelled data in the training stage. The system performance is assessed on a large database of continuous EEG recordings of 834 h in duration; this is further validated on a held-out publicly available dataset and compared with two baseline SVM based systems. The developed system achieves a 56% relative improvement with respect to a feature-based state-of-the-art baseline, reaching an AUC of 98.5%; it also compares favourably in terms of both performance and run-time. The effect of varying architectural parameters is thoroughly studied. The performance improvement is achieved through a novel architecture design which allows more efficient usage of available training data and end-to-end optimisation from the front-end feature extraction to the back-end classification. The proposed architecture opens new avenues for the application of deep learning to neonatal EEG, where the performance becomes a function of the amount of training data with less dependency on the availability of precise clinical labels.
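
    A hedged PyTorch sketch of a fully convolutional classifier over raw multichannel EEG is shown below; the channel count, kernel sizes and depth are placeholders and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class FullyConvEEG(nn.Module):
    """Toy fully convolutional seizure detector operating on raw multichannel EEG."""
    def __init__(self, n_channels=8, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, stride=2), nn.ReLU(),
        )
        self.classifier = nn.Conv1d(64, n_classes, kernel_size=1)   # no dense layers

    def forward(self, x):                  # x: (batch, channels, time)
        x = self.classifier(self.features(x))
        return x.mean(dim=-1)              # global average pooling over time -> logits

model = FullyConvEEG()
eeg = torch.randn(4, 8, 512)               # 4 windows, 8 channels, 512 samples each
print(model(eeg).shape)                     # torch.Size([4, 2])
```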

    Updated: 2019-11-30
  • Learning physical properties in complex visual scenes: An intelligent machine for perceiving blood flow dynamics from static CT angiography imaging
    Neural Netw. (IF 5.785) Pub Date : 2019-11-30
    Zhifan Gao, Xin Wang, Shanhui Sun, Dan Wu, Junjie Bai, Youbing Yin, Xin Liu, Heye Zhang, Victor Hugo C. de Albuquerque

    Humans perceive physical properties such as motion and elastic force by observing objects in visual scenes. Recent research has proven that computers are capable of inferring physical properties from camera images like humans. However, few studies have addressed physical properties in more complex environments, where humans have difficulty estimating physical quantities directly from visual observation or visualizing the physical process from their daily experience. As an appropriate example, fractional flow reserve (FFR), which measures the blood pressure difference across a vessel stenosis, is an important quantitative physical value for determining the likelihood of myocardial ischemia in clinical coronary intervention procedures. In this study, we propose a novel deep neural network solution (TreeVes-Net) that allows machines to perceive FFR values directly from static coronary CT angiography images. Our framework fully utilizes a tree-structured recurrent neural network (RNN) with a coronary representation encoder. The encoder captures coronary geometric information, providing the blood fluid-related representation. The tree-structured RNN builds a long-distance spatial dependency of blood flow information inside the coronary tree. Experiments performed on 13,000 synthetic coronary trees and 180 real coronary trees from clinical patients show that the values of the area under the ROC curve (AUC) are 0.92 and 0.93 under two clinical criteria. These results demonstrate the effectiveness of our framework and its superiority to seven machine learning based FFR computation methods.

    Updated: 2019-11-30
  • Global exponential synchronization of delayed memristive neural networks with reaction–diffusion terms
    Neural Netw. (IF 5.785) Pub Date : 2019-11-29
    Yanyi Cao, Yuting Cao, Zhenyuan Guo, Tingwen Huang, Shiping Wen

    This paper investigates the global exponential synchronization problem of delayed memristive neural networks (MNNs) with reaction–diffusion terms. First, by utilizing the pinning control technique, two novel kinds of control methods are introduced to achieve synchronization of delayed MNNs with reaction–diffusion terms. Then, with the help of inequality techniques, the pinning control technique, the drive-response concept and the Lyapunov functional method, two sufficient conditions are obtained in the form of algebraic inequalities, which can be used to ensure the exponential synchronization of the proposed delayed MNNs with reaction–diffusion terms. Moreover, the obtained algebraic-inequality-based results complement and improve previously known results. Finally, two illustrative examples are given to support the effectiveness and validity of the obtained theoretical results.

    Updated: 2019-11-30
  • Bipartite synchronization for inertia memristor-based neural networks on coopetition networks
    Neural Netw. (IF 5.785) Pub Date : 2019-11-29
    Ning Li, Wei Xing Zheng

    This paper addresses the bipartite synchronization problem of coupled inertia memristor-based neural networks with both cooperative and competitive interactions. Generally, coopetition interaction networks are modeled by a signed graph, and the corresponding Laplacian matrix differs from that of a nonnegative graph. Coopetition networks with structural balance can reach a final state with identical magnitude but opposite sign, which is called bipartite synchronization. Additionally, an inertial system is a second-order differential system. In this paper, firstly, by using suitable variable substitutions, the inertia memristor-based neural networks (IMNNs) are transformed into first-order differential equations. Secondly, by designing suitable discontinuous controllers, bipartite synchronization criteria for IMNNs with or without a leader node on coopetition networks are obtained. Finally, two illustrative examples with simulations are provided to validate the effectiveness of the proposed discontinuous control strategies for achieving bipartite synchronization.
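
    For reference, the signed-graph Laplacian that replaces the usual nonnegative-graph Laplacian can be computed as below, with node degrees taken from absolute edge weights; the example adjacency matrix is arbitrary.

```python
import numpy as np

def signed_laplacian(A):
    """Laplacian of a signed graph: L = D - A with D = diag(sum_j |a_ij|)."""
    D = np.diag(np.abs(A).sum(axis=1))
    return D - A

# Cooperative (+) and competitive (-) couplings among four nodes
A = np.array([[0,  1, -1,  0],
              [1,  0,  0, -1],
              [-1, 0,  0,  1],
              [0, -1,  1,  0]], dtype=float)
print(signed_laplacian(A))
```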

    Updated: 2019-11-30
  • Finite-time and fixed-time anti-synchronization of Markovian neural networks with stochastic disturbances via switching control
    Neural Netw. (IF 5.785) Pub Date : 2019-11-28
    Peng Wan, Dihua Sun, Min Zhao

    This paper proposes a unified theoretical framework to study the problem of finite/fixed-time drive-response anti-synchronization for a class of Markovian stochastic neural networks. State feedback switching controllers without the sign function are designed to achieve finite/fixed-time anti-synchronization of the addressed systems. Compared with existing synchronization criteria, our results indicate that the switching controllers without the sign function are less conservative, and removing the sign function also avoids the chattering problem. By employing the Lyapunov functional method and properties of the Wiener process, several finite/fixed-time synchronization criteria are presented and the corresponding settling times are calculated as well. Finally, three numerical examples are provided to illustrate the effectiveness of the theoretical results.

    Updated: 2019-11-29
  • Multiple Partial Empirical Kernel Learning with Instance Weighting and Boundary Fitting
    Neural Netw. (IF 5.785) Pub Date : 2019-11-28
    Zonghai Zhu, Zhe Wang, Dongdong Li, Wenli Du, Yangming Zhou

    By dividing the original data set into several sub-sets, Multiple Partial Empirical Kernel Learning (MPEKL) constructs multiple kernel matrices corresponding to the sub-sets, and these kernel matrices are decomposed to provide explicit kernel functions. Then, the instances in the original data set are mapped into multiple kernel spaces, which provide better performance than a single kernel space. It is known that instances in different locations and distributions behave differently. Therefore, this paper defines instance weights in accordance with the location and distribution of the instances. According to location, the instances can be categorized into intrinsic instances, boundary instances and noise instances. Generally, the boundary instances, as well as the minority instances in an imbalanced data set, are assigned high weights. Meanwhile, a regularization term, which regulates the classification hyperplane to fit the distribution trend of the class boundary, is constructed from the boundary instances. Then, the instance weights and the regularization term are introduced into MPEKL to form an algorithm named Multiple Partial Empirical Kernel Learning with Instance Weighting and Boundary Fitting (IBMPEKL). Experiments demonstrate the good performance of IBMPEKL and validate the effectiveness of the instance weighting and boundary fitting.
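
    The sketch below shows one way to turn the kernel matrix of a single data subset into an explicit (partial empirical) kernel map via eigendecomposition; the RBF kernel and the subset are arbitrary choices, and the instance-weighting and boundary-fitting terms of IBMPEKL are not reproduced.

```python
import numpy as np

def empirical_kernel_map(X_subset, gamma=0.5, eps=1e-10):
    """Build an explicit feature map from the RBF kernel matrix of a data subset.

    Returns a function mapping new points into the r-dimensional empirical
    kernel space, where r is the numerical rank of the subset's kernel matrix.
    """
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    K = rbf(X_subset, X_subset)
    vals, vecs = np.linalg.eigh(K)
    keep = vals > eps
    P = vecs[:, keep] / np.sqrt(vals[keep])         # projection: Lambda^{-1/2} V^T k(x)
    return lambda X_new: rbf(X_new, X_subset) @ P

X_sub = np.random.randn(20, 5)                       # one partition of the training set
phi = empirical_kernel_map(X_sub)
print(phi(np.random.randn(3, 5)).shape)              # (3, rank of K)
```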

    Updated: 2019-11-29
  • Modeling functional resting-state brain networks through neural message passing on the human connectome
    Neural Netw. (IF 5.785) Pub Date : 2019-11-23
    Julio A. Peraza-Goicolea, Eduardo Martínez-Montes, Eduardo Aubert, Pedro A. Valdés-Hernández, Roberto Mulet

    In this work, we propose a natural model for information flow in the brain through a neural message-passing dynamics on a structural network of macroscopic regions, such as the human connectome (HC). In our model, each brain region is assumed to have a binary behavior (active or not), the strengths of interactions among them are encoded in the anatomical connectivity matrix defined by the HC, and the dynamics of the system is defined by the Belief Propagation (BP) algorithm, working near the critical point of the network. We show that in the absence of direct external stimuli the BP algorithm converges to a spatial map of activations that is similar to the Default Mode Network (DMN) of the brain, which has been defined from the analysis of functional MRI data. Moreover, we use Susceptibility Propagation (SP) to compute the matrix of long-range correlations between the different regions and show that the modules defined by a clustering of this matrix resemble several Resting State Networks (RSN) determined experimentally. Both results suggest that the functional DMN and RSNs can be seen as simple consequences of the anatomical structure of the brain and a neural message-passing dynamics between macroscopic regions. With the new model, we explore predictions on how functional maps change when the anatomical brain network suffers structural alterations, as in Alzheimer’s disease and in lesions of the Corpus Callosum. The implications and novel interpretations suggested by the model, as well as the role of criticality, are discussed.
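
    A minimal sketch of sum-product belief propagation for binary (+/-1) units coupled through a weighted adjacency matrix is given below; the random couplings stand in for the connectome and the scaling of the interactions is arbitrary.

```python
import numpy as np

def ising_bp(J, h=None, beta=1.0, n_iter=200, damping=0.5):
    """Sum-product belief propagation for pairwise binary (+/-1) variables.

    J: symmetric coupling matrix (here standing in for anatomical connectivity),
    h: local fields. Returns the marginal probability P(x_i = +1) for each node.
    """
    n = J.shape[0]
    h = np.zeros(n) if h is None else h
    m = np.zeros((n, n))                     # m[i, j]: message field sent from i to j
    for _ in range(n_iter):
        # Cavity field at i when the message coming back from j is excluded
        cav = h[:, None] + m.sum(axis=0)[:, None] - m.T
        new = np.arctanh(np.tanh(beta * J) * np.tanh(beta * cav)) / beta
        new[J == 0.0] = 0.0                  # no edge, no message
        m = damping * m + (1.0 - damping) * new
    belief = h + m.sum(axis=0)               # total field at each node
    return 1.0 / (1.0 + np.exp(-2.0 * beta * belief))

rng = np.random.default_rng(0)
J = rng.random((10, 10)) * (rng.random((10, 10)) < 0.3)
J = np.triu(J, 1)
J = J + J.T                                   # symmetric, zero-diagonal couplings
print(ising_bp(0.5 * J))                      # activation probability per region
```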

    Updated: 2019-11-26
  • Person identification using fusion of iris and periocular deep features
    Neural Netw. (IF 5.785) Pub Date : 2019-11-23
    Saiyed Umer, Alamgir Sardar, Bibhas Chandra Dhara, Ranjeet Kumar Raout, Hari Mohan Pandey

    A novel method for person identification based on the fusion of iris and periocular biometrics is proposed in this paper. The challenges of image acquisition under near-infrared or visible-wavelength light in constrained and unconstrained environments are considered. The proposed system is divided into image preprocessing, data augmentation, and feature learning for classification. In image preprocessing, the annular iris portion is segmented out from an eyeball image and then transformed into a fixed-size image region. The parameters of iris localization are used to extract the local periocular region. Due to the different imaging environments, the images suffer from various noise artifacts, which create data insufficiency and complicate the recognition task. To overcome this, a novel data augmentation technique is introduced. For the feature extraction and classification tasks, the well-known VGG16, ResNet50, and Inception-v3 CNN architectures are employed. The iris and periocular results are fused to increase the performance of the recognition system. Extensive experimental results are demonstrated on four benchmark iris databases, namely MMU1, UPOL, CASIA-Iris-distance, and UBIRIS.v2. The comparison with state-of-the-art methods on these databases shows the robustness and effectiveness of the proposed approach.
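
    The sketch below illustrates feature-level fusion of two CNN streams (here two ResNet50 backbones with the classification head removed, assuming a recent torchvision); fusion by concatenation into a single classifier is an assumption for illustration, not necessarily the paper's exact fusion rule.

```python
import torch
import torch.nn as nn
from torchvision import models

class IrisPeriocularFusion(nn.Module):
    """Toy feature-level fusion of an iris stream and a periocular stream."""
    def __init__(self, n_subjects=100):
        super().__init__()
        # Two ResNet50 backbones, final FC replaced by identity (2048-d features each)
        self.iris_net = models.resnet50(weights=None)
        self.peri_net = models.resnet50(weights=None)
        self.iris_net.fc = nn.Identity()
        self.peri_net.fc = nn.Identity()
        self.classifier = nn.Linear(2048 * 2, n_subjects)

    def forward(self, iris_img, peri_img):
        fused = torch.cat([self.iris_net(iris_img), self.peri_net(peri_img)], dim=1)
        return self.classifier(fused)

model = IrisPeriocularFusion(n_subjects=50)
iris = torch.randn(2, 3, 224, 224)
peri = torch.randn(2, 3, 224, 224)
print(model(iris, peri).shape)    # torch.Size([2, 50])
```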

    Updated: 2019-11-26
  • A scalable multi-signal approach for the parallelization of self-organizing neural networks
    Neural Netw. (IF 5.785) Pub Date : 2019-11-23
    Mirto Musci, Giacomo Parigi, Virginio Cantoni, Marco Piastra

    Self-Organizing Neural Networks (SONNs) have a wide range of applications with massive computational requirements that often need to be satisfied with optimized parallel algorithms and implementations. In the literature, SONNs have generally been parallelized with GPU computing according to a single-signal paradigm: each GPU thread manages one or more nodes of the network and works concurrently on one input signal at a time. This paper presents two contributions. The first is the experimental proof that the single-signal approach for SONNs is not optimal for the task, as it is intrinsically sequential at its core and thus inherently limited in its performance. The non-optimality of the single-signal paradigm is illustrated via a specific and simplified benchmark. The second contribution is the introduction of a new multi-signal paradigm for the parallelization of SONNs, whereby multiple signals are processed at once in each iteration, hence allowing different GPU threads to work on different signals. The advantages of the multi-signal approach are shown through several benchmarks involving the Self-Organizing Adaptive Map (SOAM) algorithm as a basis for evaluation. Having a graph-based termination condition that depends on the features of the network being grown, the SOAM algorithm allows assessing both the functional equivalence and the performance of the proposed paradigm without relying on arbitrary thresholds. Nonetheless, the evaluation proposed has a broader scope, since it refers to a unified framework for the GPU parallelization of a generic SONN.
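
    To illustrate the multi-signal idea, the sketch below performs one batched self-organizing-map update in which a whole batch of signals is matched against all nodes at once and the resulting updates are accumulated before being applied; the SOAM-specific growth and termination machinery is omitted.

```python
import numpy as np

def batched_som_step(weights, signals, lr=0.1, sigma=1.0):
    """One multi-signal update: process a whole batch of input signals at once.

    weights: (n_nodes, dim) node prototypes; signals: (batch, dim).
    Every signal finds its best-matching node in parallel, and the node updates
    from all signals in the batch are averaged before being applied.
    """
    d2 = ((signals[:, None, :] - weights[None, :, :]) ** 2).sum(-1)    # (batch, n_nodes)
    bmu = d2.argmin(axis=1)                                            # winner per signal
    # Neighbourhood measured in prototype space (grid topology omitted for brevity)
    node_d2 = ((weights[:, None, :] - weights[None, :, :]) ** 2).sum(-1)
    neigh = np.exp(-node_d2[bmu] / (2 * sigma**2))                     # (batch, n_nodes)
    delta = lr * neigh[:, :, None] * (signals[:, None, :] - weights[None, :, :])
    return weights + delta.mean(axis=0)

W = np.random.rand(16, 3)             # 16 nodes in a 3-D input space
X = np.random.rand(64, 3)             # 64 signals processed in a single iteration
W = batched_som_step(W, X)
print(W.shape)                         # (16, 3)
```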

    Updated: 2019-11-26
  • Simultaneously learning affinity matrix and data representations for machine fault diagnosis
    Neural Netw. (IF 5.785) Pub Date : 2019-11-22
    Yue Li, Yijie Zeng, Tianchi Liu, Xiaofan Jia, Guang-Bin Huang

    Recently, preserving the geometry information of data while learning representations has attracted increasing attention in intelligent machine fault diagnosis. Existing geometry preserving methods require predefining the similarities between data points in the original data space. The predefined affinity matrix, which is also known as the similarity matrix, is then used to preserve geometry information during the process of representation learning. Hence, the data representations are learned under the assumption of fixed and known prior knowledge, i.e., similarities between data points. However, such assumed prior knowledge can hardly capture the real relationships between data points precisely, especially in high dimensional space. Also, using two separate steps to learn the affinity matrix and the data representations may not be optimal or universal for data classification. In this paper, based on the extreme learning machine autoencoder (ELM-AE), we propose to learn the data representations and the affinity matrix simultaneously. The affinity matrix is treated as a variable and unified in the objective function of ELM-AE. Instead of predefining and fixing the affinity matrix, the proposed method adjusts the similarities by taking into account its capability of capturing the geometry information in both the original data space and the non-linearly mapped representation space. Meanwhile, the geometry information of the original data can be preserved in the embedded representations with the help of the affinity matrix. Experimental results on several benchmark datasets demonstrate the effectiveness of the proposed method, and the empirical study also shows it is an efficient tool for machine fault diagnosis.
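
    A bare-bones ELM autoencoder is sketched below (random hidden layer, ridge-regression output weights); the joint learning of the affinity matrix proposed in the paper is not reproduced.

```python
import numpy as np

def elm_autoencoder(X, n_hidden=64, reg=1e-3, seed=0):
    """Basic ELM-AE: random input weights, closed-form output weights B."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))     # random, untrained input weights
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))               # random hidden activations
    # Ridge-regularized least squares: B = (H^T H + reg I)^-1 H^T X
    B = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    return B

X = np.random.randn(200, 30)            # e.g. vibration features of a machine
B = elm_autoencoder(X)
representation = X @ B.T                 # ELM-AE embedding used for later diagnosis
print(representation.shape)              # (200, 64)
```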

    Updated: 2019-11-22
  • Dimension independent bounds for general shallow networks
    Neural Netw. (IF 5.785) Pub Date : 2019-11-22
    H.N. Mhaskar

    This paper proves an abstract theorem addressing in a unified manner two important problems in function approximation: avoiding the curse of dimensionality and estimating the degree of approximation for out-of-sample extension in manifold learning. We consider an abstract (shallow) network that includes, for example, neural networks, radial basis function networks, and kernels on data-defined manifolds used for function approximation in various settings. A deep network is obtained by a composition of the shallow networks according to a directed acyclic graph, representing the architecture of the deep network. In this paper, we prove dimension independent bounds for approximation by shallow networks in the very general setting of what we have called G-networks on a compact metric measure space, where the notion of dimension is defined in terms of the cardinality of maximal distinguishable sets, generalizing the notion of dimension of a cube or a manifold. Our techniques give bounds that improve without saturation with the smoothness of the kernel involved in an integral representation of the target function. In the context of manifold learning, our bounds provide estimates on the degree of approximation for an out-of-sample extension of the target function to the ambient space. One consequence of our theorem is that, without the requirement of robust parameter selection, deep networks using a non-smooth activation function such as the ReLU do not provide any significant advantage over shallow networks in terms of the degree of approximation alone.

    Updated: 2019-11-22
  • Generative adversarial networks with mixture of t-distributions noise for diverse image generation
    Neural Netw. (IF 5.785) Pub Date : 2019-11-18
    Jinxuan Sun, Guoqiang Zhong, Yang Chen, Yongbin Liu, Tao Li, Kaizhu Huang

    Image generation is a long-standing problem in the machine learning and computer vision areas. In order to generate images with high diversity, we propose a novel model called generative adversarial networks with mixture of t-distributions noise (tGANs). In tGANs, the latent generative space is formulated using a mixture of t-distributions. Particularly, the parameters of the components in the mixture of t-distributions can be learned along with the others in the model. To improve the diversity of the generated images in each class, each noise vector and a class codeword are concatenated as the input of the generator of tGANs. In addition, a classification loss is added to both the generator and the discriminator losses to strengthen their performance. We have conducted extensive experiments to compare tGANs with a state-of-the-art pixel-by-pixel image generation approach, pixelCNN, and related GAN-based models. The experimental results and statistical comparisons demonstrate that tGANs perform significantly better than pixelCNN and related GAN-based models for diverse image generation.
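
    The sketch below samples generator noise from a mixture of Student-t distributions with fixed, hand-picked component parameters; in tGANs these parameters are learned together with the rest of the model.

```python
import numpy as np

def sample_t_mixture_noise(batch, dim, mus, scales, dfs, weights, seed=None):
    """Draw latent noise vectors from a mixture of (diagonal) t-distributions."""
    rng = np.random.default_rng(seed)
    comps = rng.choice(len(weights), size=batch, p=weights)        # pick a component per sample
    t = rng.standard_t(df=np.asarray(dfs)[comps, None], size=(batch, dim))
    return np.asarray(mus)[comps, None] + np.asarray(scales)[comps, None] * t

z = sample_t_mixture_noise(
    batch=8, dim=100,
    mus=[-1.0, 0.0, 1.0], scales=[0.5, 1.0, 0.5], dfs=[3.0, 5.0, 3.0],
    weights=[0.3, 0.4, 0.3], seed=0,
)
print(z.shape)      # (8, 100); would be concatenated with a class codeword and fed to the generator
```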

    Updated: 2019-11-18
  • Global Mittag-Leffler stability and synchronization of discrete-time fractional-order complex-valued neural networks with time delay
    Neural Netw. (IF 5.785) Pub Date : 2019-11-15
    Xingxing You, Qiankun Song, Zhenjiang Zhao

    Without decomposing complex-valued systems into real-valued systems, this paper investigates the existence, uniqueness, global Mittag-Leffler stability and global Mittag-Leffler synchronization of discrete-time fractional-order complex-valued neural networks (FCVNNs) with time delay. Inspired by Lyapunov’s direct method for continuous-time systems, a class of discrete-time FCVNNs is further discussed by employing the fractional-order extension of Lyapunov’s direct method. Firstly, by means of contraction mapping theory and Cauchy’s inequality, a sufficient condition is presented to ascertain the existence and uniqueness of the equilibrium point for discrete-time FCVNNs. Then, based on the theory of discrete fractional calculus, the discrete Laplace transform, the theory of complex functions and discrete Mittag-Leffler functions, a sufficient condition is established for the global Mittag-Leffler stability of the proposed networks. Additionally, by applying Lyapunov’s direct method and designing an effective control scheme, a sufficient criterion is derived to ensure the global Mittag-Leffler synchronization of discrete-time FCVNNs. Finally, two numerical examples are presented to demonstrate the feasibility and validity of the obtained results.

    Updated: 2019-11-18
  • A review on neural network models of schizophrenia and autism spectrum disorder
    Neural Netw. (IF 5.785) Pub Date : 2019-11-13
    Pablo Lanillos, Daniel Oliva, Anja Philippsen, Yuichi Yamashita, Yukie Nagai, Gordon Cheng

    This survey presents the most relevant neural network models of autism spectrum disorder and schizophrenia, from the first connectionist models to recent deep network architectures. We analyzed and compared the most representative symptoms with their neural model counterparts, detailing the alteration introduced in the network that generates each of the symptoms, and identifying their strengths and weaknesses. We additionally cross-compared Bayesian and free-energy approaches, as they are widely applied to modeling psychiatric disorders and share basic mechanisms with neural networks. Models of schizophrenia mainly focused on hallucinations and delusional thoughts, using neural dysconnections or inhibitory imbalance as the predominant alteration. Models of autism rather focused on perceptual difficulties, mainly excessive attention to environment details, implemented as excessive inhibitory connections or increased sensory precision. We found an excessively tight view of the psychopathologies around one specific and simplified effect, usually constrained to the technical idiosyncrasy of the network architecture used. Recent theories and evidence on sensorimotor integration and body perception, combined with modern neural network architectures, could offer a broader and novel spectrum to approach these psychopathologies. This review emphasizes the power of artificial neural networks for modeling some symptoms of neurological disorders but also calls for further developing these techniques in the field of computational psychiatry.

    Updated: 2019-11-13
  • Local distinguishability aggrandizing network for human anomaly detection
    Neural Netw. (IF 5.785) Pub Date : 2019-11-13
    Maoguo Gong, Huimin Zeng, Yu Xie, Hao Li, Zedong Tang

    With the growing demand for intelligent systems to prevent abnormal events, many methods have been proposed to detect and locate anomalous behaviors in surveillance videos. However, most of these methods suffer from two main shortcomings: distraction of the network and insufficient discriminating ability. In this paper, we propose a local distinguishability aggrandizing network (LDA-Net) trained in a supervised manner, consisting of a human detection module and an anomaly detection module. In the human detection module, we obtain segmented patches of specific human subjects and take them as the input of the latter module to focus the network on learning the motion characteristics of each person. In addition, considering that auxiliary information, such as the specific type of an action, can aggrandize the whole network to extract distinguishable detail features of normal and abnormal behaviors, the proposed anomaly detection module is composed of a primary binary classification sub-branch and an auxiliary distinguishability aggrandizing sub-branch, through which we can jointly detect anomalies and recognize actions. To further reduce the misclassification of extremely imbalanced datasets, we design a novel inhibition loss function and embed it into the auxiliary sub-branch of the anomaly detection module. Experiments on several public benchmark datasets for frame-level and pixel-level anomaly detection show that the proposed supervised LDA-Net achieves state-of-the-art results on the UCSD Ped2 and Subway Exit datasets.

    Updated: 2019-11-13
  • Liver disease screening based on densely connected deep neural networks
    Neural Netw. (IF 5.785) Pub Date : 2019-11-11
    Zhenjie Yao, Jiangong Li, Zhaoyu Guan, Yancheng Ye, Yixin Chen

    Liver disease is an important public health problem. Liver Function Tests (LFT) are the most readily available tests for liver disease diagnosis, and most liver diseases manifest as abnormal LFT results, so liver disease screening from LFT data is helpful for computer aided diagnosis. In this paper, we propose a densely connected deep neural network (DenseDNN) built on the 13 most commonly used LFT indicators and demographic information of subjects for liver disease screening. The algorithm was tested on a dataset of 76,914 samples (more than 100 times the size of previous datasets). The Area Under the Curve (AUC) of DenseDNN is 0.8919, that of a plain DNN is 0.8867, that of random forest is 0.8790, and that of logistic regression is 0.7974. The performance of the deep learning models is significantly better than that of conventional methods, and among the deep learning methods, DenseDNN performs better than the plain DNN.
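
    A hedged sketch of a densely connected feed-forward network for tabular LFT features is given below, where every hidden layer receives the concatenation of all earlier outputs; the layer sizes and the 15-feature input are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DenseMLP(nn.Module):
    """Each hidden layer receives the concatenation of all previous outputs."""
    def __init__(self, n_features=15, hidden=32, n_layers=3, n_classes=2):
        super().__init__()
        self.layers = nn.ModuleList()
        in_dim = n_features
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU()))
            in_dim += hidden                        # dense connectivity grows the input
        self.out = nn.Linear(in_dim, n_classes)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.out(torch.cat(feats, dim=1))

model = DenseMLP(n_features=15)                     # e.g. 13 LFT indicators + age + sex (assumed)
print(model(torch.randn(4, 15)).shape)              # torch.Size([4, 2])
```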

    Updated: 2019-11-11
  • Model-based optimized phase-deviation deep brain stimulation for Parkinson’s disease
    Neural Netw. (IF 5.785) Pub Date : 2019-11-09
    Ying Yu, Yuqing Hao, Qingyun Wang

    High-frequency deep brain stimulation (HF-DBS) of the subthalamic nucleus (STN), globus pallidus interna (GPi) or globus pallidus externa (GPe) is often considered an effective method for the treatment of Parkinson’s disease (PD). However, the stimulation of a single nucleus by HF-DBS can cause specific physical damage, produce side effects and usually consumes more electrical energy. Therefore, we use a biophysically-based model of basal ganglia–thalamic circuits to explore more effective stimulation patterns that reduce adverse effects and save energy. In this paper, we computationally investigate the combined DBS of two nuclei with a phase deviation between the two stimulation waveforms (CDBS). Three different stimulation combination strategies are proposed, i.e., STN and GPe CDBS (SED), STN and GPi CDBS (SID), as well as GPi and GPe CDBS (GGD). It is found that anti-phase CDBS is more effective in improving parkinsonian dynamical properties, including desynchronization of neurons and the recovery of the thalamic relay ability. Detailed simulation shows that anti-phase SED and GGD are superior to SID. Besides, compared with HF-DBS, the energy consumption can be largely reduced by SED and GGD (by 72.5% and 65.5%, respectively). These results provide new insights into the optimal stimulation parameters and target choice for PD, which may be helpful for clinical practice.
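
    The sketch below generates two rectangular stimulation pulse trains with a configurable phase deviation (anti-phase when the offset is half a period); the frequency, pulse width and amplitude are illustrative values, not the model's parameters.

```python
import numpy as np

def pulse_train(t, freq_hz, width_ms, amp, phase_deg=0.0):
    """Rectangular stimulation pulses at freq_hz with a given phase offset."""
    period = 1.0 / freq_hz
    shift = (phase_deg / 360.0) * period
    return amp * (((t - shift) % period) < width_ms * 1e-3).astype(float)

t = np.arange(0.0, 0.1, 1e-4)                        # 100 ms at 0.1 ms resolution
stim_stn = pulse_train(t, freq_hz=130, width_ms=0.3, amp=2.5, phase_deg=0)
stim_gpe = pulse_train(t, freq_hz=130, width_ms=0.3, amp=2.5, phase_deg=180)  # anti-phase
print(np.sum(stim_stn * stim_gpe))                   # 0.0: the two trains never overlap
```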

    Updated: 2019-11-11
  • Partition level multiview subspace clustering
    Neural Netw. (IF 5.785) Pub Date : 2019-11-06
    Zhao Kang, Xinjia Zhao, Chong Peng, Hongyuan Zhu, Joey Tianyi Zhou, Xi Peng, Wenyu Chen, Zenglin Xu

    Multiview clustering has gained increasing attention recently due to its ability to deal with data from multiple sources (views) and to explore complementary information between different views. Among various methods, multiview subspace clustering methods provide encouraging performance. They mainly integrate the multiview information in the space where the data points lie. Hence, their performance may deteriorate because of noise in each individual view or inconsistency between heterogeneous features. For multiview clustering, the basic premise is that there exists a shared partition among all views. Therefore, the natural space for multiview clustering should be that of all partitions. Orthogonal to existing methods, we propose to fuse multiview information at the partition level following two intuitive assumptions: (i) each partition is a perturbation of the consensus clustering; (ii) a partition that is close to the consensus clustering should be assigned a large weight. Finally, we propose a unified multiview subspace clustering model which incorporates graph learning from each view, the generation of basic partitions, and the fusion of the consensus partition. These three components are seamlessly integrated and can be iteratively boosted by each other towards an overall optimal solution. Experiments on four benchmark datasets demonstrate the efficacy of our approach against state-of-the-art techniques.

    Updated: 2019-11-06
  • New approach to global Mittag-Leffler synchronization problem of fractional-order quaternion-valued BAM neural networks based on a new inequality
    Neural Netw. (IF 5.785) Pub Date : 2019-11-04
    Jianying Xiao, Shiping Wen, Xujun Yang, Shouming Zhong

    In this paper, a novel kind of neural network named fractional-order quaternion-valued bidirectional associative memory neural networks (FQVBAMNNs) is formulated. On the one hand, by applying the Hamilton rules of quaternion multiplication, which is essentially non-commutative, the system of FQVBAMNNs is separated into eight fractional-order real-valued systems. Meanwhile, the activation functions are considered to be quaternion-valued linear threshold functions, which helps to reduce unnecessary computational complexity. On the other hand, based on fractional-order Lyapunov technology, a new fractional-order derivative inequality is established. Mainly by employing the new inequality technique, constructing three novel Lyapunov-Krasovskii functionals (LKFs) and designing simple linear controllers, the global Mittag-Leffler synchronization problems are investigated and the corresponding criteria are acquired for the system of FQVBAMNNs and its special cases, such as fractional-order complex-valued BAM neural networks (FCVBAMNNs) and fractional-order real-valued BAM neural networks (FRVBAMNNs), respectively. Finally, two numerical examples are given to show the effectiveness and applicability of the proposed results.
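
    For reference, the Hamilton rules that make quaternion multiplication non-commutative are implemented below; this is standard quaternion algebra rather than anything specific to FQVBAMNNs.

```python
import numpy as np

def hamilton_product(p, q):
    """Quaternion product per Hamilton rules; p, q = (real, i, j, k) components."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

p = np.array([1.0, 2.0, 3.0, 4.0])
q = np.array([0.5, -1.0, 0.0, 2.0])
print(hamilton_product(p, q))
print(hamilton_product(q, p))     # differs: quaternion multiplication is non-commutative
```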

    Updated: 2019-11-04
Contents have been reproduced by permission of the publishers.