Neural Networks

Volume 123, March 2020, Pages 288-298
Extreme learning machine for a new hybrid morphological/linear perceptron

https://doi.org/10.1016/j.neunet.2019.12.003

Highlights

  • New HMLP model having linear neurons and morphological units in hidden layer.

  • A morphological unit (dendrite) computes the infimum of an erosion and anti-dilation.

  • Circumventing non-differentiability issues, the HMLP can be trained using ELM.

  • Higher mean classification rate than related models in some benchmark problems.

  • Significant statistical difference compared to other models, except the MP/CL, which is very susceptible to the curse of dimensionality.

Abstract

Morphological neural networks (MNNs) can be characterized as a class of artificial neural networks that perform an operation of mathematical morphology at every node, possibly followed by the application of an activation function. Morphological perceptrons (MPs) and (gray-scale) morphological associative memories are among the most widely known MNN models. Since their neuronal aggregation functions are not differentiable, classical methods of non-linear optimization cannot, in principle, be directly applied to train these networks. The same observation holds true for hybrid morphological/linear perceptrons and other related models. Circumventing these problems of non-differentiability, this paper introduces an extreme learning machine approach for training a hybrid morphological/linear perceptron, whose morphological components are drawn from previous MP models. We apply the resulting model to a number of well-known classification problems from the literature and compare the performance of our model with those of several related models, including some recent MNNs and hybrid morphological/linear neural networks.

Introduction

Along with morphological associative memories (MAMs), morphological perceptrons (MPs) were among the first morphological neural network (MNN) models that appeared in the literature (Ritter et al., 1999, Ritter and Sussner, 1996, Ritter and Sussner, 1997, Ritter et al., 1998, Sussner, 1998, Sussner and Esmi, 2011, Sussner and Valle, 2006). The commonality shared by all MNNs is that their nodes, called “morphological neurons”, apply a “morphological operator” before applying an activation function. Unfortunately, there is no precise definition of a morphological operator as far as applications in image processing and analysis are concerned. According to Henk Heijmans’ seminal book entitled “Morphological Image Operators” (Heijmans, 1994): “Any attempt to find a formal definition of a morphological operator, however, would lead inevitably to the following dilemma: either it would be too restrictive, excluding operators that should not be excluded a priori, or it would be too general, leading to a ‘theory of everything’”.

However, when resorting to the algebraic framework of mathematical morphology (MM), one finds exact definitions of the four types of elementary operators of MM on complete lattices. These four operators are erosion, dilation, anti-erosion, and anti-dilation (Banon & Barrera, 1993). Banon and Barrera have shown that every mapping between complete lattices can be expressed both as a supremum of pair-wise infima of erosions and anti-dilations and as an infimum of pair-wise suprema of dilations and anti-erosions. In view of this fact, it is interesting to observe that the overall computations at every node in the output layer of an MP can be represented as a supremum of pair-wise infima of gray-scale erosions and anti-dilations (Sussner & Esmi, 2011). The level 0 curve of an infimum of a gray-scale erosion and an anti-dilation represents a hyperbox. Several constructive (hyperbox) learning algorithms for training MPs (Sussner, 1998), in particular competitive training algorithms (Sussner and Esmi, 2009a, Sussner and Esmi, 2009b, Sussner and Esmi, 2011), have appeared in the literature. Other hyperbox learning algorithms were used by Ritter, Urcid, Sossa and Guevara to train parameters of dendritic MNNs (Ritter and Urcid, 2003, Sossa and Guevara, 2014). In classification problems, each resulting hyperbox encloses patterns, all or most of which belong to a single class. Note that hyperbox-based algorithms were also independently developed by Kaburlasos et al. for training fuzzy lattice neurocomputing models (Kaburlasos et al., 2007, Kaburlasos and Petridis, 2000).
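To make the hyperbox interpretation concrete, consider the following minimal Python sketch. The particular sign conventions chosen here for the erosion and the anti-dilation are assumptions made purely for illustration and need not coincide with the parametrization used in the MP literature; the point is only that the set on which the infimum is non-negative is an axis-parallel hyperbox.

    import numpy as np

    def erosion(x, v):
        # gray-scale erosion by v (assumed form): min_j (x_j - v_j)
        return np.min(x - v)

    def anti_dilation(x, w):
        # anti-dilation by w (assumed form): min_j (w_j - x_j)
        return np.min(w - x)

    def morphological_unit(x, v, w):
        # dendrite: infimum of an erosion and an anti-dilation
        return min(erosion(x, v), anti_dilation(x, w))

    # The set {x : morphological_unit(x, v, w) >= 0} is the hyperbox
    # [v_1, w_1] x ... x [v_n, w_n].
    v = np.array([0.0, 1.0])   # lower corner (illustrative values)
    w = np.array([2.0, 3.0])   # upper corner (illustrative values)
    print(morphological_unit(np.array([1.0, 2.0]), v, w) >= 0)   # True: inside the box
    print(morphological_unit(np.array([3.0, 2.0]), v, w) >= 0)   # False: outside the box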

One of the main reasons why researchers devised constructive training algorithms to train morphological perceptrons or dendritic MNNs, instead of resorting to standard methods of non-linear optimization, is the fact that the morphological operations used in these models are not differentiable. In their recent approaches towards training dendritic morphological and hybrid morphological/linear neural networks via stochastic gradient descent (Hernández et al., 2018, Zamora and Sossa, 2017), Zamora, Sossa, et al. apparently dealt with locations where the partial derivatives do not exist by setting the search directions equal to 0. In any case, the vector of search directions is not continuous, which is certainly a disadvantage. Also note that at locations where the function described by a dendrite is differentiable, all but one of its partial derivatives with respect to the weights are equal to 0.
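The last observation can be checked directly. Assuming, purely for illustration, a dendrite computing an erosion of the form below (the parametrization in the cited works may differ), the function is differentiable wherever the minimum is attained by a single index, and there exactly one partial derivative with respect to the weights is non-zero:

\[
\tau(\mathbf{x}) = \bigwedge_{j=1}^{n} (x_j - v_j),
\qquad
\frac{\partial \tau}{\partial v_k}(\mathbf{x}) =
\begin{cases}
-1, & k = \arg\min_j \,(x_j - v_j),\\
\phantom{-}0, & \text{otherwise}.
\end{cases}
\]

At points where the minimizing index is not unique, the partial derivatives do not exist at all, and an SGD implementation must commit to some subgradient-like choice, such as the zero search direction mentioned above.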

An alternative for training the weights of morphological neurons without encountering these problems consists in smoothing the morphological operations. This approach was employed by Pessoa and Maragos in their hybrid morphological/rank/linear neural network (MRL-NN) (Pessoa & Maragos, 2000) as well as in Araújo et al.’s morphological and hybrid morphological/linear neural networks (Araújo, 2012, Araújo et al., 2017, Araújo and Sussner, 2010). Finally, note that MRL-NNs, other hybrid models, and dendritic MNNs can also be trained by using methods of evolutionary computation (Araújo and Ferreira, 2013, Arce et al., 2018).
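To give a flavor of the smoothing idea (the MRL-NN and the models of Araújo et al. employ their own rank-based and pseudo-gradient constructions; the soft-minimum below is only an illustrative stand-in), a minimum can be replaced by a differentiable approximation such as

\[
\bigwedge_{j=1}^{n} z_j \;\approx\; -\frac{1}{\beta}\,\log \sum_{j=1}^{n} e^{-\beta z_j}, \qquad \beta > 0,
\]

which converges to the exact minimum as β → ∞ and has non-zero partial derivatives with respect to every argument z_j.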

Our approach for training hybrid morphological/linear neural networks differs greatly from all aforementioned approaches since it is based on Huang et al.’s extreme learning machine. According to Huang, Zhu, and Siew (2006), training a network using extreme learning machine (ELM) (Huang et al., 2006) is computationally inexpensive compared to evolutionary optimization (Simon, 2013) and classical neural network training algorithms and generally leads to good generalization performance without requiring some form of regularization in order to avoid “overfitting” (Bishop, 1995, Bishop, 2006, Ripley, 1996). The hybrid morphological/linear perceptron (HMLP) introduced in this paper is a feedforward neural network model that includes hidden morphological units taken from previous MP models. The outputs of these morphological units and of traditional semi-linear neurons are then combined using linear aggregation functions in the output neurons. Therefore, an ELM-based approach can effectively be employed in order to learn the weights of these linear combinations. In this paper, we apply our HMLP approach to a number of classification problems from the literature and compare the resulting classification rates with those of several related models.
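The following NumPy sketch conveys the general idea of the model and its training; it is a minimal illustration under assumed conventions for the morphological units and the semi-linear neurons, not a reproduction of the HMLP-EL code of Campiotti (2019). All hidden-layer parameters are drawn at random, and only the linear output weights are fitted, in closed form, by least squares.

    import numpy as np

    rng = np.random.default_rng(0)

    def hidden_layer(X, V, W, A, b):
        # Morphological unit k (assumed form): min_j (x_j - V[k, j]) and min_j (W[k, j] - x_j),
        # combined by taking their minimum (infimum of an erosion and an anti-dilation).
        morph = np.minimum((X[:, None, :] - V).min(axis=2),
                           (W - X[:, None, :]).min(axis=2))      # shape (N, n_morph)
        # Classical semi-linear (sigmoid) neurons.
        linear = 1.0 / (1.0 + np.exp(-(X @ A.T + b)))            # shape (N, n_lin)
        return np.hstack([morph, linear])

    def train_hmlp_elm(X, T, n_morph=20, n_lin=20):
        n = X.shape[1]
        V = rng.uniform(X.min(), X.max(), size=(n_morph, n))     # random lower corners
        W = V + rng.uniform(0.1, 1.0, size=(n_morph, n))         # random upper corners
        A = rng.normal(size=(n_lin, n))                          # random linear weights
        b = rng.normal(size=n_lin)                               # random biases
        H = hidden_layer(X, V, W, A, b)
        beta = np.linalg.pinv(H) @ T                             # least-squares output weights
        return V, W, A, b, beta

    def predict(X, params):
        V, W, A, b, beta = params
        return hidden_layer(X, V, W, A, b) @ beta

For a classification problem, T would contain one-hot encoded class targets and the predicted class of a pattern is obtained as the argmax over the corresponding row of the output.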

The paper is organized as follows: Section 2 reviews some relevant concepts of lattice theory and mathematical morphology, including a few pertinent comments on gray-scale mathematical morphology. After discussing morphological perceptrons and their relationship with some related models from the literature, as well as providing some motivation, we introduce a new hybrid morphological/linear perceptron (HMLP) in Section 3. Then we present an extreme learning machine approach for HMLP training in Section 4 and compare the classification performance achieved by our model with that of related models from the recent literature in Section 5. We finish the paper with some concluding remarks.

Section snippets

Some relevant concepts of lattice theory and mathematical morphology

Lattice-theoretical concepts have played an important role in mathematical morphology ever since complete lattices were established as an appropriate mathematical framework for MM (Heijmans, 1994, Heijmans and Ronse, 1990, Ronse, 1990, Serra, 1988). Therefore, MNNs can be viewed as a lattice computing approach towards computational intelligence (Kaburlasos & Kehagias, 2014). Let us proceed by reviewing some basic concepts.

A partially ordered set or poset is a non-empty set L together with a binary relation ≤ that is reflexive, antisymmetric, and transitive.
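For later reference, a complete lattice is a poset in which every subset has an infimum and a supremum, and the four elementary operators between complete lattices are characterized, following Banon and Barrera (1993), by how they treat arbitrary infima and suprema:

\[
\varepsilon\Bigl(\bigwedge X\Bigr) = \bigwedge \varepsilon(X), \qquad
\delta\Bigl(\bigvee X\Bigr) = \bigvee \delta(X), \qquad
\bar{\varepsilon}\Bigl(\bigwedge X\Bigr) = \bigvee \bar{\varepsilon}(X), \qquad
\bar{\delta}\Bigl(\bigvee X\Bigr) = \bigwedge \bar{\delta}(X),
\]

for every subset X of the domain lattice, where ε, δ, ε̄, and δ̄ denote an erosion, a dilation, an anti-erosion, and an anti-dilation, respectively.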

From morphological to hybrid morphological/linear perceptrons: Some background and motivation

Note that the previous section refers to computations in R_{±∞}^n, where R_{±∞} = R ∪ {−∞, +∞}, since this set, together with the product partial order and addition, represents a complete lattice-ordered group extension (Sussner & Esmi, 2011), which is a suitable mathematical structure in order to establish links with mathematical morphology and minimax algebra (Cuninghame-Green, 1979, Heijmans, 1994). This said, both the weight vectors and the input patterns of morphological and hybrid morphological/linear neural networks have

Training hybrid morphological/linear perceptrons using extreme learning machine

The technical term “extreme learning machine”, coined by Huang et al. (2006), essentially refers to a class of feedforward computational models whose first layer of weights can be randomly selected and whose second layer of weights can be determined as the global minimum of a certain objective function. Notwithstanding some controversies regarding the novelty and the naming of the method, ELM has proven to be very useful for training single and multiple hidden layer feedforward neural networks.
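For a single hidden layer, the second step amounts to a linear least-squares problem: if H denotes the matrix of hidden-layer responses on the training patterns and T the matrix of targets, then the output weights are obtained in closed form via the Moore-Penrose pseudoinverse,

\[
\boldsymbol{\beta} = H^{\dagger}\, T \in \operatorname*{arg\,min}_{\boldsymbol{\beta}} \;\lVert H\boldsymbol{\beta} - T\rVert_F^2 .
\]

Among all least-squares solutions, H†T is the one of smallest Frobenius norm; small output weights are commonly linked to good generalization (cf. Bartlett, 1998).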

Experimental results in classification

The purpose of this section is to compare the classification performance of an HMLP-EL, trained using the strategies described in the previous section, with that of similar models from the recent literature, namely:

  1. Dendritic morphological neural network trained by stochastic gradient descent (DMNN) (Zamora & Sossa, 2017);

  2. Dilation/erosion linear perceptron (DELP) (Araújo, Oliveira, & Meira, 2012);

  3. Morphological/linear neural network (MLNN) (Hernández et al., 2018);

  4. Morphological perceptron

Concluding remarks

In this paper, we introduced an artificial neural network model called hybrid morphological/linear perceptron that is equipped with classical neurons as well as morphological units. Each of these morphological units or components computes the infimum of two elementary morphological operators, namely an erosion ε_v and an anti-dilation δ̄_w. Computational units of this type were already employed in previous models of morphological perceptrons that were trained using constructive algorithms (

Acknowledgments

This work was supported in part by CNPq under Grant No. 313145/2017-2 and by FAPESP under Grant Nos. 2018/13657-1 and 2017/10224-4. We would like to thank the anonymous reviewers, whose comments and suggestions helped to improve the paper.

References (57)

  • Pessoa, L. F. C., et al. Neural networks with hybrid morphological/rank/linear nodes: a unifying framework with applications to handwritten character recognition. Pattern Recognition (2000).

  • Ritter, G. X., et al. Morphological bidirectional associative memories. Neural Networks (1999).

  • Ronse, C. Why mathematical morphology needs complete lattices. Signal Processing (1990).

  • Schmidhuber, J. Deep learning in neural networks. Neural Networks (2015).

  • Sossa, H., et al. Efficient training for dendrite morphological neural networks. Neurocomputing (2014).

  • Sussner, P. Lattice fuzzy transforms from the perspective of mathematical morphology. Fuzzy Sets and Systems (2016).

  • Sussner, P., et al. Morphological perceptrons with competitive learning: lattice-theoretical framework and constructive learning algorithm. Information Sciences (2011).

  • Zamora, E., et al. Dendrite morphological neurons trained by stochastic gradient descent. Neurocomputing (2017).

  • Araújo, R. A. A morphological perceptron with gradient-based learning for Brazilian stock market forecasting. Neural Networks (2012).

  • Araújo, R. A., et al. A dilation-erosion-linear perceptron for Bovespa index prediction.

  • Araújo, R. A., & Sussner, P. An increasing hybrid morphological-linear perceptron with pseudo-gradient-based... (2010).

  • Banon, G. J. F., et al. Minimal representations for translation-invariant set mappings by mathematical morphology. SIAM Journal on Applied Mathematics (1991).

  • Bartlett, P. L. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory (1998).

  • Bergstra, J., et al. Random search for hyper-parameter optimization. Journal of Machine Learning Research (2012).

  • Birkhoff, G. Lattice theory (1993).

  • Bishop, C. M. Neural networks for pattern recognition (1995).

  • Bishop, C. M. Pattern recognition and machine learning (information science and statistics) (2006).

  • Campiotti, I. HMLP-EL (2019).