1 Introduction

Network analysis consists of numerous tasks including community detection (Fortunato 2010), role discovery (Rossi and Ahmed 2015), link prediction (Liben-Nowell and Kleinberg 2007), etc. Since the relations between nodes violate the i.i.d. assumption, it is non-trivial to apply traditional data mining techniques to networks directly. Network embedding (NE) fills this gap by mapping the nodes of a network into a low-dimensional space according to their structural information in the network. It has been reported that using embedded node representations can achieve promising performance on many network analysis tasks (Cao et al. 2015; Grover and Leskovec 2016; Perozzi et al. 2014; Ribeiro et al. 2017).

Previous NE techniques mainly relied on eigendecomposition (Shaw and Jebara 2009; Tenenbaum et al. 2000), but the high computational complexity of eigendecomposition makes it difficult to apply to real-world networks. With the fast development of neural network techniques, unsupervised embedding algorithms have been widely used in natural language processing (NLP), where words or phrases from the vocabulary are mapped to vectors in the learned embedding space, e.g., word2vec (Mikolov et al. 2013a, b) and GloVe (Pennington et al. 2014). By drawing an analogy between paths consisting of several nodes in a network and word sequences in text, DeepWalk (Perozzi et al. 2014) learns node representations based on random walks using the same mechanism as word2vec. Afterwards, a sequence of studies was conducted to improve DeepWalk, either by extending the definition of neighborhood to higher-order proximity (Cao et al. 2015; Grover and Leskovec 2016; Perozzi et al. 2016; Tang et al. 2015b) or by incorporating more information into node representations, such as attributes (Li et al. 2017; Wang et al. 2017) and heterogeneity (Chang et al. 2015; Tang et al. 2015a).

Although a variety of NE methods have been proposed, two major limitations exist in previous NE studies: role preservation and uncertainty modeling. Previous methods focused on at most one of these two limitations while neglecting the other. In particular, for role preservation, most studies applied random walks to learn representations. However, random walk based embedding strategies and their higher-order extensions can only capture local structural information, i.e., first-order and higher-order proximity within the neighborhood of the target node (Lyu et al. 2017). Local structural information is reflected in the community structures of networks, but these methods may fail to capture global structural information, i.e., structural roles (Rossi and Ahmed 2015; Pei et al. 2018). Global structural information represents the roles of nodes in networks, where two nodes have the same role if they are structurally similar from a global perspective. An example of global structural information (roles) and local structural information (communities) is shown in Fig. 1. In summary, nodes that belong to the same community require dense local connections, while nodes that have the same role may have no common neighbors at all (Tu et al. 2018). Empirical evidence based on this example illustrating this limitation will be shown in Sect. 5.2. For uncertainty modeling, most previous methods represent a node as a point vector in the learned embedding space. However, real-world networks may be noisy and imbalanced. For example, node degree distributions in real-world networks are often skewed, and some low-degree nodes may contain less discriminative information (Tu et al. 2018). Point vector representations learned by these methods are deterministic (Dos Santos et al. 2016) and are not capable of modeling the uncertainties of node representations.

Fig. 1 An example of ten nodes belonging to (1) three groups (different colors indicate different groups) based on global structural information, i.e., the structural roles, and (2) two groups (shown by the dashed ellipses) based on local structural information, i.e., the communities. For example, nodes 0, 1, 4, 5 and 8 belong to the same group, Community 1, from the local structural perspective because they have more internal connections. Nodes 0 and 2 are far from each other, but they are in the same group from the global structural perspective (Color figure online)

There are a few studies trying to address these limitations in the literature. For instance, struc2vec (Ribeiro et al. 2017) builds a hierarchy to measure similarity at different scales and constructs a multilayer graph to encode the structural similarities. SNS (Lyu et al. 2017) discovers graphlets in a pre-processing step to obtain structurally similar nodes. DRNE (Tu et al. 2018) learns network embeddings by modeling regular equivalence (Wasserman and Faust 1994). However, these studies only aim to solve the problem of role preservation to some extent, so the limitation of uncertainty modeling remains a challenge. Dos Santos et al. (2016) and Bojchevski and Günnemann (2017) put effort into improving classification tasks by embedding nodes into Gaussian distributions, but both methods only capture the neighborhood information based on random walk techniques. DVNE (Zhu et al. 2018) learns Gaussian embeddings for nodes in the Wasserstein space as the latent representations to capture the uncertainties of nodes, but it focuses only on the first- and second-order proximity of networks, as in previous methods. Therefore, the problem of role preservation has not been solved in these studies.

In this paper, we propose struc2gauss, a new structural role preserving network embedding framework. struc2gauss learns node representations in the space of Gaussian distributions and performs NE based on global structural information, so it can address both limitations simultaneously. On the one hand, struc2gauss generates node context based on a global structural similarity measure to learn node representations, so that global structural information can be taken into consideration. On the other hand, struc2gauss learns node representations via Gaussian embedding, and each node is represented as a Gaussian distribution where the mean indicates the position of this node in the embedding space and the covariance represents its uncertainty. Furthermore, we analyze and compare two different energy functions for Gaussian embedding to calculate the closeness of two embedded Gaussian distributions, i.e., expected likelihood and KL divergence. To investigate the influence of structural information, we also compare the structural similarity measure used in struc2gauss to two other structural similarity measures for networks, i.e., MatchSim and SimRank.

We summarize the contributions of this paper as follows:

  • We propose a flexible structure preserving network embedding framework, struc2gauss, which learns node representations in the space of Gaussian distributions. struc2gauss is capable of preserving structural roles and modeling uncertainties.

  • We investigate the influence of different energy functions in Gaussian embedding and compare to different structural similarity measures in preserving global structural information of networks.

  • We conduct extensive experiments in node clustering and classification tasks which demonstrate the effectiveness of struc2gauss in capturing the global structural role information of networks and modeling the uncertainty of learned node representations.

The rest of the paper is organized as follows. Section 2 provides an overview of the related work. We present the problem statement in Sect. 3. Section 4 explains the technical details of struc2gauss. In Sect. 5 we then discuss our experimental study. The possible extensions of struc2gauss are discussed in Sect. 6. Finally, in Sect. 7 we draw conclusions and outline directions for future work.

2 Related work

2.1 Network embedding

Network embedding methods map nodes in a network into a low-dimensional space according to their structural information in the network. The learned node representations can boost performance in many network analysis tasks, e.g., community detection and link prediction. Previous methods mainly viewed NE as part of dimensionality reduction techniques (Goyal and Ferrara 2018). They first construct a pairwise similarity graph based on neighborhood and then embed the nodes of the graph into a lower dimensional vector space. Locally Linear Embedding (LLE) (Tenenbaum et al. 2000) and Laplacian Eigenmaps (Belkin and Niyogi 2001) are two representative methods in this category. SPE (Shaw and Jebara 2009) learns a low-rank kernel matrix to capture the structure of the input graph via a set of linear inequalities as constraints. However, the high computational complexity makes these methods difficult to apply to real-world networks.

With increasing attention attracted by neural network research, unsupervised neural network techniques have opened up a new world for embedding. word2vec with its Skip-Gram and CBOW models (Mikolov et al. 2013a, b) learns low-rank representations of words in text based on word context and shows promising results on different NLP tasks. Based on word2vec, DeepWalk (Perozzi et al. 2014) first introduced this embedding mechanism to networks by treating nodes as words and random walks as sentences. Afterwards, a sequence of studies was conducted to improve DeepWalk, either by extending the definition of neighborhood to higher-order proximity (Cao et al. 2015; Grover and Leskovec 2016; Perozzi et al. 2016; Tang et al. 2015b) or by incorporating more information into node representations, such as attributes (Li et al. 2017; Wang et al. 2017) and heterogeneity (Chang et al. 2015; Tang et al. 2015a). Recently, deeper neural networks have also been introduced to the NE problem to capture the non-linear characteristics of networks, such as SDNE (Wang et al. 2016). However, these approaches represent a node as a point vector in the learned embedding space and are not capable of modeling the uncertainties of node representations. To solve this problem, inspired by Vilnis and McCallum (2014), Gaussian embedding has been used in NE. Bojchevski and Günnemann (2017) learn node embeddings by leveraging Gaussian embedding to capture uncertainties. Dos Santos et al. (2016) combine Gaussian embedding and a classification loss function for multi-label network classification. DVNE (Zhu et al. 2018) learns a Gaussian embedding for each node in the Wasserstein space as the latent representation so that the uncertainties can be modeled. We refer the reader to Hamilton et al. (2017b), Cui et al. (2018) and Cai et al. (2018) for more details.

Recent years have witnessed increasing interest in neural networks on graphs. Graph neural networks (Scarselli et al. 2008) can also learn node representations but use more complicated operations such as convolution. Kipf and Welling (2016) propose a GCN model using an efficient layer-wise propagation rule based on a first-order approximation of spectral convolutions on graphs. Gilmer et al. (2017) introduce a general message passing neural network framework to interpret different previous neural models for graphs. GraphSAGE (Hamilton et al. 2017a) learns node representations in an inductive manner by sampling a fixed-size neighborhood of each node and then applying a specific aggregator over it. Embedding Propagation (EP) (Duran and Niepert 2017) learns representations of graphs by passing messages forward and backward in an unsupervised setting. Graph Attention Networks (GATs) (Velickovic et al. 2017) extend graph convolutions by utilizing masked self-attention layers to assign different importances to different nodes with different sized neighborhoods.

However, most NE methods as well as graph neural networks only concern the local structural information represented by paths consisting of linked nodes, i.e., the community structures of networks, but fail to capture global structural information, i.e., structural roles. SNS (Lyu et al. 2017), struc2vec (Ribeiro et al. 2017) and DRNE (Tu et al. 2018) are exceptions which take global structural information into consideration. SNS uses graphlet information for structural similarity calculation as a pre-processing step. struc2vec applies dynamic time warping to measure the similarity between two nodes’ degree sequences and builds a new multilayer graph based on the similarity; then a mechanism similar to DeepWalk is used to learn node representations. DRNE explicitly models regular equivalence, which is one way to define the structural role, and leverages a layer normalized LSTM (Ba et al. 2016) to learn the representations of nodes. Another related work focusing on global structural information is REGAL (Heimann et al. 2018). REGAL aims at matching nodes across different graphs, so the global structural patterns have to be considered. However, its target is network alignment rather than representation learning. A brief summary of these NE methods is listed in Table 1.

2.2 Structural similarity

Structure based network analysis tasks can be categorized into two types: structural similarity calculation and network clustering.

Calculating structural similarities between nodes has been a hot topic in recent years, and different methods have been proposed. SimRank (Jeh and Widom 2002) is one of the most representative notions for calculating structural similarity. It implements a recursive definition of node similarity based on the assumption that two objects are similar if they relate to similar objects. SimRank++ (Antonellis et al. 2008) adds an evidence weight which partially compensates for the neighbor matching cardinality problem. P-Rank (Zhao et al. 2009) extends SimRank by jointly encoding both in- and out-link relationships into the structural similarity computation. MatchSim (Lin et al. 2009) uses the maximal matching of neighbors to calculate the structural similarity. RoleSim (Jin et al. 2011) is the only similarity measure which satisfies the automorphic equivalence properties.

Table 1 A brief summary of different NE methods

Network clustering can be based on either global or local structural information. Graph clustering based on global structural information is the problem of role discovery (Rossi and Ahmed 2015). In social science research, roles are represented as concepts of equivalence (Wasserman and Faust 1994). Graph-based methods and feature-based methods have been proposed for this task. Graph-based methods take nodes and edges as input and directly partition nodes into groups based on their structural patterns. For example, the Mixed Membership Stochastic Blockmodel (Airoldi et al. 2008) infers the role distribution of each node using a Bayesian generative model. Feature-based methods first transform the original network into feature vectors and then use clustering methods to group nodes. For example, RolX (Henderson et al. 2012) employs ReFeX (Henderson et al. 2011) to extract features of networks and then uses non-negative matrix factorization to cluster nodes. Clustering based on local structural information corresponds to the problem of community detection (Fortunato 2010). A community is a group of nodes that interact with each other more frequently than with those outside the group; thus, it captures only local connections between nodes.

3 Problem statement

We illustrated the local community structure and the global role structure in Sect. 1 using the example in Fig. 1. In this section, we present the definitions of community and role, and then formally define the problem of structural role preserving network embedding.

The concept of structural role originates from social science and is used to describe nodes in a network from a global perspective. Formally,

Definition 1

(Structural role) In a network, a set of nodes have the same role if they share similar structural properties (such as degree, clustering coefficient, and betweenness) and structural roles can often be associated with various functions in a network.

For example, hub nodes with high degree in a social network are more likely to be opinion leaders, whereas bridge nodes with high betweenness are gatekeepers connecting different groups. Structural roles reflect the global structural information because two nodes which have the same role could be far from each other and have no direct links or shared neighbors. In contrast to roles, community structures focus on local connections between nodes.

Definition 2

(Community structure) In a network, communities can represent the local structures of nodes, i.e., the organization of nodes in communities, with many edges joining nodes of the same community and comparatively few edges joining nodes of different communities (Fortunato 2010). A community is a set of nodes where nodes in this set are densely connected internally.

It can be seen that the focus of community structure is on internal and local connections, so it aims to capture the local structural information of networks.

In this study, we only consider the global structural information, i.e., structural role information, so unless mentioned otherwise, structural information indicates the global one, and the phrases “structural role information” and “global structural information” are used interchangeably.

Definition 3

(Structural Role Preserving Network Embedding) Given a network \(G = (V, E)\), where V is a set of nodes and E is a set of edges between the nodes, the problem of Structural Role Preserving Network Embedding aims to represent each node \(v\in V\) as a Gaussian distribution with mean \(\mu \) and covariance \(\varSigma \) in a low-dimensional space \({\mathbb {R}}^d\), i.e., learning a function

$$\begin{aligned} f: V\rightarrow {\mathcal {N}}(x;\mu ,\varSigma ), \end{aligned}$$

where \(\mu \in {\mathbb {R}}^d\) is the mean, \(\varSigma \in {\mathbb {R}}^{d\times d}\) is the covariance and \(d\ll |V|\). In the space \({\mathbb {R}}^d\), the global structural role information of nodes introduced in Definition 1 can be preserved, i.e., if two nodes have the same role their means should be similar, and the uncertainty of node representations can be captured, i.e., the values of variances indicate the levels of uncertainties of learned representations.

4 struc2gauss

An overview of our proposed struc2gauss framework is shown in Fig. 2. Given a network, a similarity measure is employed to calculate the similarity matrix; then the training set, which consists of positive and negative pairs, is sampled based on the similarity matrix. Finally, Gaussian embedding techniques are applied to the training set and generate the embedded Gaussian distributions as the node representations together with the uncertainties of the representations. We also analyze the computational complexity and the flexibility of our struc2gauss framework.

Fig. 2 Overview of the struc2gauss framework. struc2gauss consists of three components: similarity calculation, training set sampling and Gaussian embedding

4.1 Structural similarity calculation

It has been theoretically proved that random walk sampling based NE methods are not capable of capturing structural equivalence (Lyu et al. 2017), which is one way to model the structural roles in networks (Wasserman and Faust 1994). Thus, to capture the global structural information, we calculate the pairwise structural similarity as a pre-processing step, similar to Lyu et al. (2017) and Ribeiro et al. (2017).

In the literature, a variety of structural similarity measures have been proposed to calculate node similarity based on the structures of networks, e.g., SimRank (Jeh and Widom 2002), MatchSim (Lin et al. 2009) and RoleSim (Jin et al. 2011, 2014). However, not all of these measures can capture the global structural role information; we will show empirical evidence for this in the experiments in Sect. 5. Therefore, in this paper we leverage RoleSim for the structural similarity, since it satisfies all the requirements of the Axiomatic Role Similarity Properties for modeling equivalence (Jin et al. 2011), i.e., the structural roles. RoleSim also generalizes the Jaccard coefficient and corresponds linearly to the maximal weighted matching. The RoleSim similarity between two nodes u and v is defined as:

$$\begin{aligned}&RoleSim(u,v)=(1-\beta )\max _{M(u,v)}\frac{\sum _{(x,y)\in M(u,v)}RoleSim(x,y)}{|N(u)|+|N(v)|-|M(u,v)|}+\beta \end{aligned}$$
(1)

where |N(u)| and |N(v)| are the numbers of neighbors of nodes u and v, respectively. \(M(u, v)\) is a matching between N(u) and N(v), i.e., \(M(u, v)\subseteq N(u)\times N(v)\) is a bijection between N(u) and N(v). The parameter \(\beta \) is a decay factor where \(0< \beta < 1\). The intuition of RoleSim is that two nodes are structurally similar if their corresponding neighbors are also structurally similar. This intuition is consistent with the notion of automorphic and regular equivalence (Wasserman and Faust 1994).

In practice, RoleSim values can be computed iteratively and are guaranteed to converge. The procedure of computing RoleSim consists of three steps:

  • Step 1: Initialize the matrix of RoleSim scores \(R^0\);

  • Step 2: Compute the kth iteration scores \(R^k\) from the \((k-1)\)th iteration’s values \(R^{k-1}\) using:

    $$\begin{aligned}&R^{k}(u,v)=(1-\beta )\max _{M(u,v)}\frac{\sum _{(x,y)\in M(u,v)}R^{k-1}(x,y)}{|N(u)|+|N(v)|-|M(u,v)|}+\beta \end{aligned}$$
    (2)
  • Step 3: Repeat Step 2 until R values converge for each pair of nodes.

Note that strategies other than structural similarity can be used to capture the global structural role information; these possible strategies will be discussed in Sect. 6. The advantage of RoleSim in capturing structural roles over other structural measures will be discussed empirically in Sect. 5.6.

4.2 Training set sampling

The target of structural role preserving network embedding is to map nodes in the network to a latent space where the learned latent representations of two nodes are (1) more similar if these two nodes are structurally similar, and (2) more dissimilar if these two nodes are not structurally similar. Hence, we need to generate structurally similar and dissimilar node pairs as the training set based on the similarity we learned in Sect. 4.1. We name the structurally similar pairs of nodes the positive set and the structurally dissimilar pairs the negative set.

In detail, for node v, we rank its similarity values towards other nodes and then select the top-k most similar nodes \(u_i,i=1,\ldots ,k\) to form its positive set \(\varGamma _{+}=\{(v,u_i)|i=1,\ldots ,k\}\). For the negative set, we randomly select the same number of nodes \(\{u'_i,i=1,\ldots ,k\}\), following Vilnis and McCallum (2014) and other random walk sampling based methods (Grover and Leskovec 2016; Tang et al. 2015b; Perozzi et al. 2014), i.e., \(\varGamma _{-}=\{(v,u'_i)|i=1,\ldots ,k\}\). Therefore, k is a parameter indicating the number of positive/negative nodes per node. We generate r positive and negative sets for each node, where r is a parameter indicating the number of samples per node. The influence of these parameters will be analyzed empirically in Sect. 5.7. Note that the selection of the positive set is similar to that in DeepWalk; the difference is that we follow the similarity ranking to select the positive nodes instead of random walks. A sketch of this sampling step is shown below.

4.3 Gaussian embedding

4.3.1 Overview

Recently, language modeling techniques such as word2vec have been extensively used to learn word representations, and almost all NE studies are based on these word embedding techniques. However, these NE studies map each entity to a fixed point vector in a low-dimensional space, so the uncertainties of the learned embeddings are ignored. Gaussian embedding aims to solve this problem by learning density-based distributed embeddings in the space of Gaussian distributions (Vilnis and McCallum 2014). Gaussian embedding has been utilized in different graph mining tasks including triplet classification on knowledge graphs (He et al. 2015), multi-label classification on heterogeneous graphs (Dos Santos et al. 2016), and link prediction and node classification on attributed graphs (Bojchevski and Günnemann 2017).

Gaussian embedding trains with a ranking-based loss based on the ranks of positive and negative samples. Following Vilnis and McCallum (2014), we choose the max-margin ranking objective, which pushes the scores of positive pairs above those of negatives by a margin, defined as:

$$\begin{aligned} {\mathcal {L}}=\sum _{(v,u)\in \varGamma _{+}}\sum _{(v',u')\in \varGamma _{-}}\max (0, m-{\mathcal {E}}(z_v, z_u)+{\mathcal {E}}(z_{v'}, z_{u'})) \end{aligned}$$
(3)

where \(\varGamma _{+}\) and \(\varGamma _{-}\) are the positive and negative pairs, respectively. \({\mathcal {E}}(\cdot ,\cdot )\) is the energy function used to measure the similarity of two distributions, \(z_v\) and \(z_u\) are the learned Gaussian distributions for nodes v and u, and m is the margin separating positive and negative pairs. In this paper, we present two different energy functions to measure the similarity of two distributions for node representation learning, i.e., expected likelihood and KL divergence based energy functions. For the learned Gaussian distribution \(z_i\sim {\mathcal {N}}(\mu _i,\varSigma _i)\) for node i, to reduce the computational complexity we restrict the covariance matrix \(\varSigma _i\) to be diagonal or spherical in this work.

4.3.2 Expected likelihood based energy

While the dot product of the means could be used to compare two distributions, it only considers the means and does not incorporate the covariances. Thus, we use the inner product between the distributions themselves. Formally, the integral of the inner product between two Gaussian distributions \(z_i\) and \(z_j\) (the learned Gaussian embeddings for nodes i and j, respectively), a.k.a. the expected likelihood, is defined as:

$$\begin{aligned} E(z_i,z_j)&=\int _{x\in {\mathbb {R}}^d}{\mathcal {N}}(x;\mu _i,\varSigma _i){\mathcal {N}}(x;\mu _j,\varSigma _j)dx={\mathcal {N}}(0;\mu _i-\mu _j,\varSigma _i+\varSigma _j). \end{aligned}$$
(4)

For simplicity in computation and comparison, we use the logarithm of Eq. (4) as the final energy function:

$$\begin{aligned} {\mathcal {E}}_{EL}(z_i,z_j)&=\log E(z_i,z_j)=\log {\mathcal {N}}(0;\mu _i-\mu _j,\varSigma _i+\varSigma _j)\nonumber \\&=-\frac{1}{2}\Big \{(\mu _i-\mu _j)^T(\varSigma _i+\varSigma _j)^{-1}(\mu _i-\mu _j)+\log \det (\varSigma _i+\varSigma _j)+d\log (2\pi )\Big \} \end{aligned}$$
(5)

where d is the number of dimensions. The gradient of this energy function with respect to the means \(\mu \) and covariances \(\varSigma \) can be calculated in a closed form as:

$$\begin{aligned} \frac{\partial {\mathcal {E}}_{EL}(z_i,z_j)}{\partial \mu _i}&=-\frac{\partial {\mathcal {E}}_{EL}(z_i,z_j)}{\partial \mu _j}=-\varDelta _{ij}\nonumber \\ \frac{\partial {\mathcal {E}}_{EL}(z_i,z_j)}{\partial \varSigma _i}&=\frac{\partial {\mathcal {E}}_{EL}(z_i,z_j)}{\partial \varSigma _j}=\frac{1}{2}(\varDelta _{ij}\varDelta _{ij}^T-(\varSigma _i+\varSigma _j)^{-1}), \end{aligned}$$
(6)

where \(\varDelta _{ij}=(\varSigma _i+\varSigma _j)^{-1}(\mu _i-\mu _j)\) (He et al. 2015; Vilnis and McCallum 2014). Note that expected likelihood is a symmetric similarity measure, i.e., \({\mathcal {E}}_{EL}(z_i,z_j)={\mathcal {E}}_{EL}(z_j,z_i)\).

4.3.3 KL divergence based energy

KL divergence is another straightforward way to compare two distributions, so we utilize an energy function \({\mathcal {E}}_{KL}(z_i,z_j)\) based on the KL divergence between the Gaussian distributions \(z_i\) and \(z_j\) (the learned Gaussian embeddings for nodes i and j, respectively):

$$\begin{aligned} {\mathcal {E}}_{KL}(z_i,z_j)&=D_{KL}(z_j,z_i)\nonumber \\&=\int _{x\in {\mathbb {R}}^d}{\mathcal {N}}(x;\mu _j,\varSigma _j)\log \frac{{\mathcal {N}}(x;\mu _j,\varSigma _j)}{{\mathcal {N}}(x;\mu _i,\varSigma _i)}dx\nonumber \\&=\frac{1}{2}\Big \{tr(\varSigma _i^{-1}\varSigma _j)+(\mu _i-\mu _j)^T\varSigma _i^{-1}(\mu _i-\mu _j)-\log \frac{\det (\varSigma _j)}{\det (\varSigma _i)}-d\Big \} \end{aligned}$$
(7)

where d is the number of dimensions. Since the KL divergence measures dissimilarity rather than similarity, its negation serves as the similarity energy in the loss of Eq. (3); the gradients below are accordingly those of the negated energy with respect to the means \(\mu \) and covariances \(\varSigma \):

$$\begin{aligned} \frac{\partial {\mathcal {E}}_{KL}(z_i,z_j)}{\partial \mu _i}&=-\frac{\partial {\mathcal {E}}_{KL}(z_i,z_j)}{\partial \mu _j}=-\varDelta _{ij}^{\prime }\nonumber \\ \frac{\partial {\mathcal {E}}_{KL}(z_i,z_j)}{\partial \varSigma _i}&=\frac{1}{2}\left( \varSigma _i^{-1}\varSigma _j\varSigma _i^{-1}+\varDelta _{ij}^{\prime }\varDelta _{ij}^{\prime T}-\varSigma _i^{-1}\right) \nonumber \\ \frac{\partial {\mathcal {E}}_{KL}(z_i,z_j)}{\partial \varSigma _j}&=\frac{1}{2}\left( \varSigma _j^{-1}-\varSigma _i^{-1}\right) \end{aligned}$$
(8)

where \(\varDelta _{ij}^{\prime }=\varSigma _i^{-1}(\mu _i-\mu _j)\).

Note that the KL divergence based energy is asymmetric, but we can easily extend it to a symmetric similarity measure as follows:

$$\begin{aligned} {\mathcal {E}}(z_i,z_j)=\frac{1}{2}(D_{KL}(z_i,z_j)+D_{KL}(z_j,z_i)). \end{aligned}$$
(9)

4.4 Learning

To prevent the means from growing too large and to ensure the covariances are positive definite as well as reasonably sized, we regularize the means and covariances while learning the embedding (Vilnis and McCallum 2014). Due to their different geometric characteristics, two different hard constraint strategies are used for the means and covariances, respectively. Note that we only consider diagonal and spherical covariances. In particular, we have

$$\begin{aligned}&\Vert \mu _i\Vert \le C,~\forall i \end{aligned}$$
(10)
$$\begin{aligned}&c_{min}I\prec \varSigma _i \prec c_{max}I,~\forall i. \end{aligned}$$
(11)

The constraint on the means guarantees that they are sufficiently small, and the constraint on the covariances ensures that they are positive definite and of appropriate size. For example, \(\varSigma _{ii}\leftarrow \max (c_{min},\min (c_{max},\varSigma _{ii}))\) can be used to regularize diagonal covariances.

We use AdaGrad (Duchi et al. 2011) to optimize the parameters. The learning procedure is described in Algorithm 1: the initialization phase is in lines 1 to 4, context generation in line 7, and the Gaussian embeddings are learned in lines 8 to 14.

Algorithm 1 The learning procedure of struc2gauss

4.5 Computational complexity

The complexities of the different components of struc2gauss are analyzed as follows:

  1. For the structural similarity calculation using RoleSim, the computational complexity is \(O(kn^2d)\), where n is the number of nodes, k is the number of iterations and d is the average of \(y\log y\) over all node-pair bipartite graphs in G (Jin et al. 2011), where \(y=|N(u)|\times |N(v)|\) for each pair of nodes u and v. The \(O(y\log y)\) term comes from the complexity of the fast greedy algorithm, which offers a \(\frac{1}{2}\)-approximation of the globally optimal matching.

  2. To generate the training set based on the similarity matrix, we need to sample the most similar nodes for each node, i.e., to select the k largest values from an unsorted array. Using a heap, the complexity is \(O(n\log k)\) per node.

  3. For Gaussian embedding, the operations include matrix addition, multiplication and inversion. In practice, as stated above, we only consider two types of covariance matrices, i.e., diagonal and spherical, so all these operations have a complexity of O(n).

Overall, the similarity calculation component is the bottleneck of the framework. One possible and effective way to optimize this part is to set the similarity to 0 if two nodes differ greatly in degree. The reasons are: (1) we generate the context only based on the most similar nodes; and (2) two nodes are less likely to be structurally similar if their degrees are very different. A sketch of this pruning heuristic follows.

5 Experiments

We evaluate struc2gauss on different tasks in order to understand its effectiveness in capturing structural information, its capability of modeling uncertainties of embeddings, and its stability with respect to parameters. We also study the influence of different similarity measures empirically. The source code of struc2gauss is available online.

5.1 Experimental setup

5.1.1 Datasets

We conduct experiments on two types of network datasets: networks with and without ground-truth labels, where the labels represent the global structural role information of nodes in the networks. For networks with labels, to compare to the state of the art, we use the air-traffic networks from Ribeiro et al. (2017), where the networks are undirected, nodes are airports, edges indicate the existence of commercial flights, and labels correspond to the airports’ levels of activity. For networks without labels, we select five real-world networks from different domains from the Network Repository. A brief introduction to these datasets is given in Table 2. Note that the numbers of groups for the networks without labels are determined by Minimum Description Length (MDL) (Henderson et al. 2012).

Table 2 A brief introduction to data sets

5.1.2 Baselines

We compare struc2gauss with several state-of-the-art NE methods.

  • DeepWalk (Perozzi et al. 2014): It learns node representations based on random walks using the same mechanism as word2vec, drawing an analogy between paths consisting of several nodes in a network and word sequences in text. The structural information is captured by the node paths generated by the random walks.

  • node2vec (Grover and Leskovec 2016): It extends DeepWalk to learn latent representations from node paths generated by biased random walks. Two hyper-parameters p and q are used to control whether the random walk is breadth-first or depth-first; in this way, node2vec can capture the structural information in networks. Note that when \(p=q=1\), node2vec reduces to DeepWalk.

  • LINE (Tang et al. 2015b): It learns node embeddings that preserve both local and global network structures. Extending DeepWalk, LINE aims to capture both the first-order proximity, i.e., the neighbors of nodes, and the second-order proximity, i.e., the shared neighborhood structures of nodes.

  • Embedding Propagation (EP) (Duran and Niepert 2017): EP is an unsupervised learning framework for network embedding that learns vector representations of graphs by passing two types of messages between neighboring nodes. EP, as a graph neural network, is similar to graph convolutional networks (GCN) (Kipf and Welling 2016); the difference is that EP is unsupervised while GCN is designed for semi-supervised learning.

  • struc2vec (Ribeiro et al. 2017): It learns latent representations for the structural identity of nodes. Due to its high computational complexity, we use the combination of all optimizations proposed in the paper for large networks.

  • graph2gauss (Vilnis and McCallum 2014): It maps each node to a Gaussian distribution where the mean indicates the position of the node in the embedded space and the covariance denotes the uncertainty of the learned representation. Bojchevski and Günnemann (2017) and Dos Santos et al. (2016) extend the original Gaussian embedding method to the network embedding task.

  • DRNE (Tu et al. 2018): It learns node representations based on the concept of regular equivalence. DRNE utilizes a layer normalized LSTM to represent each node by aggregating the representations of its neighborhood in a recursive way, so that the global structural information can be preserved.

  • GraphWave (Donnat et al. 2018): It leverages heat wavelet diffusion patterns to learn a multidimensional structural embedding for each node based on the diffusion of a spectral graph wavelet centered at the node. Then the wavelets as distributions are used to capture structural similarity in graphs.

For all baselines, we use the implementations released by the original authors. For our framework struc2gauss, we test four variants: struc2gauss with expected likelihood and diagonal covariance (s2g_el_d), expected likelihood and spherical covariance (s2g_el_s), KL divergence and diagonal covariance (s2g_kl_d), and KL divergence and spherical covariance (s2g_kl_s). Note that we only use the means of the Gaussian distributions as the node embeddings in the role clustering and classification tasks; the covariances are used for uncertainty modeling.

Other settings, including parameters and evaluation metrics, will be discussed for each task.

5.2 Case study: visualization in 2-D space

We use the toy example shown in Fig. 1 to demonstrate the effectiveness of struc2gauss in capturing the global structural information and the failure of other state-of-the-art techniques in this task. The toy network consists of ten nodes and they can be clustered from two different perspectives:

  • from the perspective of the global role structure, they belong to three groups, i.e., \(\{0,1,2,3\}\) (yellow color), \(\{4,5,6,7\}\) (blue color) and \(\{8,9\}\) (red color) because different groups have different structural functions in this network;

  • from the perspective of the local community structure, they belong to two groups, i.e., \(\{0,1,4,5,8\}\) and \(\{2,3,6,7,9\}\), because there are denser connections/more edges inside each community than between communities.

Note that from the perspective of role discovery, these three groups of nodes can be explained to play the roles of periphery, star and bridge, respectively.

In this study, we aim to preserve the global structural information in network embedding. Figure 3 shows the node representations learned by the different methods. For the parameters shared by all methods, we use the same default settings: representation dimension: 2, number of walks per node: 20, walk length: 80, skip-gram window size: 5. For node2vec, we set \(p = 1\) and \(q = 2\). For graph2gauss and struc2gauss, the number of walks per node is 20 and the number of positive/negative nodes per node is 5. The constraint for the means C is 2, and the constraints for the covariances \(c_{min}\) and \(c_{max}\) are 0.5 and 2, respectively. From the visualization results, it can be observed that:

  • Our proposed struc2gauss outperforms all other methods. Both diagonal and spherical covariances can separate nodes based on global structural information, and struc2gauss with spherical covariances performs better than with diagonal covariances since it recognizes star and bridge nodes better.

  • Methods that aim to capture the global structural information perform better than random walk sampling based methods. For example, struc2vec solves this problem to some extent; however, there is overlap between nodes 6 and 9. It has been stated that node2vec can capture structural equivalence, but the visualization shows that it still captures local structural information, similar to DeepWalk.

  • DeepWalk, LINE and graph2gauss fail to capture the global structural information because these methods are based on random walks, which only capture the local community structures. DeepWalk is capable of capturing the local structural information, since its nodes are separated into two parts corresponding to the two communities shown in Fig. 1.

Fig. 3 Latent representations in \({\mathbb {R}}^2\) learned by a DeepWalk, b LINE, c GraRep, d node2vec, e struc2vec, f struc2gauss using KL divergence with diagonal covariance, g struc2gauss using KL divergence with spherical covariance, h struc2gauss using expected likelihood with diagonal covariance, and i struc2gauss using expected likelihood with spherical covariance

5.3 Structural role clustering

The most common network mining application based on global structural information is role discovery, which is essentially a clustering task. Thus, we consider this task to illustrate the potential of the node representations learned by struc2gauss. We use the latent representations learned by the different methods (for struc2gauss, the means of the learned Gaussian distributions) as features and K-means as the clustering algorithm.

Parameters For the baselines, we use the same settings as in the literature: representation dimension: 128, number of walks per node: 20, walk length: 80, skip-gram window size: 10. For node2vec, we set \(p = 1\) and \(q = 2\). For graph2gauss and struc2gauss, we set the constraint for the means C to 2 and the constraints for the covariances \(c_{min}\) and \(c_{max}\) to 0.5 and 2, respectively. The number of walks per node is 10, the number of positive/negative nodes per node is 120, and the representation dimension is also 128.

Evaluation metrics To quantitatively evaluate the clustering performance on labeled networks, we use Normalized Mutual Information (NMI) as the evaluation metric. NMI is obtained by dividing the mutual information by the arithmetic average of the entropies of the obtained clustering and the ground-truth clustering. It evaluates the clustering quality from an information-theoretic perspective and is defined by normalizing the mutual information between the cluster assignments and the pre-existing labeling of the classes:

$$\begin{aligned} NMI(\mathcal {C,D})=\frac{2*\mathcal {I(C,D)}}{\mathcal {H(C)+H(D)}}, \end{aligned}$$
(12)

where \({\mathcal {C}}\) is the obtained clustering and \({\mathcal {D}}\) the ground-truth clustering. The mutual information \({\mathcal {I}}({\mathcal {C}},{\mathcal {D}})\) is defined as \({\mathcal {I}}({\mathcal {C}},{\mathcal {D}})={\mathcal {H}}({\mathcal {C}})-{\mathcal {H}}({\mathcal {C}}|{\mathcal {D}})\), where \({\mathcal {H}}(\cdot )\) is the entropy.

For unlabeled networks, we use normalized goodness-of-fit as the evaluation metric. Goodness-of-fit measures how well the representation of roles and the relations among these roles fit a given network (Wasserman and Faust 1994). It assumes that the output of a role discovery method is an optimal model under which nodes belonging to the same role are perfectly structurally equivalent. In real-world social networks, nodes belonging to the same role are only approximately structurally equivalent, so the essence of goodness-of-fit indices is to measure just how approximate these structural equivalences are. If the optimal model holds, all nodes belonging to the same role are exactly structurally equivalent.

In detail, given a social network with n vertices \(V=\{v_1,v_2,\ldots ,v_n\}\) and m roles, we have the adjacency matrix \(A=\{A_{ij}\in \{0,1\}|1\le i,j\le n\}\) and the role set \(R=\{R_1,R_2,\ldots ,R_m\}\) obtained by the clustering method, where \(v_i\in R_j\) indicates that node \(v_i\) belongs to the jth role. Note that R partitions V, in the sense that each \(v\in V\) belongs to exactly one \(R_i\in R\). Then the density matrix \(\varDelta \) is defined as:

$$\begin{aligned} \varDelta _{ij}= {\left\{ \begin{array}{ll} \sum _{v_k\in R_i,v_l\in R_j}A_{kl}/(|R_i|\cdot |R_j|), &{}\quad \text {if } i\ne j \\ \sum _{v_k\in R_i,v_l\in R_j}A_{kl}/(|R_i|\cdot (|R_j|-1)), &{}\quad \text {if } i=j. \end{array}\right. } \end{aligned}$$
(13)

We also define the block matrix B based on the discovered roles. In fact, several criteria can be used to build the block matrix, including the perfect fit, zeroblock, oneblock and \(\alpha \) density criteria (Wasserman and Faust 1994). Since real social network data rarely contain perfectly structurally equivalent nodes (Faust and Wasserman 1992), the perfect fit, zeroblock and oneblock criteria would not work well on real-world data, so we use the \(\alpha \) density criterion to construct the block matrix B:

$$\begin{aligned} B_{ij}= {\left\{ \begin{array}{ll} 0, &{}\quad \text {if } \varDelta _{ij}<\alpha \\ 1, &{}\quad \text {if } \varDelta _{ij}\ge \alpha \end{array}\right. } \end{aligned}$$
(14)

where \(\alpha \) is the threshold that determines the values in the blocks. The \(\alpha \) density criterion is based on the density of edges between nodes belonging to the same role and is defined as

$$\begin{aligned} \alpha =\sum _{1\le i,j\le n}A_{ij}/(n(n-1)). \end{aligned}$$
(15)

Based on the definitions of the density matrix \(\varDelta \) and the block matrix B, the goodness-of-fit index e is defined as

$$\begin{aligned} e=\sum _{1\le i,j\le m}|B_{ij}-\varDelta _{ij}|. \end{aligned}$$
(16)
Table 3 NMI for node clustering in air-traffic networks using different NE methods
Fig. 4 Goodness-of-fit of global structure preserving embedding baselines and struc2gauss with different strategies on three real-world networks. Lower values mean better performance

To keep the evaluation metric in the range [0, 1], we normalize goodness-of-fit by dividing by \(r^2\), where r is the number of groups/roles. For more details about goodness-of-fit indices, please refer to Wasserman and Faust (1994).

Results The NMI values for node clustering on networks with labels are shown in Table 3, and the normalized goodness-of-fit values for networks without labels are shown in Fig. 4. Note that random walk and neighbor based embedding methods, including DeepWalk, LINE, node2vec, EP and graph2gauss, aim at capturing local structural information and so are incapable of preserving structural roles. Hence, for simplicity, we do not compare them to the role preserving methods on the networks without clustering labels.

From these results, some conclusions can be drawn:

  • For both types of networks, with and without clustering labels, struc2gauss outperforms all other methods on the different evaluation metrics. This indicates the effectiveness of struc2gauss in capturing the global structural information.

  • Comparing struc2gauss with diagonal and spherical covariances, it can be observed that spherical covariance achieves better performance in node clustering. This finding is similar to the word embedding results in Vilnis and McCallum (2014). A possible explanation is that spherical covariance requires the diagonal elements to be identical, which limits the representation power of the covariance matrices but in turn enhances the representation power of the learned means. Since we only use the means to represent nodes, the method with spherical covariance matrices can learn less constrained means, which leads to better performance.

  • Among the baselines, struc2vec, GraphWave and DRNE capture the structural role information to some extent, since their performance is better than that of the random walk based methods, i.e., DeepWalk and node2vec, and the neighbor-based methods, i.e., EP and graph2gauss, all of which fail to capture the global structural information for node clustering.

5.4 Structural role classification

Node classification is another widely used task for embedding evaluation. In contrast to previous studies which focused on community structures, our approach aims to preserve the global role structures. Thus, we evaluate the effectiveness of struc2gauss on the role classification task. As in the node clustering task in Sect. 5.3, we use the latent representations learned by the different methods as features. Each dataset is split into a training set and a test set (we explore the classification performance with different percentages of training data). To focus on the learned representations, we use logistic regression as the classifier.

Fig. 5 Average accuracy for structural role classification in the Europe-air network

Structural role classification is a supervised task, so ground-truth labels are required; thus we only use the two air-traffic networks for evaluation. We compare our approach to the same state-of-the-art NE algorithms used as baselines in Sect. 5.3, i.e., DeepWalk, LINE, node2vec, EP, graph2gauss, struc2vec, GraphWave and DRNE. Following Tu et al. (2018), we also compare to four centrality measures, i.e., closeness centrality, betweenness centrality, eigenvector centrality and k-core. Since the combination of these four measures performs best (Tu et al. 2018), we only report the classification performance of the combination as features in this task. For the parameters of the baselines and struc2gauss, we use the same settings as in Sect. 5.3.

The average accuracies for structural role classification in Europe-air and USA-air are shown in Figs. 5 and 6. From the results, we can observe that:

  • struc2gauss outperforms almost all other methods in both networks, the exception being DRNE in the Europe-air network. In the Europe-air network, struc2gauss with expected likelihood and spherical covariance, i.e., s2g_el_s, performs best. struc2gauss with KL divergence and spherical covariance, i.e., s2g_kl_s, achieves the second best performance, especially when the training ratio is larger than 0.7. struc2gauss with diagonal covariances, i.e., s2g_el_d and s2g_kl_d, is on par with GraphWave, DRNE and struc2vec and outperforms the other methods. In the USA-air network, struc2gauss with different settings outperforms all baselines. This indicates the effectiveness of struc2gauss in modeling the structural role information. Although the same combination of energy function and covariance form does not perform best in both networks, some variant of struc2gauss is always the best.

  • Among the baselines, only struc2vec, GraphWave and DRNE capture the structural role information, so they achieve better classification accuracy than the other baselines. DRNE performs best among these baselines since it captures regular equivalence; GraphWave and struc2vec are the second best baselines because they also aim to capture structural roles.

  • Random walk and neighbor based NE methods only capture local community structures, so they perform worse than struc2vec, GraphWave, DRNE and our proposed struc2gauss. Note that methods such as DeepWalk, LINE and node2vec, although considering first-, second- and/or higher-order proximity, are still not capable of modeling structural role information.

Fig. 6 Average accuracy for structural role classification in the USA-air network

5.5 Uncertainty modeling

Mapping a node in a network to a distribution rather than a point vector allows us to model the uncertainty of the learned representation, which is another advantage of struc2gauss. Different factors can lead to uncertainty in the data. Intuitively, the more noisy edges a node has, the less discriminative information it contains, making its embedding more uncertain. Similarly, incompleteness of the information in the network can also introduce uncertainty into the representation learning. Therefore, in this section, we study two factors: noisy information and incomplete information.

To verify these hypotheses, we conduct the following experiment using the Brazil-air and Europe-air networks; a sketch of the perturbation procedure is shown below. For noisy information, we randomly insert a certain number of edges into the network and then learn the latent representations and covariances. The average variance is used to measure the uncertainty. For the Brazil-air network, we vary the number of noisy edges from 50 to 300, and for Europe-air from 500 to 3000. For incomplete information, we randomly delete a certain number of edges from the network to make it incomplete and then learn the latent representations and covariances. Similarly, for the Brazil-air network, we vary the number of removed edges from 50 to 300, and for Europe-air from 500 to 3000. The other parameter settings are the same as in Sect. 5.3.

Fig. 7 Uncertainties of embeddings with different levels of noise

Fig. 8 Uncertainties of embeddings with different levels of incompleteness

The results are shown in Figs. 7 and 8. It can be observed that the average variance values become larger (1) as more noisy edges are added to the networks and (2) as more edges are removed from the networks. struc2gauss with different energy functions and covariance forms shows the same trend. This demonstrates that our proposed struc2gauss is able to model the uncertainties of the learned node representations. It is interesting that struc2gauss with expected likelihood and diagonal covariance (s2g_el_d) always has the lowest average variance, while struc2gauss with KL divergence and diagonal covariance (s2g_kl_d) always has the largest value. This may result from the learning mechanisms of the different energy functions when measuring the distance between two distributions. To clarify the results, we also list the NMI for the clustering task in Tables 4 and 5. Compared to the original Gaussian embedding method, we again show the effectiveness of our method in preserving structural roles and modeling uncertainties.

5.6 Influence of similarity measures

As mentioned, not all structural similarity measures can capture the global structural role information. To validate the rationale for selecting RoleSim as the similarity measure for structural role information, we investigate the influence of different similarity measures on learning node representations. Specifically, we select two other widely used structural similarity measures, i.e., SimRank (Jeh and Widom 2002) and MatchSim (Lin et al. 2009), and incorporate them by replacing RoleSim in our framework. The datasets and evaluation metrics used in this experiment are the same as in Sect. 5.3. For simplicity, we only show the results of struc2gauss using KL divergence with spherical covariance, because the different variants perform similarly in the previous experiments.

Table 4 NMI for node clustering in Brazil-air network with different numbers of noisy edges
Table 5 NMI for node clustering in Europe-air network with different numbers of noisy edges

The NMI values for networks with labels are shown in Table 6 and the goodness-of-fit values in Fig. 9. We can come to the following conclusions:

  • RoleSim outperforms the other two similarity measures on both types of networks, with and without clustering labels. This indicates that RoleSim can better capture the global structural information. The performance of MatchSim varies across networks and is similar to struc2vec; thus, it can capture the global structural information to some extent.

  • SimRank performs worse than the other similarity measures as well as struc2vec (Table 3). Considering the basic assumption of SimRank, that “two objects are similar if they relate to similar objects”, it computes the similarity via relations between nodes, so its mechanism is similar to random walk based methods, which have been shown to be incapable of capturing the global structural information (Lyu et al. 2017).

5.7 Parameter sensitivity

We consider two types of parameters in struc2gauss: (1) parameters also used in other NE methods, including the number of latent dimensions, the number of samples per node and the number of positive/negative nodes per node; and (2) parameters only used in Gaussian embedding, including the mean constraint C and the covariance constraint \(c_{max}\) (note that we fix the minimal covariance \(c_{min}\) to 0.5 for simplicity). To evaluate how changes to these parameters affect performance, we conducted the same node clustering experiment on the labeled USA-air network introduced in Sect. 5.3. In the interest of brevity, we tune one parameter at a time while fixing all others. Specifically, the number of latent dimensions varies from 10 to 200, the number of samples from 5 to 15, and the number of positive/negative nodes from 40 to 190. The mean constraint C ranges from 1 to 10, and the covariance constraint \(c_{max}\) ranges from 1 to 10.

The results of the parameter sensitivity study are shown in Figs. 10 and 11. It can be observed from Fig. 10a, b that the trends are relatively stable, i.e., the performance is insensitive to changes in the representation dimension and the number of samples. The clustering performance improves with an increasing number of positive/negative nodes, as shown in Fig. 10c. Therefore, we can conclude that struc2gauss is more stable than other methods. It has been reported that other methods, e.g., DeepWalk (Perozzi et al. 2014), LINE (Tang et al. 2015b) and node2vec (Grover and Leskovec 2016), are sensitive to many parameters: in general, more dimensions, more walks and more context achieve better performance, but it is difficult to search for the best combination of parameters in practice, and doing so may also lead to overfitting. For the Gaussian embedding specific parameters C and \(c_{max}\), both trends are stable, i.e., the selection of these constraints has little effect on the performance. Although the NMI decreases with a larger mean constraint C, the difference is not large.

Table 6 NMI for node clustering in air-traffic networks of Brazil, Europe and USA using struc2gauss with different similarity measures
Fig. 9 Goodness-of-fit of struc2gauss with different similarity measures. Lower values are better

Fig. 10 Parameter sensitivity study

Fig. 11 Parameter sensitivity in Gaussian distributions

5.8 Efficiency and effectiveness study

As discussed in Sect. 4.5, high computational complexity is one of the major issues of our method. In this experiment, we empirically study this computational issue by comparing the run-time and performance of different global structure preserving baselines and a heuristic method to accelerate the RoleSim computation. The heuristic method, named Fast struc2gauss, was introduced in Sect. 4.5: we set the similarity to 0 if two nodes have a large difference in degrees, avoiding further computation for dissimilar node pairs. For simplicity, we only test struc2gauss with KL divergence and spherical covariance. Also, we only consider embedding methods that can preserve the structural role information as baselines, i.e., GraphWave, struc2vec and DRNE.

We conduct the experiments on the larger networks without ground-truth labels, because on smaller networks the run-time differences are not significant. The run-time comparison is shown in Table 7 and the performance comparison in Table 8. Note that NA entries in these tables indicate that the corresponding methods reported a memory error and did not produce any results. To make a fair comparison, all methods were run on the same machine with 128 GB memory, and no GPU was used for DRNE. From these results, it can be observed that: (1) although the computational issue still exists, our method achieves good performance compared to state-of-the-art structural role preserving network embedding methods such as GraphWave and struc2vec; (2) although DRNE is much faster, its performance is worse than our method and the other baselines, and it is incapable of modeling uncertainties; (3) Fast struc2gauss effectively accelerates the RoleSim computation and achieves comparable performance in role clustering.

Table 7 Run-time for different structural role preserving network embedding methods
Table 8 Performance (goodness-of-fit) of different structural role preserving network embedding methods

6 Discussion

The proposed struc2gauss is a flexible framework for learning node representations. As shown in Fig. 2, different similarity measures can be incorporated into this framework; the corresponding empirical studies were presented in Sect. 5.6. Furthermore, other types of methods which model structural information can be utilized in struc2gauss as well.

To illustrate the potential to incorporate different methods, we categorize different methods for capturing structural information into three types:

  • Similarity-based methods. Similarity-based methods calculate pairwise similarity based on the structural information of a given network. Related work has been reviewed in Sect. 2.2.

  • Ranking-based methods. PageRank (Page et al. 1999) and HITS (Kleinberg 1999) are the two most representative ranking-based methods which learn structural information. PageRank has been used for NE in Ma et al. (2017).

  • Partition-based methods. This type of methods, e.g., role discovery, aims to partition nodes into disjoint or overlapping groups, e.g., REGE (Borgatti and Everett 1993) and RolX (Henderson et al. 2012).

In this paper, we focus on similarity-based methods. For ranking-based methods, we could use a fixed sliding window on the ranking list; then, given a node, the nodes within the window can be viewed as its context. In fact, this mechanism is similar to DeepWalk. For partition-based methods, we can consider the nodes in the same group as the context for each other.

7 Conclusions and future work

Two major limitations exist in previous NE studies: structure preservation and uncertainty modeling. Random-walk based NE methods fail to capture global structural information, and methods representing a node as a point vector are not capable of modeling the uncertainties of node representations.

We proposed a flexible structure preserving network embedding framework, struc2gauss, to tackle these limitations. On the one hand, struc2gauss learns node representations based on structural similarity measures so that global structural information can be taken into consideration. On the other hand, struc2gauss utilizes Gaussian embedding to represent each node as a Gaussian distribution where the mean indicates the position of this node in the embedding space and the covariance represents its uncertainty.

We experimentally compared three different structural similarity measures for networks and two different energy functions for Gaussian embedding. By conducting experiments from different perspectives, we demonstrated that struc2gauss excels in capturing global structural information compared to state-of-the-art NE techniques such as DeepWalk, node2vec and struc2vec: it outperforms the competing methods in the role discovery and structural role classification tasks on several real-world networks. It also overcomes the limitation of uncertainty modeling and is capable of capturing different levels of uncertainty. Additionally, struc2gauss is less sensitive to different parameters, which makes it more stable in practice and reduces the effort needed for parameter tuning.

In the future, we will explore faster RoleSim computation for more scalable NE methods, for example, fast methods to select the k most similar nodes for a given node. It is also a promising research direction to investigate strategies other than structural similarity for modeling global structural information in NE tasks. Other future investigations in this area include learning node representations in dynamic and temporal networks.