Introduction

Many complex processes can be viewed as dynamical systems on an underlying network structure. Modeling a system as a network with dynamics on it is a powerful approach for a wide range of real-world phenomena, where the elements are regarded as nodes and the interactions as edges (Albert and Barabási 2002; Strogatz 2001; Newman 2003). One particular interest in the field of network science is the interplay between the network topology and its dynamics (Boccaletti et al. 2006). Much attention has been paid to how collective dynamics on networks are determined by the topology of the graph. In real cases, however, only the behaviors, i.e., the time series of node states, are observed, while the network structure and the dynamical rules are not known. Thus, the inverse problem, i.e., inferring network topology and dynamical rules from the observed dynamics data, is even more significant. It may pave a new way to detect the internal structure of a system from its behaviors. Furthermore, it can help us to automatically build a dynamical model of a complex system from its observed behavior.

For example, inferring gene regulatory networks from expression data can help us to identify the major genes and reveal the functional properties of genetic networks (Gardner et al. 2003); in the study of climate change, network reconstruction may help us to reveal atmospheric teleconnection patterns and understand their underlying mechanisms (Boers et al. 2019); it also finds applications in reconstructing epidemic spreading processes in social networks, which is essential to identifying the source and preventing further spreading (Shen et al. 2014). Furthermore, if not only the network structure but also the dynamics can be learned well for such systems, surrogate models of the original problems can be obtained, on which many experiments that are hard to implement on the original systems can be carried out. Another potential application is automated machine learning (AutoML) (Feurer et al. 2015; Quanming et al. 2018). At present, the main research problem of Neural Architecture Search (NAS), a sub-area of AutoML, is to find the optimal neural network architecture in a given space with a search strategy; this is essentially a network reconstruction problem, in which the optimal neural network and the dynamical rules on it are learned from the observed training samples as time series. In a word, reconstructing networks and dynamical rules is pivotal to a wide span of applications.

A considerable number of methods have been proposed for reconstructing networks from time series data. One class of them is based on statistical inference, such as Granger causality (Quinn et al. 2011; Brovelli et al. 2004) and correlation measurements (Stuart et al. 2003; Eguiluz et al. 2005; Barzel and Barabási 2013). These methods, however, usually discover functional connectivity and may fail to reveal structural connections (Feizi et al. 2013); they implicitly require that areas which are strongly correlated in function are also directly connected in structure. Nevertheless, this requirement is seldom satisfied in many real-world systems such as the brain (Park and Friston 2013) and climate systems (Boers et al. 2019). Another class of methods was developed to reconstruct structural connections directly under certain assumptions. For example, methods such as driving response (Timme 2007) or compressed sensing (Wang et al. 2011; Wang et al. 2011; Wang et al. 2011; Shen et al. 2014) either require the functional form of the differential equations, target specific dynamics, or rely on the sparsity of the time series data. Although the model-free framework presented by Casadiego et al. (Casadiego et al. 2017) does not have these limitations, it can only be applied to dynamical systems with continuous variables so that the derivatives can be calculated. Thus, a general framework for reconstructing network topology and learning dynamics from time series data of various types, including continuous, discrete and binary dynamics, is necessary.

Recently, deep learning has achieved great success in many areas such as image classification (Krizhevsky et al. 2012) and speech recognition (Hinton et al. 2012). Can we apply this state-of-the-art technique to the network reconstruction problem? This is possible because the graph network framework (Battaglia et al. 2018) has enabled deep learning techniques to be applied to graph structures by mapping graph-structured data onto Euclidean space with update and aggregation functions (Zonghan et al. 2018). With a wealth of different avenues available, GNs can be tailored to perform various tasks, such as node or graph classification (Veličković et al. 2017; Zhang et al. 2018), graph generation (De Cao and Kipf 2018; Li et al. 2018; Bojchevski et al. 2018; You et al. 2018), and spatial-temporal forecasting (Jain et al. 2016; Li et al. 2017; Yu et al. 2017; Yan et al. 2018). Recently, the topic of recovering interactions and predicting physical dynamics under given interaction networks has attracted much attention. A widely used approach was introduced by Battaglia et al. (Battaglia et al. 2016): particles are represented as nodes and interactions as edges, and the trajectories are reconstructed by an inference process on the given graph. However, most works in this field have focused on physical reasoning tasks, while few are dedicated to the inverse problem of network science: revealing network topology from observed dynamics. Some related works (Watters et al. 2017; Guttenberg et al. 2016) attempted to infer implicit interactions of the system to help with state prediction from observations. But they did not identify the implicit interactions with the network topology of the system, so the network reconstruction task remained unaddressed. To the best of our knowledge, only the NRI (Neural Relational Inference) model (Kipf et al. 2018) addresses this goal. Nevertheless, only a few continuous dynamics such as the spring model and the Kuramoto model were studied, and discrete processes were never considered. In the rest of this article, we therefore take NRI as one of our baselines and compare it against our own model.

In this work we introduce the Gumbel Graph Network (GGN), a model-free, data-driven method that can simultaneously reconstruct network topology and perform dynamics prediction from time series data of node states. It attains high accuracy on both tasks for a variety of dynamical systems and multiple types of network topology. We first introduce our architecture, the Gumbel Graph Network, in the "GGN architecture" section, and then give a brief overview of our experiments on three typical dynamics in the "Experiments" section. In the "Results" section we present our results. Finally, some concluding remarks and discussions are given in the "Conclusion" section.

GGN architecture

Problem overview

The goal of our Gumbel Graph Network is to reconstruct the interaction graph and simulate the dynamics from the observational data of N interacting objects.

Typically, we assume that the system dynamics of interest can be described by a differential equation dX/dt=ψ(X<sup>t</sup>,A) or by a discrete-time iteration X<sup>t</sup>=ψ(X<sup>t−1</sup>,A), where \({X}^{t}=({X}_{1}^{t},...,{X}_{N}^{t})\) denotes the states of the N objects at time t, and \(X_{i}^{t}\) is the state of object i. ψ is the dynamical function, and A is the adjacency matrix of an unweighted directed graph. However, ψ and A are unknown to us and must be inferred or reconstructed from a segment of time series data X=(X<sup>t</sup>,...,X<sup>t+P</sup>), where P is the number of prediction steps.

Thus, our algorithm aims to learn the network structure (specifically, the adjacency matrix A) and the dynamical model ψ simultaneously in an unsupervised way.

Framework

The general framework of our model is shown in Fig. 1. The input of the model is the states of all nodes at time step t, and the output is their states in the following P steps. The model consists of two modules, a network generator and a dynamics learner. The job of the generator is to generate an adjacency matrix, and the learner uses the generated adjacency matrix and X<sup>t</sup> (the states of all nodes at time t) to predict X<sup>t+1</sup>,...,X<sup>t+P</sup> (the states of all nodes from time t+1 to t+P).

Fig. 1
figure 1

Basic structure of GGN. Our framework contains two main parts. First, the Adjacency Matrix is generated by the Network Generator via Gumbel-softmax sampling; then the adjacency matrix and X<sup>t</sup> (node states at time t) are fed to the Dynamics Learner to predict the node states in the following P time steps. The back-propagation process runs back through all the computations.

The Network Generator module uses the Gumbel-softmax trick (Jang et al. 2016) to generate the adjacency matrix; details are explained in the "Network generator" subsection. The goal of the Dynamics Learner is to map the states of all nodes at time t to their states at time t+1 through the generated adjacency matrix. Similar to NRI's design (Kipf et al. 2018), our dynamics learner comprises four mapping processes between nodes and edges, which can be realized with MLP, CNN or RNN modules; in this article we use MLPs. Details are further explained in the "Dynamics learner" subsection. To learn complex non-linear processes, we use a graph neural network instead of a graph convolutional network (Kipf and Welling 2016), since the latter does not capture the nonlinear coupling between nodes that some systems exhibit (for example, the Kuramoto model).

The time and space complexity are both O(N²).

Network generator

One of the difficulties in reconstructing a network from data is the discreteness of the graph, which prevents the back-propagation technique, widely used for differentiable functions, from being applied directly.

To overcome this problem, we apply the Gumbel-softmax trick (Jang et al. 2016) to generate the adjacency matrix of the network directly. This technique simulates sampling from a discrete distribution with a continuous function, such that the resulting distribution matches the true discrete one. Because the simulated sampling process is differentiable, back-propagation can be applied.

The network generator is a parameterized module that generates the adjacency matrix. Specifically, for a network of N nodes, it uses an N×N matrix of parameters to determine the N×N elements of the adjacency matrix A, with αij denoting the probability that Aij takes the value 1.

The adjacency matrix is generated as follows:

$$ A_{ij}= \frac{\exp((\log(\alpha_{ij})+\xi_{ij})/\tau)}{\exp((\log(\alpha_{ij})+\xi_{ij})/\tau)+\exp((\log(1-\alpha_{ij})+\xi^{\prime}_{ij})/\tau)}, $$
(1)

where the ξij and \(\xi^{\prime}_{ij}\) are i.i.d. random numbers following the Gumbel distribution (Nadarajah and Kotz 2004). This calculation uses a continuous function with random noise to simulate the discontinuous sampling process, and the temperature parameter τ adjusts the sharpness of the output distribution: as τ→0, Aij takes the value 1 with probability αij and 0 with probability 1−αij.

Since the αij are all trainable parameters, they can be adjusted by the back-propagation algorithm. Thanks to the Gumbel-softmax trick (Jang et al. 2016), gradient information can be back-propagated through the whole computation graph even though the underlying sampling process is non-differentiable.
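To make the sampling in Eq. (1) concrete, the following is a minimal PyTorch sketch of the network generator; the function name and the log-odds parameterization are our illustrative choices rather than the released code.

```python
import torch

def sample_adjacency(theta, tau=1.0):
    """Relaxed sampling of an adjacency matrix via the Gumbel-softmax trick (Eq. 1).

    theta : (N, N) tensor of trainable log-odds, theta_ij = log(alpha_ij / (1 - alpha_ij)).
    tau   : temperature; smaller values give sharper, more nearly binary samples.
    """
    # Two-class logits per entry ("edge present" vs "edge absent"); the shared
    # constant log(1 - alpha_ij) cancels in the softmax, so [theta, 0] is
    # equivalent to [log(alpha), log(1 - alpha)] in Eq. (1).
    logits = torch.stack([theta, torch.zeros_like(theta)], dim=-1)   # (N, N, 2)
    # i.i.d. Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1).
    gumbel = -torch.log(-torch.log(torch.rand_like(logits)))
    # Softmax over the two classes; keep the probability of the "edge" class.
    return torch.softmax((logits + gumbel) / tau, dim=-1)[..., 0]    # (N, N)

# Usage: theta is the trainable parameter of the network generator.
theta = torch.nn.Parameter(torch.zeros(10, 10))   # alpha_ij = 0.5 initially
A = sample_adjacency(theta, tau=0.5)
A.sum().backward()   # gradients reach theta even though sampling is involved
```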

Dynamics learner

Learning with graph-structured data is a hot topic in deep learning research. Recently, graph networks (GNs) (Battaglia et al. 2018) have been widely investigated and have achieved compelling performance in node classification, link prediction, etc. In general, a GN takes the graph structure A and X<sup>t</sup>, the states of all nodes at time t, as input to learn a representation of each node. Here, the graph information is the adjacency matrix constructed by the generator. The whole dynamics learner can be written as a function:

$$ {X}^{t}_{predict}=f({X}^{t-1},A) $$
(2)

where X<sup>t−1</sup> is the state vector of all N nodes at time step t−1 and A is the adjacency matrix constructed by the network generator. Similar to the work of (Kipf et al. 2018), we realize this function through four mappings operating in succession: Node to Edge, Edge to Edge, Edge to Node and Node to Node, as shown below. Details are explained in the caption of Fig. 2.

$$ H_{e1}^{t-1}=f_{v\rightarrow e}\left(X^{t-1}\otimes (X^{t-1})^{T}\right) $$
(3)
Fig. 2
figure 2

The structure of the Dynamics Learner. The Dynamics Learner takes the graph structure (here, the adjacency matrix) and the node states X as input to predict the node states at the next time step(s). Four main parts operate in succession to accomplish the whole process: Node to Edge, Edge to Edge, Edge to Node and Node to Node.

$$ H_{e2}^{t-1}=f_{e}(H_{e1}^{t-1}) $$
(4)
$$ H_{v1}^{t}=f_{e\rightarrow v}(A * H_{e2}^{t-1}) $$
(5)
$$ H_{v2}^{t}=f_{v}(H_{v1}^{t}) $$
(6)

where the H are hidden representations. The operation ⊗ is pair-wise concatenation, defined by v⊗v={⟨vi,vj⟩}N×N, which produces a matrix in which each element is a pair of node states; it is similar to the Kronecker product except that the internal multiplication is replaced by concatenation. The element-wise product * of the adjacency matrix and the result of the Edge to Edge mapping sets an element to 0 if there is no connection between the two nodes, and a reduced-sum operation then aggregates the edge information to each node. The trainable mapping functions fv→e, fe, fe→v and fv are represented by neural networks.

Finally, we introduce a skip connection as in ResNet (He et al. 2015) to improve the gradient flow through the network, which enhances the performance of the Dynamics Learner. X<sup>t</sup> denotes the node states at time t, and f<sub>output</sub> is another MLP. This step can be written as

$$ X^{t}_{predict}=f_{output}\left(\left[ X^{t-1},H_{v2}^{t} \right]\right)+X^{t-1} $$
(7)

where [·,·] denotes the concatenation operator. Note that this concatenation, as well as the skip-connection trick, is optional; we use them only in the experiments on the Kuramoto model. To make multi-step predictions, we feed the output states back in and iterate until we obtain the prediction sequence \(X_{predict}=({X}_{predict}^{1},...,{X}_{predict}^{P})\). We then back-propagate the loss between the model predictions and the ground truth.
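To make Eqs. (3)–(7) concrete, the following is a minimal PyTorch sketch of the dynamics learner with MLP mappings; the class name, layer widths and activation choices are our assumptions, not the exact released implementation.

```python
import torch
import torch.nn as nn

class DynamicsLearner(nn.Module):
    """Sketch of the dynamics learner (Eqs. 3-7) with MLP mappings."""
    def __init__(self, input_dim, hidden=256, skip=True):
        super().__init__()
        self.skip = skip
        self.node2edge = nn.Sequential(nn.Linear(2 * input_dim, hidden), nn.ReLU())  # f_{v->e}
        self.edge2edge = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())         # f_e
        self.edge2node = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())         # f_{e->v}
        self.node2node = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())         # f_v
        self.output = nn.Linear(hidden + input_dim, input_dim)                       # f_output

    def forward(self, x, A):
        # x: (batch, N, input_dim) node states at time t-1; A: (N, N) adjacency matrix.
        B, N, D = x.shape
        # Node to Edge: pairwise concatenation <x_i, x_j> for every (i, j).
        xi = x.unsqueeze(2).expand(B, N, N, D)
        xj = x.unsqueeze(1).expand(B, N, N, D)
        h_e1 = self.node2edge(torch.cat([xi, xj], dim=-1))   # Eq. (3)
        h_e2 = self.edge2edge(h_e1)                          # Eq. (4)
        # Edge to Node: mask non-edges with A, then aggregate by a reduced sum over j.
        h_v1 = self.edge2node((A.unsqueeze(0).unsqueeze(-1) * h_e2).sum(dim=2))  # Eq. (5)
        h_v2 = self.node2node(h_v1)                          # Eq. (6)
        out = self.output(torch.cat([x, h_v2], dim=-1))      # Eq. (7)
        return out + x if self.skip else out                 # optional skip connection
```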

Training

Having introduced all the components, we now describe the training process. In training, we feed the one-step state X<sup>t</sup> as input and its succeeding states (X<sup>t+1</sup>,...,X<sup>t+P</sup>) as targets.

The dynamics learner and the network generator are optimized alternately in each epoch. We first optimize the dynamics learner for Sd rounds with the network generator fixed, back-propagating the loss to the dynamics learner in each round. Then the network generator is trained with the same loss function for Sn rounds; in each round, the trained dynamics learner makes predictions with a newly generated adjacency matrix. As for the loss function, when the time series to be predicted is a discrete sequence with finitely many symbols, the cross-entropy objective is adopted; otherwise the mean square error is used. Note that instead of training the dynamics learner and the network generator simultaneously, we train them for different numbers of rounds per epoch. The main reason behind this choice is the observation that dynamics and network structure evolve at different paces in real systems: networks usually change more slowly than the dynamics, and small changes in network structure may lead to dramatic changes in node dynamics. By alternating the two training processes with different numbers of rounds per epoch, we can adjust both learning processes to appropriate paces.

In practice, Sd and Sn vary case by case. Although we chose them mainly through hyper-parameter tuning, a general observation is that the more complex the dynamics, the larger Sd is required. For example, for the Boolean Network model mentioned below, which exhibits binary dynamics, Sd is 10, while for the Kuramoto model, which is highly nonlinear, Sd needs to be around 30 to achieve a good result.
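A minimal sketch of this alternating optimization is shown below; it assumes a generator that returns a sampled adjacency matrix and a learner with the interface of Eq. (2), and the names and batching are illustrative rather than the released training script.

```python
import torch

def train_epoch(generator, learner, optim_net, optim_dyn, batches,
                loss_fn, S_d=30, S_n=5):
    """One epoch of alternating optimization (a sketch).

    batches : list of (x_t, targets) pairs, targets of shape (B, P, N, D)
    loss_fn : MSE for continuous states, cross-entropy for binary ones
    """
    for _ in range(S_d):                       # dynamics learner rounds, generator fixed
        for x_t, targets in batches:
            A = generator().detach()           # no gradient reaches the generator here
            preds, x = [], x_t
            for _ in range(targets.shape[1]):  # roll the model forward P steps
                x = learner(x, A)
                preds.append(x)
            loss = loss_fn(torch.stack(preds, dim=1), targets)
            optim_dyn.zero_grad(); loss.backward(); optim_dyn.step()

    for _ in range(S_n):                       # network generator rounds
        for x_t, targets in batches:
            A = generator()                    # gradients flow through the Gumbel sampling
            preds, x = [], x_t
            for _ in range(targets.shape[1]):
                x = learner(x, A)
                preds.append(x)
            loss = loss_fn(torch.stack(preds, dim=1), targets)
            # optim_net only holds the generator's parameters, so the learner stays fixed
            optim_net.zero_grad(); loss.backward(); optim_net.step()
```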

Experiments

An example

First, to show how GGN works and how accurate it is, we use a 10-body mass-spring interaction system as an example. Suppose that in a two-dimensional plane there are 10 masses linked to each other by springs, with a connection density of 0.2. The masses move according to the spring dynamics once the initial positions and velocities are given. We use the positions and velocities of each particle generated by the simulation to reconstruct their connections and predict their future positions.
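For concreteness, the following is a sketch of how such spring-dynamics data can be generated; the spring constant, noise scales, integration scheme and sampling interval are our assumptions rather than the exact simulation settings.

```python
import numpy as np

def simulate_springs(N=10, density=0.2, k=0.1, dt=0.001, steps=3000, seed=0):
    """Generate one trajectory of a 2D mass-spring system (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    A = (rng.random((N, N)) < density).astype(float)
    A = np.triu(A, 1); A = A + A.T                    # symmetric springs, no self-loops
    pos = rng.normal(0, 0.5, size=(N, 2))             # random initial positions
    vel = rng.normal(0, 0.5, size=(N, 2))             # random initial velocities
    traj = []
    for t in range(steps):
        # Hooke's law with zero rest length: F_i = -k * sum_j A_ij (r_i - r_j)
        diff = pos[:, None, :] - pos[None, :, :]      # (N, N, 2)
        force = -k * (A[:, :, None] * diff).sum(axis=1)
        vel = vel + dt * force                        # simple Euler integration
        pos = pos + dt * vel
        if t % 1000 == 0:                             # subsample to a 3-step observation
            traj.append(np.concatenate([pos, vel], axis=1))
    return A, np.array(traj)                          # traj shape: (3, N, 4)
```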

In this experiment, we set Sn=5 and Sd=50, and use 5k training samples, 1k validation samples and 1k test samples, with each sample containing a 3-step trajectory of each mass. Each sample starts from a different initial condition. The results show that GGN reconstructs the adjacency matrix with 97.8% accuracy and predicts the next-step positions with a fairly small error of 2.97×10⁻⁵. Figure 3 visualizes the simulated and predicted trajectories.

Fig. 3
figure 3

Results of network reconstruction and path prediction in a 10-body spring-mass system. The ground-truth and predicted trajectories are plotted with solid and dashed lines, respectively. Network reconstruction results are shown as bold solid lines with different colors.

It can be seen that the GGN model can reconstruct this many-body system in the two-dimensional plane with high accuracy and can accurately predict the node states at future times.

Experiments on simulated models

To systematically test the power of GGN, we evaluated it on three types of simulated models: the Boolean Network (Kauffman 1969), the Kuramoto model (Kuramoto 1975), and the Coupled Map Lattice (Kaneko 1992; 1989), which exhibit binary, continuous, and discrete trajectories, respectively. A schematic diagram of these systems is shown in Fig. 4. We train our model to learn the dynamics and reconstruct the interactions between the units, i.e., the adjacency matrices, in all three cases.

Fig. 4
figure 4

Time evolution of node values in the three types of simulated network models: Kuramoto (a), Coupled Map Lattice (b) and Boolean Network (c).

Furthermore, we test the performance of GGN with different parameters in three main experiments: one concerns different network sizes and different levels of chaos ("Experiments with different dynamics"); one features different types of network topology ("Reconstruction accuracy with network structure"); and one studies the relationship between data size and accuracy ("Reconstruction accuracy with data size"). Our full code implementation is available on GitHub [https://github.com/bnusss/GGN].

Boolean network

The Boolean Network is a widely studied model that is often used to model gene regulatory networks. In a Boolean Network, every node takes the value 0 or 1, and a Boolean function is assigned to each node. The function takes the states of the node's neighbors as inputs and returns a binary value determining the state of the node at the next time step.

In the simulation, we set the structure of the network as a directed graph in which every node has in-degree K; the value of K determines whether the network evolves chaotically or non-chaotically. All nodes follow the same randomly generated table of dynamical rules. The training data we generated contains 5k pairs of state-transition sequences; we also simulated a 1k validation set and a 1k test set.
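A minimal sketch of this data-generation procedure is shown below; the exact encoding of the rule table and the random seeds are illustrative assumptions.

```python
import numpy as np

def simulate_boolean_network(N=10, K=2, steps=6, seed=0):
    """One state-transition trajectory of a random Boolean network (sketch)."""
    rng = np.random.default_rng(seed)
    # Directed graph: each node receives inputs from K randomly chosen nodes.
    inputs = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])
    # One randomly generated truth table with 2^K entries, shared by all nodes.
    rule = rng.integers(0, 2, size=2 ** K)
    x = rng.integers(0, 2, size=N)                 # random initial binary state
    trajectory = [x.copy()]
    for _ in range(steps - 1):
        # Encode each node's K input states as an index into the rule table.
        idx = (x[inputs] * (2 ** np.arange(K))).sum(axis=1)
        x = rule[idx]
        trajectory.append(x.copy())
    return np.array(trajectory)                    # shape (steps, N)

print(simulate_boolean_network().shape)            # (6, 10)
```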

Kuramoto model

The Kuramoto model (Kuramoto 1975) is a nonlinear system of phase-coupled oscillators, and it is often used to describe synchronization. Specifically, we study the system

$$ \frac{d \phi_{i}}{dt} = \omega_{i} + k\sum_{j \neq i} A_{ij} \sin(\phi_{i} - \phi_{j}) $$
(8)

where the ωi are intrinsic frequencies sampled from a distribution g(ω), here uniform on [1,10); k is the coupling strength; and Aij∈{0,1} are the elements of the N×N adjacency matrix, with Aij=Aji for the undirected random networks we study. The Kuramoto network has two regimes, synchronized and non-synchronized. According to Restrepo et al. (Restrepo et al. 2005), the transition from coherence to incoherence is captured by a critical coupling strength kc=k0/λ, where k0=2/(πg(m)), m is the symmetric center of g(ω), and λ is the largest eigenvalue of the adjacency matrix. The network synchronizes if k>kc and otherwise fails to synchronize. We simulate and study both the coherent and the incoherent case.

For simulation, we solve the one-dimensional differential equation with the 4th-order Runge-Kutta method with a step size of 0.01. Our training set includes 5k samples, the validation set 1k, and the test set 1k; each sample covers dϕi/dt and sin(ϕi) over 10 time steps.
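The following sketch illustrates this simulation setup, including the critical coupling used to pick the coherent and incoherent regimes; the graph generator and the sampling interval between recorded time steps are assumptions on our part.

```python
import numpy as np
import networkx as nx

def simulate_kuramoto(A, k, T=10, dt=0.01, seed=0):
    """Integrate Eq. (8) with 4th-order Runge-Kutta and record dphi/dt and sin(phi)
    at T steps (a sketch; the real data may subsample the integration more coarsely)."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    omega = rng.uniform(1, 10, size=N)            # intrinsic frequencies from g(omega)
    phi = rng.uniform(0, 2 * np.pi, size=N)

    def dphi(phi):
        diff = phi[:, None] - phi[None, :]        # phi_i - phi_j
        return omega + k * (A * np.sin(diff)).sum(axis=1)

    vel, amp = [], []
    for _ in range(T):
        k1 = dphi(phi)
        k2 = dphi(phi + 0.5 * dt * k1)
        k3 = dphi(phi + 0.5 * dt * k2)
        k4 = dphi(phi + dt * k3)
        phi = phi + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        vel.append(dphi(phi))
        amp.append(np.sin(phi))
    return np.stack(vel), np.stack(amp)           # each of shape (T, N)

# Critical coupling k_c = k_0 / lambda_max, with k_0 = 2 / (pi * g(m));
# for g uniform on [1, 10), g(m) = 1/9 at the centre m = 5.5.
A = nx.to_numpy_array(nx.erdos_renyi_graph(10, 0.5, seed=0))
k_c = (2 / (np.pi * (1 / 9))) / np.max(np.linalg.eigvalsh(A))
vel, amp = simulate_kuramoto(A, k=1.1 * k_c)      # coherent regime
```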

Coupled map lattice

Coupled map lattices are dynamical models with discrete time and continuous state variables (Kaneko 1992; 1989); they are widely used to study the chaotic dynamics of spatially extended systems. The model was originally defined on a chain with periodic boundary conditions but can easily be extended to any type of topology:

$$ x_{t+1}(i)=(1-s)f(x_{t}(i))+\frac{s}{\text{deg}(i)}\sum_{j \in \text{neighbor}(i)} f(x_{t}(j)), $$
(9)

where s is the coupling constant and deg(i) is the degree of node i. We choose the following logistic map function:

$$ f(x) = \lambda x(1-x). $$
(10)

We simulated N∈{10,30} coupled map lattices on random regular graphs, with initial states x0(i) sampled uniformly from [0,1]. Notice that when the coupling constant is s=0, the system reduces to N independent logistic maps. The training set again includes 5k samples, with a 1k validation set and a 1k test set; each sample covers xi over 10 time steps.
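The following is a minimal sketch of this data generation under the stated settings; the random-regular-graph helper from networkx stands in for whatever graph generator was actually used.

```python
import numpy as np
import networkx as nx

def simulate_cml(A, s=0.2, lam=3.6, steps=10, seed=0):
    """Iterate the coupled map lattice of Eqs. (9)-(10) on adjacency matrix A."""
    rng = np.random.default_rng(seed)
    deg = A.sum(axis=1)                            # deg(i)
    f = lambda x: lam * x * (1 - x)                # logistic map, Eq. (10)
    x = rng.uniform(0, 1, size=A.shape[0])         # x_0(i) ~ Uniform[0, 1]
    traj = [x.copy()]
    for _ in range(steps - 1):
        fx = f(x)
        x = (1 - s) * fx + (s / deg) * (A @ fx)    # Eq. (9)
        traj.append(x.copy())
    return np.array(traj)                          # shape (steps, N)

# A random 4-regular graph with 10 nodes, s = 0.2, lambda = 3.6 (chaotic regime).
A = nx.to_numpy_array(nx.random_regular_graph(4, 10, seed=0))
print(simulate_cml(A).shape)                       # (10, 10)
```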

Results

In each experiment listed below, we set the hyper-parameters Sn and Sd of the Boolean Network model to 20 and 10, respectively, while for the Coupled Map Lattice and Kuramoto models they are 5 and 30. For the Coupled Map Lattice and Kuramoto models, the number of prediction steps P is 9, which means that the current state is used to predict the node states of the next 9 time steps, while for the Boolean Network it is set to 1. In all experiments, we set the hidden size of all MLPs in the dynamics learner module of the GGN model to 256. All presented results are mean values over five repeated experiments. A horizontal line "-" in a table indicates that the amount of data exceeds what the model can process: the model becomes so unstable that its outputs may be "nan" during training.

We compare our model with following baseline algorithms:

  • LSTM (Long Short-Term Memory network) is a well-known recurrent neural network that has been shown to be very suitable for sequence prediction problems. To perform network reconstruction with an LSTM, previous work (Kipf et al. 2018) used a thresholded correlation matrix to represent the adjacency matrix. According to our experiments, however, this method only yields all-zero or all-one matrices and therefore cannot serve as a satisfactory way of deriving adjacency matrices. Hence, we ignore the network reconstruction accuracy and report only the state prediction for the LSTM.

  • NRI (Neural Relational Inference model) can reconstruct the underlying network structure and predict the node states at future time steps simultaneously by observing the node state sequences. We compare our model against it on both tasks. We use settings similar to those in Kipf's original paper (Kipf et al. 2018): all our experiments use MLP decoders; for the Kuramoto model we use the CNN encoder, and for the other models the MLP encoder.

We use the following indicators to evaluate the results of the experiments (a short sketch of how the network-reconstruction indicators are computed follows the list):

  • TPR (true positive rate) measures the proportion of positive instances that are correctly identified. We consider a 1 in the adjacency matrix a positive element and a 0 a negative one.

  • FPR (false positive rate) computes the proportion of negative instances that are incorrectly identified in the generated adjacency matrix.

  • MSE (mean square error) measures the average of the squares of the errors, that is, the average squared difference between the estimated values and the true data. The MSE reported below is the average mean square error over the next P time steps.

  • ACC(net) is the proportion of correctly identified elements of the Adjacency Matrix.

  • ACC(dyn): in our experiments on the Boolean Network, we use ACC(dyn) to measure the proportion of nodes whose states are correctly predicted at the next time step.
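The network-reconstruction indicators above can be computed as in the following sketch (how the diagonal is handled in the actual evaluation is an assumption on our part):

```python
import numpy as np

def network_metrics(A_pred, A_true):
    """TPR, FPR and ACC(net) of a predicted binary adjacency matrix."""
    A_pred, A_true = A_pred.astype(bool), A_true.astype(bool)
    tp = np.sum(A_pred & A_true)          # predicted 1, truly 1
    fp = np.sum(A_pred & ~A_true)         # predicted 1, truly 0
    tn = np.sum(~A_pred & ~A_true)        # predicted 0, truly 0
    fn = np.sum(~A_pred & A_true)         # predicted 0, truly 1
    tpr = tp / (tp + fn)                  # true positive rate
    fpr = fp / (fp + tn)                  # false positive rate
    acc = (tp + tn) / A_true.size         # ACC(net): fraction of correct entries
    return tpr, fpr, acc
```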

Experiments with different dynamics

In these experiments, we set the network topology of the Boolean Network as a directed graph with the in-degree of each node equal to k, where k determines whether the system is chaotic. For all systems, we set k=2 for the non-chaotic cases; for systems with 10, 30 and 100 nodes, k is set to 7, 5 and 4, respectively, to obtain chaotic dynamics. As shown in Table 1, the GGN model recovers the ground-truth interaction graph with an accuracy significantly higher than the competing method, and the recovery rate in the non-chaotic regime is better than in the chaotic regime.

Table 1 Results with Boolean Network

Table 2 presents our results for the Coupled Map Lattice model. In these experiments, the network topology is a random 4-regular graph, and the coupling constant is fixed at s=0.2. Because λ≈3.56995 is the onset of chaos in the logistic map, we chose λ=3.5 and λ=3.6 to represent non-chaotic and chaotic dynamics, respectively. For a random 4-regular graph with 10 nodes, our GGN model attains approximately 100% accuracy in the network reconstruction task. For a system with 30 nodes it still achieves high accuracy, and the performance on non-chaotic dynamics is better than on chaotic dynamics.

Table 2 Results with CML model

In the experiments on the Kuramoto model, we used Erdős–Rényi random graphs with a connection probability of 0.5. Since the onset of synchronization is at k=kc (in our cases, kc=1.20 for the 10-node system and kc=0.41 for the 30-node system), we chose k=1.1kc and k=0.9kc to represent coherent and incoherent dynamics, respectively. Here we used data of only two dimensions (speed and amplitude), as opposed to the four dimensions in NRI's original setting (speed, phase, amplitude and intrinsic frequency), so the performance of the NRI model here is lower than that reported in Kipf's original paper (Kipf et al. 2018). The results are shown in Table 3. As for BN and CML, our GGN model attains better accuracy in the coherent cases, which are more regular than the incoherent ones.

Table 3 Results with Kuramoto model

In summary, the above experiments clearly demonstrate the competence and generality of our GGN model. As shown in the tables, GGN achieves high accuracy on all three simulated networks, and its performance remains good when the number of nodes increases and the system shifts from the non-chaotic to the chaotic regime. Although our model achieves relatively lower accuracy in the chaotic cases and is not perfectly stable with the current data size (in the CML and Kuramoto experiments, there is roughly a 1/5 chance that performance drops), the overall results are satisfactory.

Reconstruction accuracy with network structure

In this section, we used a 100-node Boolean Network with the Voter dynamics (Li et al. 2015) (for a node with degree k and m neighbours in state 1, at the next time step the node is in state 1 with probability m/k and in state 0 with probability (k−m)/k) to study how network structure affects the network reconstruction performance of our GGN model. Specifically, we studied WS networks and BA networks and examined how the reconstruction accuracy changes under different network parameters. We also experimented with two different data sizes, 500 and 5000 pairs of state-transition sequences, to see how the network structure affects the amount of data needed.
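For reference, one step of the Voter dynamics described above can be sketched as follows; the WS graph parameters shown are only an example configuration.

```python
import numpy as np
import networkx as nx

def voter_step(x, A, rng):
    """One step of the Voter dynamics: a node with degree k and m neighbours in
    state 1 takes state 1 with probability m / k at the next time step."""
    k = np.maximum(A.sum(axis=1), 1)       # node degrees (guard against isolated nodes)
    m = A @ x                              # number of neighbours currently in state 1
    return (rng.random(x.shape) < m / k).astype(int)

rng = np.random.default_rng(0)
A = nx.to_numpy_array(nx.watts_strogatz_graph(100, 20, 0.3, seed=0))
x = rng.integers(0, 2, size=100)           # random initial binary states
x_next = voter_step(x, A, rng)             # one pair of a state-transition sequence
```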

In the first two experiments, we studied WS networks with different control parameters. In the former case (see Fig. 5a), the independent variable is the re-wiring probability p. We note that the reconstruction accuracy declines slowly for p∈[10⁻⁴,10⁻²] but drops sharply when p exceeds 10⁻². Since the average path length of the network drops quickly before p=10⁻² while our reconstruction accuracy remains roughly constant, the accuracy appears insensitive to it. On the other hand, the Clustering Coefficient of the network drops quickly when p exceeds 10⁻² and declines slowly for smaller p (Watts and Strogatz 1998), which matches our accuracy curves. We may therefore conclude that the reconstruction accuracy in WS networks is mainly governed by the Clustering Coefficient. Moreover, as the data size increases, the performance improves significantly for all values of p, and the slope of the decline is greatly reduced, so increasing the data size effectively mitigates the effect of increasing the re-wiring probability.

Fig. 5
figure 5

Accuracy of reconstruction with different parameters in WS network structures. a: WS networks with different re-wiring probability p (with number of neighbours = 20); b: WS networks with different numbers of neighbours (with p=0.3). We experimented with two different data sizes, 500 and 5000 pairs of state-transition sequences, represented in each plot by the green and orange lines, respectively.

In the latter case (see Fig. 5b), we studied WS networks with different numbers of neighbours. Here the situation is simpler: as the number of neighbours increases, the complexity of the network also grows, which in turn makes learning more difficult, so the amount of data needed increases with the number of neighbours.

We then studied the reconstruction accuracy for different numbers of connections per node in BA networks (see Fig. 6). The result is similar to that for different numbers of neighbours in WS networks, but here increasing the data size has a smaller effect than in the WS networks. This is probably because in BA networks a few hub nodes can greatly affect the dynamics of the whole network, which makes the complexity even higher; therefore the need for data is greater for BA networks.

Fig. 6
figure 6

Accuracy of reconstruction in BA networks under different numbers of connections per node

Reconstruction accuracy with data size

In this section we study the dependence of the reconstruction accuracy on the amount of data. We performed our experiments on the CML model with chaotic dynamics. As illustrated in Fig. 7, the reconstruction accuracy improves significantly when more data is fed to the model. We also noticed that an insufficient amount of data leads to high deviation, which means that our method may either produce highly accurate results or fail.

Fig. 7
figure 7

Accuracy of reconstruction versus the size of the training data. The amount of training data ranges from about 10² to 10³ samples, with each sample evolving for T=100 time steps. The results are obtained on the CML model with 10 nodes, where the network topology is a random 4-regular graph. We set s=0.2 and λ=3.6 to generate chaotic dynamics.

Conclusion

In this work we introduced GGN, a model-free, purely data-driven method that can simultaneously reconstruct the network topology and predict the dynamics from time series data of node states. Without any prior knowledge of the network structure, it completes both tasks with high accuracy.

In a series of experiments, we demonstrated that GGN can be applied to a variety of dynamical systems, including continuous, discrete, and even binary ones, and we found that in most cases GGN reconstructs the network better from non-chaotic data. To further explore GGN's properties and probe its limits, we conducted experiments with different network topologies and different data volumes. The results show that the network reconstruction ability of our model is strongly correlated with the complexity of the dynamics and the Clustering Coefficient of the network. Increasing the data size can significantly improve GGN's network reconstruction performance, while a small data size results in large deviation and unstable performance.

However, we are well aware that the current work has some limitations. It focuses only on static graphs and Markovian dynamics. Besides, due to the limitations of our computational resources, the maximum network size we can process is about 100 nodes. Several possible approaches may help us overcome these problems. First, if we parameterize the network generator in a dynamical way, that is, allow the generator parameters to change over time, we can break through the limitation of static graphs. Second, if we replace the MLP networks in the framework with RNNs, learning non-Markovian dynamics becomes possible. Third, to improve the scalability of our framework, node-by-node reconstruction of the network can be adopted to reduce the space complexity. Another promising way to overcome the size limitation is to use a graph convolutional network (GCN) for the dynamics learner: GCNs have proved very useful in a wide variety of fields, although our experiments suggest they cannot simulate some complex nonlinear processes well. In future work, we will further enhance the capacity of our model so as to break through these limitations.