Evaluating graph resilience with tensor stack networks: a Keras implementation

  • S.I.: Emerging Trends of Applied Neural Computation - E_TRAINCO
  • Journal: Neural Computing and Applications

Abstract

In communication networks, resilience or structural coherency, namely the ability to maintain overall connectivity even after some data links are lost for an indefinite time, is a major design consideration. Evaluating resilience is computationally challenging, since it often requires examining a prohibitively large number of connections or node combinations, depending on the definition of structural coherency adopted. To study resilience, communication systems are treated at an abstract level as graphs, where the existence of an edge depends on the local connectivity properties of the two nodes it joins. Once the graph is derived, its resilience is evaluated by a tensor stack network (TSN). TSNs are an emerging deep learning classification methodology for big data that can be expressed either as stacked vectors or as matrices, such as images or oversampled data from multiple-input multiple-output (MIMO) digital communication systems. As their collective name suggests, the architecture of TSNs is based on tensors, namely higher-dimensional generalizations of vectors, which simulate the simultaneous training of a cluster of ordinary multilayer feedforward neural networks (FFNNs). In the TSN structure the FFNNs are also interconnected, so that at certain steps of the training process they learn from each other's errors. An additional advantage of the TSN training process is that it is regularized, resulting in parsimonious classifiers. The TSNs are trained to evaluate how resilient a graph is, where the true structural strength is assessed through three established resilience metrics, namely the Estrada index, the odd Estrada index, and the clustering coefficient. Although modelling the communication system exclusively in structural terms is function oblivious, the approach can be applied to virtually any type of communication network, independently of the underlying technology. The classification achieved by four TSN configurations is evaluated through six metrics, including the F1 score as well as the type I and type II errors, derived from the corresponding contingency tables. Moreover, the effects of sparsifying the synaptic weights resulting from the training process are explored for various thresholds. Results indicate that the proposed method achieves very high accuracy while being considerably faster than the direct computation of each of the three resilience metrics. Concerning sparsification, accuracy drops beyond a certain threshold, meaning that the TSNs cannot be sparsified further; their training is thus very efficient in that respect.
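The three ground-truth metrics named above are spectral or local-structure quantities that can be computed directly for comparison. Below is a minimal Python sketch, assuming the standard definitions: the Estrada index as the sum of e^λ over the adjacency eigenvalues, the odd Estrada index as the odd part of that exponential (the sum of sinh λ, counting odd-length closed walks), and the network-average local clustering coefficient. The test graph at the end is hypothetical. The full eigendecomposition costs O(n³) time, which illustrates why a trained classifier can be considerably faster on large graphs.

```python
import numpy as np
import networkx as nx

def resilience_metrics(G):
    """Compute the three resilience metrics named in the abstract.

    Estrada index:     sum of exp(lambda_i) over adjacency eigenvalues.
    Odd Estrada index: assumed here to be the odd part of the exponential,
                       sum of sinh(lambda_i) (odd-length closed walks).
    Clustering:        network-average local clustering coefficient.
    """
    A = nx.to_numpy_array(G)              # dense adjacency matrix
    lam = np.linalg.eigvalsh(A)           # spectrum of a symmetric matrix
    estrada = np.exp(lam).sum()
    odd_estrada = np.sinh(lam).sum()
    clustering = nx.average_clustering(G)
    return estrada, odd_estrada, clustering

# Hypothetical test graph, not from the paper's datasets.
G = nx.erdos_renyi_graph(100, 0.05, seed=42)
print(resilience_metrics(G))
```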
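The abstract describes a TSN as a cluster of multilayer FFNNs trained simultaneously, interconnected so that they learn from each other's errors, with regularized training. The Keras sketch below only approximates that idea under stated assumptions: parallel dense branches over a shared input are merged so that gradients flow across the whole cluster, and L2 weight penalties stand in for the regularization. The subnet count, layer widths, and penalty coefficient are hypothetical, not the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import Model, layers, regularizers

def tsn_sketch(input_dim, n_subnets=4, hidden=64, n_classes=2):
    """Rough TSN-flavoured classifier: a cluster of interconnected FFNNs."""
    inp = layers.Input(shape=(input_dim,))
    # A cluster of small feedforward subnetworks over a shared input.
    branches = [
        layers.Dense(hidden, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4))(inp)
        for _ in range(n_subnets)
    ]
    # Interconnection: the merged representation lets training errors in
    # one branch influence the weights of the others via backpropagation.
    merged = layers.Concatenate()(branches)
    mixed = layers.Dense(hidden, activation="relu",
                         kernel_regularizer=regularizers.l2(1e-4))(merged)
    out = layers.Dense(n_classes, activation="softmax")(mixed)
    return Model(inp, out)

model = tsn_sketch(input_dim=256)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```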
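Several of the reported evaluation metrics can be read directly off a 2×2 contingency (confusion) table. A minimal sketch follows, assuming binary labels with 1 marking the resilient class, with the type I error taken as the false positive rate and the type II error as the false negative rate.

```python
import numpy as np

def contingency_metrics(y_true, y_pred):
    """F1 score and type I/II error rates from a 2x2 contingency table.

    Assumes binary labels where 1 marks the resilient class.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    type_i = fp / (fp + tn)            # false positive rate
    type_ii = fn / (fn + tp)           # false negative rate
    return f1, type_i, type_ii
```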
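Finally, the sparsification experiment amounts to zeroing synaptic weights whose magnitude falls below a threshold and re-measuring accuracy; past some threshold the accuracy drops, which locates the limit of compressibility. A minimal Keras sketch, assuming plain magnitude thresholding (the paper's exact sparsification rule is not restated here):

```python
import numpy as np

def sparsify(model, tau):
    """Zero every synaptic weight with magnitude below the threshold tau."""
    for layer in model.layers:
        weights = layer.get_weights()
        if weights:
            layer.set_weights(
                [np.where(np.abs(w) < tau, 0.0, w) for w in weights])

# Hypothetical sweep: re-evaluate accuracy after each threshold to find
# the point where it starts to drop.
# for tau in (1e-4, 1e-3, 1e-2, 1e-1):
#     sparsify(model, tau)
#     model.evaluate(x_test, y_test)
```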

Acknowledgements

The authors acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.

Author information

Corresponding author

Correspondence to Georgios Drakopoulos.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Drakopoulos, G., Mylonas, P. Evaluating graph resilience with tensor stack networks: a Keras implementation. Neural Comput & Applic 32, 4161–4176 (2020). https://doi.org/10.1007/s00521-020-04790-1
