
Human information processing in complex networks

An Author Correction to this article was published on 07 July 2020

This article has been updated

Abstract

Humans communicate using systems of interconnected stimuli or concepts—from language and music to literature and science—yet it remains unclear how, if at all, the structure of these networks supports the communication of information. Although information theory provides tools to quantify the information produced by a system, traditional metrics do not account for the inefficient ways that humans process this information. Here, we develop an analytical framework to study the information generated by a system as perceived by a human observer. We demonstrate experimentally that this perceived information depends critically on a system’s network topology. Applying our framework to several real networks, we find that they communicate a large amount of information (having high entropy) and do so efficiently (maintaining low divergence from human expectations). Moreover, we show that such efficient communication arises in networks that are simultaneously heterogeneous, with high-degree hubs, and clustered, with tightly connected modules—the two defining features of hierarchical organization. Together, these results suggest that many communication networks are constrained by the pressures of information transmission, and that these pressures select for specific structural features.
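The two quantities at the heart of this framework, the entropy of the random walk a network generates and the KL divergence between the true transition probabilities and a human observer's internal estimates, can be written down compactly. The sketch below is a minimal illustration, not the authors' code: it assumes an undirected, unweighted network with transition probabilities P_ij = A_ij/k_i, a degree-proportional stationary distribution, and an expectation matrix P̂ supplied separately by some learning model (such as the one fitted in Fig. 2). All function names are illustrative.

```python
import numpy as np

def transition_matrix(A):
    """Random-walk transition probabilities P_ij = A_ij / k_i (assumes no isolated nodes)."""
    k = A.sum(axis=1)
    return A / k[:, None]

def stationary_distribution(A):
    """For an undirected network the stationary distribution is proportional to degree."""
    k = A.sum(axis=1)
    return k / k.sum()

def walk_entropy(A):
    """Entropy (bits per step) of the walk: S = -sum_i pi_i sum_j P_ij log2 P_ij."""
    P, pi = transition_matrix(A), stationary_distribution(A)
    with np.errstate(divide="ignore"):
        logP = np.where(P > 0, np.log2(P), 0.0)
    return -np.sum(pi[:, None] * P * logP)

def kl_divergence(A, P_hat):
    """D_KL between the true transitions P and an observer's expectations P_hat.
    Assumes P_hat > 0 wherever P > 0, which holds for the learning models below."""
    P, pi = transition_matrix(A), stationary_distribution(A)
    ratio = np.zeros_like(P)
    mask = P > 0
    ratio[mask] = np.log2(P[mask] / P_hat[mask])
    return np.sum(pi[:, None] * P * ratio)
```

As a sanity check, for k-regular graphs such as the degree-4, 15-node networks used in the behavioural experiments, every allowed transition has probability 1/4, so walk_entropy returns log2(4) = 2 bits per step regardless of the specific topology.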


Fig. 1: Human behavioural experiments reveal the dependence of perceived information on network topology.
Fig. 2: Modelling human estimates of transition probabilities.
Fig. 3: The entropy and KL divergence of real communication networks.
Fig. 4: The impact of network topology on entropy and KL divergence.
Fig. 5: Hierarchically modular networks support the efficient communication of information.


Data availability

Source data for Fig. 1, Supplementary Figs. 3–5 and Supplementary Tables 1–11 are provided in Supplementary Data File 1. Source data for Fig. 2 and Supplementary Fig. 1 are provided in Supplementary Data File 2. Source data for the networks in Fig. 3, Table 1 and Supplementary Figs. 6–9 are either publicly available or provided in Supplementary Data File 3 (see Supplementary Table 12 for details).

Code availability

The code that supports the findings of this study is available from the corresponding author upon reasonable request.



Acknowledgements

We thank E. Horsley, H. Ju, D. Lydon-Staley, S. Patankar, P. Srivastava and E. Teich for feedback on earlier versions of the manuscript. We thank D. Zhou for providing the code used to parse the texts. D.S.B., C.W.L. and A.E.K. acknowledge support from the John D. and Catherine T. MacArthur Foundation, the Alfred P. Sloan Foundation, the ISI Foundation, the Paul G. Allen Family Foundation, the Army Research Laboratory (W911NF-10-2-0022), the Army Research Office (Bassett-W911NF-14-1-0679, Grafton-W911NF-16-1-0474, DCIST-W911NF-17-2-0181), the Office of Naval Research, the National Institute of Mental Health (2-R01-DC-009209-11, R01-MH112847, R01-MH107235, R21-MH-106799), the National Institute of Child Health and Human Development (1R01HD086888-01), the National Institute of Neurological Disorders and Stroke (R01 NS099348) and the National Science Foundation (BCS-1441502, BCS-1430087, NSF PHY-1554488 and BCS-1631550). L.P. is supported by an NSF Graduate Research Fellowship. The content is solely the responsibility of the authors and does not necessarily represent the official views of any of the funding agencies.

Author information


Contributions

C.W.L. and D.S.B. conceived the project. C.W.L. designed the framework and performed the analysis. C.W.L. and A.E.K. performed the human experiments. C.W.L. wrote the manuscript and Supplementary Information. L.P., A.E.K. and D.S.B. edited the manuscript and Supplementary Information.

Corresponding author

Correspondence to Danielle S. Bassett.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Distributions of network effects over individual subjects.

a-e, Distributions over subjects of the different reaction time effects: the entropic effect (n = 177), or the increase in reaction times for increasing produced information (a); the extended cross-cluster effects (n = 173), or the difference in reaction times between internal and cross-cluster transitions (b), between boundary and cross-cluster transitions (c), and between internal and boundary transitions (d) in the modular graph; and the modular effect (n = 84), or the difference in reaction times between the modular network and random k-4 networks (e). f-j, Distributions over subjects of the different effects on error rates: the entropic effect (f), the extended cross-cluster effects (g-i), and the modular effect (j).

Extended Data Fig. 2 Correlations between different network effects across subjects.

a, Pearson correlations between the entropic and extended cross-cluster effects on reaction times (n = 142 subjects). b, Pearson correlations between the entropic and extended cross-cluster effects on error rates (n = 142 subjects). In a and b, the modular effects on reaction times and error rates are not shown because they were measured in a different population of subjects. c, Pearson correlation between the impact on reaction time and error rate for the entropic effect (n = 177 subjects), extended cross-cluster effects (n = 173 subjects), and the modular effect (n = 84 subjects). Statistically significant correlations are indicated by p-values less than 0.001 (***), less than 0.01 (**), and less than 0.05 (*).

Extended Data Fig. 3 KL divergence of real networks for different values of η.

a, KL divergence of fully randomized versions of the real networks listed in Table S12 (\({D}_{\,\text{KL}}^{\text{rand}\,}\)) compared with the true value (\({D}_{\,\text{KL}}^{\text{real}\,}\)) as η varies from zero to one. Every real network maintains lower KL divergence than the corresponding randomized network across all values of η. b, Difference between the KL divergence of real and fully randomized networks as a function of η. c, KL divergence of degree-preserving randomized versions of the real networks (\({D}_{\,\text{KL}}^{\text{deg}\,}\)) compared with \({D}_{\,\text{KL}}^{\text{real}\,}\) as η varies from zero to one. The real networks display lower KL divergence than the degree-preserving randomized versions across all values of η. d, Difference between the KL divergence of real and degree-preserving randomized networks as a function of η. All networks are undirected, and each line is calculated using one randomization of the corresponding real network.
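Eq. (10) is not reproduced on this page, so the sketch below makes an explicit assumption: for a learner whose memory of past states decays as η^t, the expectations take the standard closed form P̂ = (1 − η) P (I − η P)^{−1}, which matches the role η plays in this figure. The randomized baselines are built with networkx, using a same-size Erdős–Rényi graph for full randomization and Maslov–Sneppen-style double edge swaps for the degree-preserving null; helper functions from the earlier sketch are reused, and all names and defaults are illustrative.

```python
import networkx as nx
import numpy as np

def expectations(P, eta):
    """Assumed closed form of Eq. (10): P_hat = (1 - eta) * P @ inv(I - eta * P)."""
    n = P.shape[0]
    return (1 - eta) * P @ np.linalg.inv(np.eye(n) - eta * P)

def randomized_baselines(G, swaps_per_edge=10, seed=None):
    """Full randomization (same number of nodes and edges) and a degree-preserving
    randomization built from double edge swaps."""
    m = G.number_of_edges()
    G_rand = nx.gnm_random_graph(G.number_of_nodes(), m, seed=seed)
    G_deg = G.copy()
    nx.double_edge_swap(G_deg, nswap=swaps_per_edge * m, max_tries=100 * swaps_per_edge * m)
    return G_rand, G_deg

def kl_gap_versus_eta(G, etas=np.linspace(0.05, 0.95, 10)):
    """D_KL of the real network minus D_KL of its fully randomized baseline, swept over eta.
    Reuses transition_matrix and kl_divergence from the earlier sketch; assumes the
    randomized graph has no isolated nodes (resample otherwise)."""
    A_real = nx.to_numpy_array(G)
    A_rand = nx.to_numpy_array(randomized_baselines(G)[0])
    gaps = []
    for eta in etas:
        d_real = kl_divergence(A_real, expectations(transition_matrix(A_real), eta))
        d_rand = kl_divergence(A_rand, expectations(transition_matrix(A_rand), eta))
        gaps.append(d_real - d_rand)
    return np.array(gaps)
```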

Extended Data Fig. 4 KL divergence of real networks under the power-law model of human expectations.

a, KL divergence of fully randomized versions of the real networks listed in Table S12 (\({D}_{\,\text{KL}}^{\text{rand}\,}\)) compared with the true value (\({D}_{\,\text{KL}}^{\text{real}\,}\)). Expectations \(\hat{P}\) are defined as in Eq. (9) with f(t) = (t + 1)^−α, and we allow α to vary between 1 and 10. The real networks maintain lower KL divergence than the randomized networks across all values of α. b, Difference between the KL divergence of real and fully randomized networks as a function of α. c, KL divergence of degree-preserving randomized versions of the real networks (\({D}_{\,\text{KL}}^{\text{deg}\,}\)) compared with \({D}_{\,\text{KL}}^{\text{real}\,}\) as α varies from 1 to 10. The real networks display lower KL divergence than the degree-preserving randomized versions across all values of α. d, Difference between the KL divergence of real and degree-preserving randomized networks as a function of α. All networks are undirected, and each line is calculated using one randomization of the corresponding real network.

Extended Data Fig. 5 KL divergence of real networks under the factorial model of human expectations.

a, KL divergence of fully randomized versions of the real networks listed in Table S12 (\({D}_{\,\text{KL}}^{\text{rand}\,}\)) compared with the exact value (\({D}_{\,\text{KL}}^{\text{real}\,}\)). Expectations \(\hat{P}\) are defined as in Eq. (9) with f(t) = 1/t!. b, KL divergence of degree-preserving randomized versions of the real networks (\({D}_{\,\text{KL}}^{\text{deg}\,}\)) compared with \({D}_{\,\text{KL}}^{\text{real}\,}\). In both cases, the real networks maintain lower KL divergence than the randomized versions. Data points and error bars (standard deviations) are estimated from 10 realizations of the randomized networks.
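Both of the preceding figures vary only the weighting function f(t) over memory timescales. Assuming Eq. (9) has the weighted-powers form implied by that description, with P̂ proportional to the f(t)-weighted sum of powers of the transition matrix, a single truncated-series routine covers the power-law and factorial models; the function names and the truncation level t_max are illustrative choices, not values from the paper.

```python
import numpy as np
from math import factorial

def expectations_from_weights(P, f, t_max=50):
    """Assumed form of Eq. (9): P_hat proportional to sum_t f(t) * P^(t+1),
    normalized by sum_t f(t) and truncated at t_max (rows remain stochastic)."""
    P_hat = np.zeros_like(P)
    P_power = P.copy()          # holds P^(t+1), starting at t = 0
    norm = 0.0
    for t in range(t_max + 1):
        w = f(t)
        P_hat += w * P_power
        norm += w
        P_power = P_power @ P
    return P_hat / norm

# Power-law weights as in Extended Data Fig. 4 (alpha between 1 and 10):
power_law = lambda alpha: (lambda t: (t + 1) ** (-alpha))
# Factorial weights as in Extended Data Fig. 5:
factorial_weights = lambda t: 1.0 / factorial(t)

# Example usage (P from transition_matrix in the earlier sketch):
# P_hat = expectations_from_weights(P, power_law(2.0))
```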

Extended Data Fig. 6 Entropy and KL divergence of directed versions of real networks.

a, Entropy of directed versions of the real networks listed in Table S12 (Sreal) compared with fully randomized versions (Srand). Entropy is calculated directly from Eq. (2) with the stationary distribution \(\pi\) calculated numerically. b, KL divergence of directed versions of the real networks (\({D}_{\,\text{KL}}^{\text{real}\,}\)) compared with fully randomized versions (\({D}_{\,\text{KL}}^{\text{rand}\,}\)). Expectations \(\hat{P}\) are defined as in Eq. (10) with η set to the average value 0.80 from our human experiments. c, Entropy of randomized versions of directed real networks with in- and out-degrees preserved (Sdeg) compared with Sreal. d, KL divergence of degree-preserving randomized versions of directed real networks (\({D}_{\,\text{KL}}^{\text{deg}\,}\)) compared with \({D}_{\,\text{KL}}^{\text{real}\,}\). Data points and error bars (standard deviations) are estimated from 100 realizations of the randomized networks.
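For directed networks the stationary distribution is no longer degree-proportional and must be computed numerically, as the caption notes. A minimal sketch, assuming a strongly connected network in which every node has at least one outgoing edge, is to take the leading left eigenvector of the transition matrix; the function names are illustrative.

```python
import numpy as np

def stationary_distribution_directed(P):
    """Stationary distribution pi satisfying pi P = pi, taken as the left eigenvector
    of P with the largest real eigenvalue (assumes an irreducible chain)."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = np.abs(pi)
    return pi / pi.sum()

def walk_entropy_directed(A):
    """Directed analogue of Eq. (2): S = -sum_i pi_i sum_j P_ij log2 P_ij,
    with P_ij = A_ij / k_i^out (assumes every node has out-degree > 0)."""
    k_out = A.sum(axis=1)
    P = A / k_out[:, None]
    pi = stationary_distribution_directed(P)
    with np.errstate(divide="ignore"):
        logP = np.where(P > 0, np.log2(P), 0.0)
    return -np.sum(pi[:, None] * P * logP)
```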

Extended Data Fig. 7 Entropy and KL divergence of temporally evolving versions of real networks.

a, Entropy of temporally evolving versions of the real networks listed in Table S12 (Sreal) compared with fully randomized versions (Srand). Each line represents a sequence of growing networks and each symbol represents the final version of the network. b, KL divergence of evolving versions of the real networks (\({D}_{\,\text{KL}}^{\text{real}\,}\)) compared with fully randomized versions (\({D}_{\,\text{KL}}^{\text{rand}\,}\)). Expectations \(\hat{P}\) are defined as in Eq. (10) with η set to the average value 0.80 from our human experiments. c, Entropy of temporally evolving versions of real networks (Sreal) compared with degree-preserving randomized versions (Sdeg). d, KL divergence of temporally evolving versions of real networks (\({D}_{\,\text{KL}}^{\text{real}\,}\)) compared with degree-preserving randomized versions (\({D}_{\,\text{KL}}^{\text{deg}\,}\)). Across all panels, each point along the lines represents an average over five realizations of the randomized networks.

Extended Data Fig. 8 Evolution of the difference in entropy and KL divergence between real networks and randomized versions.

a, Difference between the entropy of temporally evolving real networks (Sreal) and the entropy of fully randomized versions of the same networks (Srand) plotted as a function of the fraction of the final network size. Each line represents a sequence of growing networks that culminates in one of the communication networks studied in the main text. b, Difference between the KL divergence of temporally evolving real networks (\({D}_{\,\text{KL}}^{\text{real}\,}\)) and that of fully randomized versions (\({D}_{\,\text{KL}}^{\text{rand}\,}\)) plotted as a function of the fraction of the final network size. When calculating the KL divergences, the expectations \(\hat{P}\) are defined as in Eq. (10) with η set to the average value 0.80 from our human experiments. Across both panels, each point along the lines represents an average over five realizations of the randomized networks.

Extended Data Fig. 9 Comparison of directed citation and language networks.

a, Out-degrees \({k}_{i}^{\,\text{out}\,}={\sum }_{j}{G}_{ij}\) of nodes in the arXiv Hep-Th citation network compared with the in-degrees \({k}_{i}^{\,\text{in}\,}={\sum }_{j}{G}_{ji}\) of the same nodes; we find a weak Spearman’s correlation of rs = 0.18. b, Out-degrees compared with in-degrees of nodes in the Shakespeare language (noun transition) network; we find a strong correlation rs = 0.92. c, Entries in the stationary distribution πi for different nodes in the citation network compared with the node-level entropy Si; we find a weakly negative Spearman’s correlation rs = − 0.09. d, Entries in the stationary distribution compared with node-level entropies in the language network; we find a strong Spearman’s correlation rs = 0.87.
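The quantities compared in this figure can be reproduced with a short sketch. Here the node-level entropy S_i is taken to be the entropy of a node's outgoing transition probabilities (for an unweighted network this reduces to log2 of the out-degree); that definition, like the function names, is an assumption made for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

def node_entropies(A):
    """Node-level entropy S_i = -sum_j P_ij log2 P_ij of the outgoing transitions."""
    k_out = A.sum(axis=1)
    P = A / k_out[:, None]
    with np.errstate(divide="ignore"):
        logP = np.where(P > 0, np.log2(P), 0.0)
    return -np.sum(P * logP, axis=1)

def figure_correlations(A, pi):
    """Spearman correlations shown in this figure: out- versus in-degree,
    and stationary probability pi_i versus node-level entropy S_i."""
    k_out, k_in = A.sum(axis=1), A.sum(axis=0)
    S = node_entropies(A)
    return spearmanr(k_out, k_in).correlation, spearmanr(pi, S).correlation
```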

Extended Data Fig. 10 Comparison of all-word transition networks and noun transition networks.

a-b, Difference between the KL divergence of language (word transition) networks (\({D}_{\,\text{KL}}^{\text{real}\,}\)) and degree-preserving randomized versions of the same networks (\({D}_{\,\text{KL}}^{\text{deg}\,}\)). We consider networks of transitions between all words (a) and networks of transitions between nouns (b). c-d, Difference between the average clustering coefficient of language networks (CCreal) and degree-preserving randomized versions of the same networks (CCdeg), where transitions are considered between all words (c) or only nouns (d). In all panels, data points and error bars (standard deviations) are estimated from 100 realizations of the randomized networks, and the networks are undirected.
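The clustering comparison in panels c and d can be sketched with networkx, using the same double-edge-swap null as in the earlier sketch so that degrees are preserved while triangles are destroyed. This is an illustrative sketch under the assumption that the null is built by edge swapping on the undirected networks; the parameter defaults are arbitrary.

```python
import networkx as nx
import numpy as np

def clustering_gap(G, n_realizations=100, swaps_per_edge=10):
    """Average clustering coefficient of G minus the mean clustering coefficient of
    degree-preserving randomizations, plus the standard deviation across realizations."""
    m = G.number_of_edges()
    cc_real = nx.average_clustering(G)
    cc_rand = []
    for _ in range(n_realizations):
        G_deg = G.copy()
        nx.double_edge_swap(G_deg, nswap=swaps_per_edge * m,
                            max_tries=100 * swaps_per_edge * m)
        cc_rand.append(nx.average_clustering(G_deg))
    return cc_real - np.mean(cc_rand), np.std(cc_rand)
```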

Supplementary information

Supplementary Information

Supplementary discussion, Figs. 1–21 and Tables 1–12.

Reporting Summary

Source data

Source Data Fig. 1

Source data for Fig. 1, Supplementary Figs. 3–5 and Supplementary Tables 1–11.

Source Data Fig. 2

Source data for Fig. 2 and Supplementary Fig. 1.

Source Data Fig. 3

Source data for the networks in Fig. 3, Table 1 and Supplementary Figs. 6–9.


About this article


Cite this article

Lynn, C.W., Papadopoulos, L., Kahn, A.E. et al. Human information processing in complex networks. Nat. Phys. 16, 965–973 (2020). https://doi.org/10.1038/s41567-020-0924-7


  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1038/s41567-020-0924-7
