1 Introduction

A widespread problem in modern Data Science is how to combine multiple data types such as images, text, and numbers in a meaningful framework [15]. The traditional approach to tackle this challenge is to construct machine learning pipelines in which each data type is treated separately—sequentially or in parallel—and the partial results are combined at the end of the procedure [6, 7]. There are two problems with such a procedure. First, it leads to the development of ad-hoc solutions that are highly contingent on the dataset in question [8, 9]. Second, each model is trained independently of the others, meaning that the relationships between the different types of data are not taken into account [10, 11]. These problems highlight the need for a unified statistical framework that is applied simultaneously to the different types of data [12].

In this paper, we investigate the problem of clustering and finding topics in collections of written documents for which additional information is available as metadata and as hyperlinks between documents. We obtain a unified statistical framework for this problem by mapping it to the problem of inferring groups in multilayer networks. The key design of the unified framework proposed here is inspired by the connections [3, 13–15] between the problems of identifying (i) topics in a collection of written documents (i.e. topic modeling) [16] and (ii) communities in complex networks (i.e. community detection) [17]. In particular, Ref. [15] shows that both problems can be tackled using Stochastic Block Models (SBMs) [3, 12, 18–22] and that SBMs, previously applied to find communities in complex networks, outperform and overcome many of the difficulties of the most popular unsupervised methods to infer structures from large collections of texts (topic-modelling methods such as Latent Dirichlet Allocation [23] and its generalizations). However, these approaches have been applied only to the textual part of collections of documents, ignoring additional information available about them. For instance, in datasets of scientific publications, one would consider only the text of the articles but not the citation network (used in traditional community detection methods [17]) or other metadata (such as the journal or bibliographical classification) [24, 25]. We propose here an extension of Ref. [15] and show how the diversity of information typically available about documents can be incorporated in the same framework by using multilayer SBMs [3, 10, 11]. As illustrated in Fig. 1, in addition to the bipartite Document-Word layer discussed in Ref. [15], here we incorporate a Hyperlink layer connecting the different written documents and a Metadata-Document layer that incorporates tags and other metadata. The key difference from other multilayer networks [4], as explored in Sect. 2 below, is that statistical laws [26] governing the frequency of words in documents leave fingerprints on the density of the different network layers. Our investigations in different datasets, reported in Sect. 3 for a collection of Wikipedia articles and in the Supplementary Information for three other datasets, reveal that the proposed multilayer approach leads to improved results when compared to both the topic-modelling approach of Ref. [15] and the usual community detection of (hyperlink) networks. Our approach leads to a more nuanced view of the communities of documents, generates a list of topics associated with the communities, and improves the link-prediction capabilities when compared to the hyperlink network alone [27]. Details of our methods can be found in the appendices, the Supplementary Information, and the repository [28].

Figure 1

Different views on the relationship between written documents. Lower layer: a bipartite multi-graph of documents (circles) and word types (triangles), links correspond to word tokens. Middle layer: directed graph of documents (e.g., hyperlinks in Wikipedia or citations in Scientific Papers). Upper layer: a bipartite graph of documents and tags (squares), used to classify the documents

2 Multiple data sources as multilayer networks

In this section we present the general methodology of our paper: we introduce the types of data we are interested in (Sect. 2.1), show how they can be represented as a multilayer network and discuss the properties of these networks (Sect. 2.2), and describe how they can be modelled using Stochastic Block Models (Sect. 2.3).

2.1 Setting: multiple data sources

We consider a collection of \(d=1, \ldots,D\) documents and we are interested in clustering and finding underlying similarities between them using combinations of the following information:

Text (T):

Each document contains \(k_{d}\) word tokens from a vocabulary of V word types (\(M=\sum_{d} k_{d}\) is the total number of word tokens).

Hyperlinks (H):

Documents are linked to each other, forming a (directed) graph or network.

Metadata (M):

Documents are classified by tags or other metadata.

These characteristics are typical of textual data and networks. Here we explore three types of such datasets, summarized in Table 1. The main dataset we use to illustrate our results was extracted from the English Wikipedia, where the documents are articles (in scientific categories), the text is the content of the articles, hyperlinks are links between articles contained in the text, and metadata are tags introduced by users to classify the articles (categories). In our main example, we selected hundreds of articles in one of three scientific categories of Wikipedia (see Appendix 1 for details). Our main findings are confirmed in a second Wikipedia dataset (obtained by choosing different scientific categories), in a citation dataset (documents are scientific papers, hyperlinks are citations, the text is extracted from the title and abstract, and metadata are scientific categories), and in an E-mail dataset (documents are all E-mails from the same user, hyperlinks correspond to E-mails sent between users, and the text is the content of the E-mails). These results and further details of the data are presented in the Supplementary Information (see Additional file 1-Sect. 1).

Table 1 Summary of the datasets used in this paper

2.2 Data as networks

The data described above can be represented as multilayer networks. The Hyperlink layer is the most obvious one: documents are nodes and the hyperlinks are directed edges. The Metadata layer is a bipartite network with metadata tags and documents as nodes, where an undirected edge connects a document to each metadata tag it contains. Finally, the Text layer is obtained by restricting the text analysis to the level of word frequencies (bag of words) and then considering the bipartite network of word types and documents, where the edges correspond to word tokens (i.e., the count of how often a word type appears in a document). While word nodes and metadata tags appear only in the Text and Metadata layers, respectively, all layers have document nodes in common. The novelty of our multilayer approach, in comparison to other approaches using multilayer networks, is the inclusion of the Text layer. The importance of using a bipartite multigraph layer [22] to represent the text, instead of alternative “word networks” [14, 29, 30], is that it retains the complete information about word occurrences in documents and allows for a formal connection to topic-modelling methods [15, 20].
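To make this representation concrete, the following minimal sketch builds the three layers as a single edge list. All names (document identifiers, token lists, the word:/tag: prefixes, the layer labels) are illustrative conventions chosen for this example rather than part of our pipeline; see Ref. [28] for the actual code.

```python
from collections import Counter

# Toy inputs (illustrative only): tokenised documents, hyperlinks, and metadata tags.
docs = {"doc1": ["quantum", "flux", "quantum"],
        "doc2": ["cell", "protein", "cell", "cell"]}
hyperlinks = [("doc1", "doc2")]                     # directed document -> document links
metadata = {"doc1": ["Physics"], "doc2": ["Biology"]}

edges = []  # each entry: (source, target, multiplicity, layer)

# Text layer: bipartite multigraph of documents and word types; the edge
# multiplicity is the number of tokens of that word type in the document.
for d, tokens in docs.items():
    for word, count in Counter(tokens).items():
        edges.append((d, "word:" + word, count, "text"))

# Hyperlink layer: directed edges between documents.
for src, trg in hyperlinks:
    edges.append((src, trg, 1, "hyperlink"))

# Metadata layer: bipartite graph of documents and tags.
for d, tags in metadata.items():
    for tag in tags:
        edges.append((d, "tag:" + tag, 1, "metadata"))

for e in edges:
    print(e)
```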

We now investigate the properties of the multilayer network described above, based on known results in networks and textual data. The most striking feature of this network is that the size of the different layers varies dramatically and scales differently with system size. A first indication of this lack of balance is seen by looking at the number of edges shown in Table 1: the number of edges in the text layer (i.e. word tokens) is substantially larger than the number of nodes or edges in all of the other layers. Such an imbalance is expected in all datasets in which the same type of data as outlined in Sect. 2.1 is present. To see this, we investigate in Fig. 2 how the average degree \(\langle k_{X} \rangle \) (number of edges/ total number of nodes) of the different node types X scale with the number of documents \(n_{D}\) (which plays the role of system size). For the document nodes in the Hyperlink layer and the Text layer we see a constant average degree, typical of sparse networks. The Metadata layer yields a trivial scaling linear with \(n_{D}\) as in dense networks because each document has one edge to a metadata node. More interestingly, the average degree of the word type nodes in the Text layer, \(\langle k_{V} \rangle \), shows a growth that scales as

$$\begin{aligned} \langle k_{V} \rangle \sim n_{D}^{\gamma }, \end{aligned}$$
(1)

with \(0 < \gamma <1\). This is between the usual limits expected for sparse (\(\gamma =0\)) and dense (\(\gamma =1\)) networks.
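As a sketch of how these curves can be measured, the snippet below computes the average degree of each node class from an edge list in the format of the earlier sketch (a random sample of \(n_{D}\) documents would be drawn before building the edge list). The node-type prefixes are the same illustrative convention as before.

```python
from collections import defaultdict

def average_degrees(edges):
    """Average degree of each node class (node type within a layer), counting
    edge multiplicities (word tokens), for (source, target, multiplicity, layer) entries."""
    degree = defaultdict(int)
    for src, trg, mult, layer in edges:
        degree[(src, layer)] += mult
        degree[(trg, layer)] += mult

    def node_class(node, layer):
        if node.startswith("word:"):
            return "word types (T)"
        if node.startswith("tag:"):
            return "metadata tags (M)"
        return "documents (" + layer + ")"

    totals, counts = defaultdict(int), defaultdict(int)
    for (node, layer), k in degree.items():
        totals[node_class(node, layer)] += k
        counts[node_class(node, layer)] += 1
    return {c: totals[c] / counts[c] for c in totals}

# Repeating this over random samples of n_D documents yields the curves of Fig. 2.
```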

Figure 2

Scaling of average degree for each class of node reveals sparse and dense layers. Scaling of the average degree \(\langle k_{X} \rangle \) with the number of documents \(n_{D}\) depends on the node type X. The average degree was computed over all nodes of the same type (see legend, where H, T, M indicate the layer) in a sample of \(n_{D}\) documents from the dataset. The symbols (error bars) are the average (standard deviation) over multiple random samples of documents. The prediction for the degree of word types using Eq. (1) is also plotted for reference

We now explain the observation in Eq. (1) in terms of general properties of text. More specifically, the type-token relationship in texts follows Heaps’ law [26, 31, 32], which states that the number of word types V scales with the number of word tokens M as

$$\begin{aligned} V \sim M^{\beta }, \end{aligned}$$
(2)

where \(0<\beta <1\) is the parameter of interest. The average degree is obtained as \(\langle k_{V} \rangle = M/V\) and \(n_{D} \propto M\) (where the proportionality constant is the average size of Wikipedia articles, in word tokens). Combining this with Eqs. (1) and (2), \(\langle k_{V} \rangle = M/V \sim M^{1-\beta } \sim n_{D}^{1-\beta }\), so that \(\gamma = 1-\beta \). From the data used here, we estimate a Heaps’ exponent \(\beta = 0.56\), which leads to a prediction of \(\gamma =0.44\). This prediction is shown as a dashed line in Fig. 2 and is in good agreement (for large \(n_{D}\)) with the average degree of word nodes.
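A minimal sketch of this estimate: subsample increasing numbers of documents, record the numbers of tokens M and types V, and fit the slope of log V versus log M. The synthetic documents at the end are placeholders for the real corpus, and the function name is our own.

```python
import random
import numpy as np

def heaps_exponent(docs, n_points=20, seed=0):
    """Estimate Heaps' exponent beta by measuring (M, V) on random
    subsets of documents of increasing size and fitting a power law."""
    rng = random.Random(seed)
    sizes = np.linspace(10, len(docs), n_points, dtype=int)
    M, V = [], []
    for n_d in sizes:
        tokens = [w for doc in rng.sample(docs, int(n_d)) for w in doc]
        M.append(len(tokens))
        V.append(len(set(tokens)))
    beta = np.polyfit(np.log(M), np.log(V), 1)[0]   # slope of log V vs log M
    return beta

# Placeholder corpus standing in for the Wikipedia articles:
rng = random.Random(1)
docs = [["w%d" % rng.randint(0, 5000) for _ in range(300)] for _ in range(500)]
beta = heaps_exponent(docs)
print("beta = %.2f, predicted gamma = 1 - beta = %.2f" % (beta, 1 - beta))
```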

2.3 Stochastic block models

To achieve our goal of clustering documents and identifying topics while considering multiple types of data simultaneously, we need to explore statistical patterns in the connectivity of the multilayer networks discussed above. This can be achieved using Stochastic Block Models (SBMs). The choice of SBMs is based on the existence of a successful computational and theoretical framework, reviewed in Ref. [12], that can be applied to networks with the characteristics needed in our problem: different types of networks (directed, bipartite, and with multi-edges), multilayer networks [11], and key ingredients for detecting communities (e.g., degree correction and nested/hierarchical generalizations [33]). Our previous analysis of bipartite word-document networks using this framework outperformed traditional topic-modelling approaches [15].

SBMs are a family of random-graph models that generate networks with adjacency matrix \(A_{ij}\) with probability \(P(\boldsymbol{A}|\boldsymbol{b})\), where the vector b, with entries \(b_{i} \in \{1, \ldots, B\}\), specifies the membership of nodes \(i=1,\ldots, D\) in one of B possible groups. For our multilayer network design—developed for the three types of data (H, T, M) discussed in Sect. 2.2—we fit an SBM to each layer and combine them by constraining the document groups to be the same across all layers, i.e. with a joint probability

$$\begin{aligned} P(\boldsymbol{A}_{\text{H}}, \boldsymbol{A}_{\text{T}}, \boldsymbol{A}_{\text{M}}|\boldsymbol{b}) = P( \boldsymbol{A}_{\text{H}}|\boldsymbol{b})P(\boldsymbol{A}_{\text{T}}|\boldsymbol{b})P( \boldsymbol{A}_{\text{M}}| \boldsymbol{b}), \end{aligned}$$
(3)

where \(\boldsymbol{A}_{\text{H}}\), \(\boldsymbol{A}_{\text{T}}\) and \(\boldsymbol{A}_{\text{M}}\) are the adjacency matrices of each respective layer. In each individual layer, edges between nodes i and j are sampled from a Poisson distribution with average [20]

$$\begin{aligned} \theta _{i}\theta _{j}\omega _{b_{i},b_{j}}, \end{aligned}$$
(4)

where \(\omega _{rs}\) is the expected number of edges between groups r and s, \(b_{i}\) is the group membership of node i, and \(\theta _{i}\) is the overall propensity with which node i is selected within its own group. Non-informative priors are used for the parameters θ and ω, and the marginal likelihood of the SBM is computed as [34]

$$\begin{aligned} P(\boldsymbol{A}|\boldsymbol{b}) = \int P(\boldsymbol{A}|\boldsymbol{\omega }, \boldsymbol{\theta }, \boldsymbol{b})P( \boldsymbol{\omega },\boldsymbol{ \theta }|\boldsymbol{b})\,d\boldsymbol{\theta }\,d\boldsymbol{\omega }. \end{aligned}$$
(5)

Based on this, we consider the overall posterior distribution for a single partition conditioned on the edges of all layers [35]

$$\begin{aligned} P(\boldsymbol{b}|\boldsymbol{A}_{\text{H}}, \boldsymbol{A}_{\text{T}}, \boldsymbol{A}_{\text{M}}) = \frac{P(\boldsymbol{A}_{\text{H}}|\boldsymbol{b})P(\boldsymbol{A}_{\text{T}}|\boldsymbol{b})P(\boldsymbol{A}_{\text{M}}|\boldsymbol{b})P(\boldsymbol{b})}{P(\boldsymbol{A}_{\text{H}}, \boldsymbol{A}_{\text{T}}, \boldsymbol{A}_{\text{M}})}. \end{aligned}$$
(6)

With this approach, not only the words but also the documents are clustered into groups. We implement the inference using the package graph-tool [28, 36–38] (see Additional file 1-Sect. 2 for details and Ref. [28] for our code).
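The following is a schematic sketch of how such a fit can be set up with graph-tool, assuming a graph g that contains all nodes and carries an edge property map g.ep.layer labelling each edge with its layer. The option names follow the graph-tool documentation for layered SBMs and may need adjustment for other versions of the package; the full pipeline with the exact settings used here is provided in the repository [28].

```python
import graph_tool.all as gt

def fit_multilayer_sbm(g):
    """Fit a nested, degree-corrected, layered SBM in which all layers share a
    single node partition (Eq. (3)).  `g` is assumed to carry an integer edge
    property map g.ep.layer identifying the layer of each edge."""
    state = gt.minimize_nested_blockmodel_dl(
        g,
        state_args=dict(
            base_type=gt.LayeredBlockState,
            state_args=dict(ec=g.ep.layer, layers=True, deg_corr=True),
        ),
    )
    # Refine the initial fit with additional merge-split MCMC sweeps.
    for _ in range(100):
        state.multiflip_mcmc_sweep(beta=float("inf"), niter=10)
    return state

# state = fit_multilayer_sbm(g)
# print("description length:", state.entropy())  # -log P(A, b), cf. Eq. (7)
# b = state.levels[0].b                          # group membership of every node
```

In graph-tool, samples from the posterior of Eq. (6) (used in Sect. 3.2) can be drawn by running the same sweeps with beta = 1 instead of the zero-temperature limit used above.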

3 Application to Wikipedia data

In this section we apply the methodology and ideas discussed above to the Wikipedia dataset, which contains articles classified by users into the categories Mathematics, Physics, and Biology. We are interested in comparing the outcomes and performance of the models discussed above applied to the different types of information in the data. We fit multiple variants of the multilayer SBM, varying which layers are included in the model.

3.1 Description length

The performance of each model can be measured by the extent to which it succeeds in describing (compressing) the data. This can be quantified by computing its description length (DL) [39, 40]

$$\begin{aligned} \mathrm{DL}=-\log P(\boldsymbol{A}_{\text{H}}, \boldsymbol{A}_{\text{T}}, \boldsymbol{A}_{\text{M}}, \boldsymbol{b}), \end{aligned}$$
(7)

which quantifies the amount of information necessary to describe both the data and the model parameters. From Eq. (6), we see that minimizing the description length is equivalent to maximizing the posterior probability \(P(\boldsymbol{b}|\boldsymbol{A}_{\text{H}}, \boldsymbol{A}_{\text{T}}, \boldsymbol{A}_{\text{M}})\).
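In practice, the DL of a fitted model can be read off directly from the inference state (e.g., graph-tool's state.entropy() returns the value of Eq. (7) in nats), and averaging over independent MCMC runs gives the values reported in Table 2. The helper below is a sketch: fit_model is a hypothetical wrapper standing in for one fit of the chosen layer combination.

```python
import numpy as np

def description_length_stats(fit_model, layers, n_runs=10):
    """Mean, standard deviation, and minimum description length (MDL) over
    repeated fits of the model built from the given layer combination.
    `fit_model(layers)` is a hypothetical wrapper returning a fitted state
    whose .entropy() is the DL of Eq. (7)."""
    dls = np.array([fit_model(layers).entropy() for _ in range(n_runs)])
    return dls.mean(), dls.std(), dls.min()

# for layers in ("H", "T", "M", "H+T", "H+M", "T+M", "H+T+M"):
#     print(layers, description_length_stats(fit_model, layers))
```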

In Table 2 we summarise the DL obtained for each model in our dataset. It is quite clear that the DL of the models containing the Text layer is much larger than that of the models containing only the Hyperlink and Metadata layers. This is a direct consequence of the large number of word types in the data compared to the number of documents or hyperlinks, i.e., the lack of balance between the layers mentioned in Sect. 2.2. This lack of balance between layers thus has important consequences for the inference of partitions and for our ability to compare the different models. For instance, the effectiveness of the multilayer approach could be evaluated by comparing the DL of the multilayer model (e.g., the DL of model \(H+T\)) to the sum of the DLs of the single-layer models (e.g., DL of model H + DL of model T). In our case this comparison is not very informative because the DL of the combined model is dominated by the largest layer and the DL of the smaller layer often lies within the fluctuations obtained from multiple MCMC runs (see Additional file 1-Sect. 2). This reasoning suggests that the clustering of nodes would be dominated by the Text layer or, if the Text layer is excluded, by the Hyperlink layer, which dominates over the Metadata layer. However, we will see below that there are still significant and meaningful differences in the clustering of nodes obtained using different combinations of layers. This happens because the inference problem remains non-trivial: the DL landscape contains many distinct states with similar DL values, so that even small effects due to the H and M layers can affect the outcome.

Table 2 Description length for each combination of layers in the multilayer stochastic block model. We compute the average description length (DL), Eq. (7), for each model class alongside the standard deviation over multiple MCMC runs. We also report the minimum DL (MDL) over all runs. The DL of the Text layer exceeds that of the Hyperlink and Metadata layers by several orders of magnitude, thus contributing the most to the Hyperlink + Text model

3.2 Qualitative comparison of groups of documents

Community detection methods aim to find the partition of the nodes that best captures the structure of the network in a meaningful way whilst being robust to noise [12, 21]. We thus evaluate the different models by comparing the resulting partitions of documents [35]. Specifically, we fit the Hyperlink, Text, and Hyperlink + Text models and, for each model, obtain a point estimate of the best partition by combining multiple samples from the posterior \(P(\boldsymbol{b}|\boldsymbol{A})\), which makes use of the different parts of the posterior distribution. We then project the group membership onto the Hyperlink layer (which contains only document nodes) and retrieve the consensus partition alongside the uncertainty of the partition [41] (see Appendix 2 for details).
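The snippet below illustrates the general idea with a simple consensus estimator: each posterior sample is relabelled to maximise its node-wise overlap with a reference partition, and the consensus is a node-wise majority vote. This is a minimal sketch of the concept, not necessarily the exact estimator defined in Appendix 2, and the disagreement fraction it returns is only in the spirit of the uncertainty σ of Eq. (15).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align(ref, part):
    """Relabel `part` so that its node-wise overlap with `ref` is maximal."""
    B = int(max(ref.max(), part.max())) + 1
    contingency = np.zeros((B, B), dtype=int)         # counts of (ref label, part label) pairs
    np.add.at(contingency, (ref, part), 1)
    rows, cols = linear_sum_assignment(-contingency)  # maximum-weight label matching
    mapping = np.empty(B, dtype=int)
    mapping[cols] = rows
    return mapping[part]

def consensus(partitions):
    """Node-wise majority vote over label-aligned posterior samples."""
    ref = partitions[0]
    aligned = np.array([align(ref, p) for p in partitions])
    cons = np.array([np.bincount(col).argmax() for col in aligned.T])
    disagreement = (aligned != cons).mean()            # simple uncertainty proxy
    return cons, disagreement

# Example: three samples of a two-group partition over six documents.
samples = [np.array([0, 0, 0, 1, 1, 1]),
           np.array([1, 1, 1, 0, 0, 0]),   # identical partition, labels swapped
           np.array([0, 0, 1, 1, 1, 1])]
print(consensus(samples))
```

The same kind of label matching underlies the maximum partition overlap used to compare models in Sect. 3.3.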

Our results are shown in Fig. 3 and reveal that our model is successful in retrieving different meaningful groupings of the articles depending on the available data (i.e. the layers included in the model). We first notice that the classification of articles made by users—panel (a), Wikipedia labels—groups articles in Mathematics and Biology that are strongly linked with each other (through hyperlinks), whereas Physics articles appear intertwined between them. When we infer the partition of nodes based only on the hyperlink network—panel (b), Hyperlink model—our model finds 2 groups and is quite confident about them (the uncertainty is zero, \(\sigma = 0\)). This partition resembles the partition based on Wikipedia labels. When the documents are partitioned based on their text—panel (c), Text model—a richer picture emerges. There is a large community that closely resembles the set of documents classified as Biology and one of the communities obtained using the Hyperlink layer. However, the remaining documents (most of the Mathematics and Physics articles) are now grouped in 4 categories (i.e., 5 communities in total), which are still linked to each other but more loosely than before (even though Fig. 3 shows the Hyperlink network, the hyperlinks were not used to group documents in panels (a) and (c)). Finally, when hyperlinks and text are used simultaneously—panel (d)—4 communities are found, which resemble the previous ones but also show important distinctions. This demonstrates that even though the Text layer dominates the description length, there are noticeable differences in the inferred partitions when hyperlinks are used in addition to text for clustering documents.

Figure 3

Different models lead to different partitions of Wikipedia articles into communities. The network corresponds to Wikipedia articles (nodes) and hyperlinks (edges). The colour of the nodes corresponds to groups of document nodes: (a) from the Wikipedia labels (with annotations Mathematics, Physics, and Biology); (b)–(d) communities found from our model using different datasets (layers) and the consensus partition (see App. 2), where σ—defined in Eq. (15)—quantifies the uncertainty of the reported communities (\(0 \le \sigma \le 1\))

We now argue that the more nuanced classification of documents obtained with the Text and Hyperlink + Text models is qualitatively meaningful. For example, we can see a cluster of 5 (Physics) nodes in the bottom left of the Hyperlink model that was not identified as a separate group there but is now picked up in the Text and Hyperlink + Text models. This cluster of nodes includes Wikipedia articles on the Josephson effect, macroscopic quantum phenomena, magnetic flux quantum, macroscopic quantum self-trapping, and quantum tunnelling. Even more strikingly, at the bottom of the network there is a lone (Physics) green node surrounded by (Biology) red nodes, which corresponds to the Wikipedia article on isotopic labelling (a technique at the intersection of Physics and Biology). In traditional community detection methods, which use link information as an indicator of groups, such a node would be placed in the community of its surrounding neighbours. However, in the Hyperlink + Text model we are able to detect the uniqueness of such a node.

3.3 Quantitative comparison between different models

In the example discussed above it was clear that the different models yielded different yet related partitions of Wikipedia articles. In order to quantify the similarity of the results of the different models, we performed a systematic comparison of the partitions generated by multiple runs of each model and computed their similarities using the maximum partition overlap (Fig. 4, see Appendix 2 for details). The results show that the partitions generated by the Hyperlink + Text model are most similar to those of the Text model. Similar results are obtained in our alternative datasets—see Additional file 1-Sect. 1—and using the normalised mutual information (NMI) as an alternative similarity measure—see Additional file 1-Sect. 3.

Figure 4

Maximum partition overlap of the consensus partitions between the model classes. The average and standard deviations of the maximum partition overlap between and within different models

We also compare the Hyperlink and Hyperlink + Text models in terms of their ability to predict missing edges [27, 42] (see Appendix 3 for details on our method). We found that the Hyperlink + Text model has an area-under-the-curve (AUC) score of \(0.63 \pm 0.06\) (average ± standard deviation), whereas the Hyperlink model has \(0.54 \pm 0.02\), with the difference being statistically significant (\(p=0.0013\), using a two-sample t-test). This confirms that the multilayer approach proposed here is successful in retrieving existing relationships that are missed in the network-only approach.
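The evaluation protocol can be sketched as follows: a fraction of hyperlink edges is held out, the model is refitted on the remaining data, and the held-out edges are scored against an equal number of randomly drawn non-edges. In the sketch below, edge_score(u, v) is a hypothetical scoring function standing in for the posterior edge probabilities of the fitted model (see Appendix 3 for the actual procedure).

```python
import random
from sklearn.metrics import roc_auc_score

def link_prediction_auc(doc_nodes, observed_edges, edge_score, frac_heldout=0.1, seed=0):
    """AUC for recovering held-out hyperlinks against an equal number of non-edges.
    `doc_nodes` is a list of document nodes; `edge_score(u, v)` is assumed to come
    from a model refitted WITHOUT the held-out edges."""
    rng = random.Random(seed)
    edges = list(observed_edges)
    heldout = rng.sample(edges, max(1, int(frac_heldout * len(edges))))   # positives
    edge_set = set(edges)
    negatives = []
    while len(negatives) < len(heldout):                                  # random non-edges
        u, v = rng.sample(doc_nodes, 2)
        if (u, v) not in edge_set:
            negatives.append((u, v))
    y_true = [1] * len(heldout) + [0] * len(negatives)
    y_score = [edge_score(u, v) for (u, v) in heldout + negatives]
    return roc_auc_score(y_true, y_score)
```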

3.4 Lack of balance in the hyperlink-text model

The results of the previous sections are strongly influenced by the lack of balance in the Hyperlink + Text model, as discussed in Sect. 2. To further illustrate this point, here we artificially reduce the imbalance of the multilayer network by keeping only a fraction μ of word tokens before fitting the Hyperlink + Text model. We expect that, as we increase the fraction of words μ, the Text layer will increasingly dominate the inference. This expectation is confirmed in Fig. 5, which shows that for \(\mu \geq 0.6\) the partition overlap of the μ-Hyperlink + Text model is statistically indistinguishable from the partition overlap obtained using the Text-only model. That is, the Text layer dominates the inference in the μ-Hyperlink + Text model for \(\mu \geq 0.6\). However, as discussed above, the effect of the Hyperlink layer can still lead to a different consensus partition.
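One simple way to implement this thinning of the Text layer, using the (document, word type, count) convention of the earlier sketches, is shown below; the function name is our own, and the full analysis pipeline is available in Ref. [28].

```python
import random

def subsample_text_layer(text_edges, mu, seed=0):
    """Keep each word token independently with probability mu, i.e. remove a
    random fraction 1 - mu of the tokens (binomial thinning of each multi-edge).
    `text_edges` is a list of (document, word_type, count) entries."""
    rng = random.Random(seed)
    thinned = []
    for doc, word, count in text_edges:
        kept = sum(rng.random() < mu for _ in range(count))
        if kept > 0:
            thinned.append((doc, word, kept))
    return thinned

# Example: keep ~60% of the tokens (mu = 0.6) before refitting the Hyperlink + Text model.
print(subsample_text_layer([("doc1", "quantum", 5), ("doc2", "cell", 3)], mu=0.6))
```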

Figure 5

Text layer determines the partitions obtained in the multilayer model (Hyperlink + Text). Similarity (overlap of consensus partitions) between the Hyperlink partition and the μ-Hyperlink + Text partition as a function of the subsampling parameter μ, where \(\mu =0\) (\(\mu =1\)) corresponds to the case with all (none) of the word tokens removed from the Hyperlink + Text model. For a given value of μ, a random fraction \(1-\mu \) of the words was removed and the Hyperlink + Text model was then fitted for multiple iterations. The consensus partition of the Hyperlink + Text model was then computed, along with its partition overlap with the Hyperlink model. A stronger subsampling of the text (i.e. smaller values of μ) results in a higher overlap between the consensus partitions of the Hyperlink + Text and Hyperlink models

3.5 Topic modelling: groups of words

Since our approach provides a clustering of all nodes, we not only group documents but also words. The groups of word types can be interpreted as the topics of the documents linked to them, showing that our framework simultaneously solves the traditional problem of topic modeling [14, 23]. Below we show the topics obtained in our Wikipedia dataset, as an example of our generic topic-modelling methodology.

In the consensus partition of the Hyperlink + Text network (see Fig. 3) we found 12 topics (groups of word types). The most frequent words in each of these topics are shown in Table 3. Qualitatively, we see that topics are often composed of semantically related words, e.g. topics 1 and 3 contain a large number of keywords associated with Biology, whilst topics 5 and 10 contain a large amount of jargon related to Physics.

Table 3 Groups of word types as topics. Upper table: the 20 most frequent words in the 12 topics (word groups) found in the consensus partition of the Hyperlink + Text model. Lower table: the topic proportion (8) of the four groups of documents

We now discuss the topical composition of (groups of) documents. Let \(T=B_{V}\) be the number of topics and \(B_{D}\) the number of document groups; then the mixture proportion of topic \(t =1,\ldots,T\) in document group \(i =1,\ldots,B_{D}\) is given by

$$\begin{aligned} f_{i}^{t} = \frac{n_{i}^{t}}{\sum_{t'=1}^{T}n_{i}^{t'}}, \end{aligned}$$
(8)

where \(n_{i}^{t}\) is the number of word tokens in topic t that appear in documents belonging to document group i. The results obtained for the four document groups are shown at the bottom of Table 3. Interestingly, topic 4 cannot be identified with any specific group of documents. This suggests that the words in this topic are similar to so-called stopwords, a pre-defined set of common words considered uninformative, which are typically removed from the corpus before a model is fitted in order to improve the results [43]. This is consistent with the finding of Ref. [15] that SBMs applied to word-document networks are able to automatically filter stopwords by grouping them into a “topic” that is well connected to all documents. Our findings suggest that the same is true for multilayer models and that our approach is robust against the presence of stopwords. In fact, this stopword topic is responsible for a large fraction (40%) of the topic proportion in all groups of documents. The underlying reason for this is the higher frequency of these words, which (due to Zipf’s law) dominate the weights of the topic mixtures [44]. To overcome this feature, and to assess the over- or under-representation of topics more rigorously, we account for the overall frequency of occurrence of words in topic t as

$$\begin{aligned} \bigl\langle f^{t} \bigr\rangle = \frac{\sum_{i=1}^{B_{D}}n_{i}^{t}}{\sum_{t'=1}^{T}\sum_{j=1}^{B_{D}} n_{j}^{t'}}, \end{aligned}$$
(9)

and define the normalised value of the mixture proportion of topic t in document group i as

$$\begin{aligned} \tau _{i}^{t} = \frac{f_{i}^{t} - \langle f^{t} \rangle }{\langle f^{t} \rangle }. \end{aligned}$$
(10)

This normalised measure has an intuitive interpretation: \(\tau _{i}^{t} > 0\) (\(\tau _{i}^{t} < 0\)) implies that topic t is over-represented (under-represented) in document group i. In Fig. 6, we show \(\tau _{i}^{t} \) for the 12 topics and the 4 document groups, providing a much clearer view of the connection between topics and groups of documents. For example, we see that document group 2 (articles labelled as Physics) has a large over-representation of topic 10, which corresponds to the Physics topic, whilst this topic is under-represented in the group of articles labelled as Biology. Looking at the model’s newly proposed document group (group 4), we see that it has an over-representation of topics 7 and 5 (and, to a lesser extent, of topics 2, 9, and 11), confirming its hybrid character.
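Given the matrix of token counts per document group and topic, Eqs. (8)–(10) amount to a few array operations; the sketch below uses toy counts for illustration only.

```python
import numpy as np

# n[i, t] = number of word tokens of topic t appearing in document group i,
# following the notation of Eqs. (8)-(10).  Toy values for illustration only.
n = np.array([[40, 10,  5],
              [ 5, 30, 15],
              [10, 10, 40]])

f = n / n.sum(axis=1, keepdims=True)      # Eq. (8): topic mixture of each document group
f_mean = n.sum(axis=0) / n.sum()          # Eq. (9): overall frequency of each topic
tau = (f - f_mean) / f_mean               # Eq. (10): normalised over-/under-representation

print(np.round(tau, 2))                   # tau > 0.2 (< -0.2): over- (under-) represented, cf. Fig. 6
```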

Figure 6

Normalised contribution of topics to the groups of documents. The normalised measure (10) was computed for all 4 document groups and 12 word groups (topics). We set a threshold of \(\tau _{i}^{t} \geq 0.2\) (\(\tau _{i}^{t} \leq -0.2\)) to define whether a topic is over- (under-) represented in a document group

4 Discussion and conclusions

In this paper, we introduced and explored a formal methodology that combines multiple data types (e.g., text, metadata, links) to perform the common text-analysis tasks of clustering and inferring latent relationships between documents. The main theoretical advantage of our methodology is that it incorporates all the different types of data into a single, consistent, statistical model. Our approach is based on an extension of multilayer Stochastic Block Models, which have been used previously to find communities in (sparse) complex networks and are used here to perform text analysis (see Refs. [3, 18] for alternative uses of SBMs for topic modelling). On the one hand, our method extends community-detection methods to the analysis of text in the presence of multiple data types, our main findings being that (i) universal statistical properties of texts lead to different link densities in the different layers of the network, and (ii) the word layer plays a dominant role in the inference of partitions. On the other hand, our method can be viewed as a generalized topic-modelling method that incorporates metadata and hyperlinks, labels the communities of documents by examining the proportions of topics, and builds on the previous connections between SBMs and Latent Dirichlet Allocation [15, 20].

Our investigations on four different datasets show consistent results that reveal the potential and the limitations of our approach. Our most important finding is that our methodology succeeds in combining the multiple data types (e.g., a text layer), leading to more nuanced communities of documents and an increased ability to predict missing links. On the practical side, the lack of balance between the different layers poses challenges for evaluating the contributions of the different layers, because the description length obtained in the inference process is dominated by the text layer and the variations within the (Monte Carlo) inference process become larger than the contribution of the other layers. This suggests further investigation of the role of unbalanced layers in multilayer networks, and of how to deal with them within the proposed framework, as an important step towards expanding the success of complex-network methods to other classes of relevant datasets.