A diagnostic unified classification model for classifying multi-sized and multi-modal brain graphs using graph alignment

https://doi.org/10.1016/j.jneumeth.2020.109014

Highlights

  • We propose a unified brain graph classification model that can classify multi-modal, multi-sized brain graphs.

  • We design a graph alignment strategy to a fixed graph template.

  • Our unified classification model is diagnostic while being agnostic to the connectomic data source and size.

  • Our proposed method is scalable and generalizable to different diagnostic clinical frameworks.

Abstract

Background

The presence of multimodal brain graphs derived from different neuroimaging modalities is inarguably one of the most critical challenges in building unified classification models that can be trained and tested on any brain graph, regardless of its size and the modality it was derived from.

Existing methods

One solution is to learn a model for each modality independently, which is cumbersome and becomes more time-consuming as the number of modalities increases. Another traditional solution is to build a model that takes multimodal brain graphs as input for the target prediction task; however, this is only applicable to datasets in which all samples have all neuroimaging modalities available.

New method

In this paper, we propose to build a unified brain graph classification model trained on unpaired multimodal brain graphs, which can classify any brain graph of any size. This is enabled by incorporating a graph alignment step in which all multi-modal graphs, despite their different sizes and heterogeneous distributions, are mapped to a common template graph. Specifically, we design a graph alignment strategy to the target fixed-size template and then apply linear discriminant analysis (LDA) to the aligned graphs as a supervised dimensionality reduction technique for the target classification task.

Results

We tested our method on unpaired autistic and healthy brain connectomes derived from functional and morphological MRI datasets (two modalities).

Conclusion

Our results showed that our unified model not only holds great promise for solving such a challenging problem but also achieves performance comparable to models trained on each modality independently.

Introduction

Multimodal magnetic resonance imaging (MRI) has introduced exciting new opportunities for understanding the brain as a complex system of interacting units in both health and disease. Based on MRI data, the brain can be represented as a connectome (i.e., a graph), in which the connections between different anatomical regions of interest (ROIs) are modeled. A large body of research has shown how the brain connectome is altered by neurological and neuropsychiatric disorders, such as dementia and autism spectrum disorder (ASD) (Fornito et al., 2015, Van den Heuvel and Sporns, 2019). In network neuroscience (Bassett and Sporns, 2017), the brain connectome, typically regarded as a graph encoding the relationships between pairs of ROIs, presents a macroscale representation of the interactions between anatomical regions at the functional, structural, or morphological level, derived from different MRI modalities (e.g., functional resting-state MRI or T1-weighted MRI) (Bassett and Sporns, 2017). The bulk of connectome analysis studies used a single modality to design unimodal machine learning frameworks for examining and classifying unimodal brain connectomes (Bi et al., 2018, Iidaka, 2014, Chen et al., 2011a, Chen et al., 2011b, Du et al., 2018, Cheng and Liu, 2017, Andersen et al., 2015, Zhang et al., 2015). For instance, a recent Kaggle competition paper (Bilgen et al., 2020) investigated the potential of cortical morphological networks in distinguishing between autistic and healthy subjects using different machine learning frameworks. Notably, single-modality brain connectivity presents a reductionist view of the complexity of the brain construct, whereas multi-modal connectomic data provide complementary and more integral representations of the brain, which can be leveraged for designing learners trained on paired samples (i.e., each sample should have the same number of connectomic modalities) (Mesrob et al., 2012, Calhoun and Sui, 2016).

Although multi-modal brain connectomes present a rich dataset for designing computer-aided diagnosis models, their variation in resolution (i.e., connectome size, or number of brain ROIs), their heterogeneity, and the lack of sample parity across modalities present several barriers to building learning-based classification models. In fact, it is not possible to train a conventional classification model (e.g., support vector machines) on such multimodal datasets, which are incoherent in terms of the number of features across samples. As an alternative, one can train a unimodal classifier on each modality independently. However, this overlooks the complementarity and richness of multimodal connectomes and becomes more time-consuming as the number of modalities increases. Another solution is to combine the different multimodal connectomes in the given population by simple concatenation and then train a single classifier on the concatenated feature vectors (Dai and He, 2014, Schouten et al., 2016, Raeper et al., 2018, Graa and Rekik, 2019, Corps and Rekik, 2019). However, such an approach requires paired samples across modalities, i.e., all MRI modalities should be available for each subject in the population. This wastefully discards existing incomplete multi-modal MRI data as well as independent datasets collected from different sources with varying numbers of samples and available modalities.

To address these limitations, we propose a unified multimodal classification (UMC) model for classifying multi-sized and multi-modal brain graphs using graph alignment (Fig. 1). By aligning brain graphs from different modalities to a common template graph, we aim not only to eliminate the size problem encountered in training a single model, but also to avoid learning multiple classification models for multi-modal brain graphs. Our method does not stipulate sample parity across modalities, since the multi-modal data are collectively mapped to fixed-size graphs via a shared template regardless of the number of available multimodal brain graphs (i.e., each subject can have one or more modalities).

Drawing inspiration from recent work (Bai et al., 2019) on transforming arbitrary-sized graphs drawn from a single modality (i.e., source) into fixed-sized aligned grid structures for training deep learning models, we extend it to handle multi-source heterogeneous connectomic datasets using four main steps: (i) extraction of node feature vector embeddings using two different strategies: depth-based representations of each node in all brain graphs based on their local neighborhoods (Bai et al., 2019), and GraphWave node embeddings via diffusion wavelets (Donnat et al., 2018); (ii) clustering of the feature vectors into nt clusters, each of which represents a node in the template graph to estimate; (iii) alignment of each graph from all modalities to the target template graph; and (iv) training a linear classifier on the fixed-size aligned brain graphs. To the best of our knowledge, this study is the first to use a unified model for brain disorder classification on multi-modal data using depth-based alignment, and it presents several contributions. First, the proposed method can use any brain graph of any size, derived from any neuroimaging modality, for brain disease classification. Second, we do not need to build multiple classifiers for multi-modal data, but a single unified classifier trained on aligned graphs. Third, the multi-modal data do not have to be paired: each available modality may include a different number of brain networks collected from different cohorts. Fourth, our model can also handle the data imbalance problem, as the small sample size of a class (e.g., disordered) in one modality can be compensated by the larger sample size of the same class in another modality. Our key contributions can be summarized as follows:

  • Methodological. We pioneered a unified classification model for classifying multi-modal brain graphs with varying sizes using depth-based alignment to a unified connectional template, which eliminates the need to build multiple learners for different modalities and does not require paired graphs across modalities.

  • Clinical. Our work develops the field of network neuroscience along the joint multi-source, multi-modal data analysis front by presenting a unified model for classifying brain disorders.
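The four main steps above can be sketched end-to-end. The snippet below is a minimal illustration, not the authors' implementation: the depth-based and GraphWave embeddings are replaced by a simple size-normalised k-hop connectivity statistic (a hypothetical stand-in), clustering uses k-means, and the fixed-size aligned graphs feed an LDA classifier, mirroring the supervised step; all graph sizes and labels are toy values.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def node_embeddings(A, hops=3):
    """Hypothetical stand-in for depth-based / GraphWave embeddings:
    size-normalised k-hop connectivity statistics per node."""
    n = A.shape[0]
    P = np.eye(n)
    feats = []
    for k in range(1, hops + 1):
        P = P @ A
        feats.append(P.sum(axis=1) / n ** k)
    return np.stack(feats, axis=1)          # shape (n, hops)

def align_to_template(A, emb, kmeans):
    """Assign each node to its nearest template node (cluster) and
    average-pool edge weights into a fixed n_t x n_t matrix."""
    labels = kmeans.predict(emb)
    n_t = kmeans.n_clusters
    T = np.zeros((n_t, n_t))
    C = np.zeros((n_t, n_t))
    for i in range(A.shape[0]):
        for j in range(A.shape[0]):
            T[labels[i], labels[j]] += A[i, j]
            C[labels[i], labels[j]] += 1.0
    return T / np.maximum(C, 1.0)

# Toy unpaired, multi-sized "connectomes" (sizes and labels are illustrative)
rng = np.random.default_rng(0)
graphs = [(lambda A: (A + A.T) / 2)(rng.random((n, n))) for n in (35, 116, 160, 90)]
y = np.array([0, 1, 0, 1])                  # brain states (e.g., NC vs. ASD)

embs = [node_embeddings(A) for A in graphs]
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(np.vstack(embs))

# Every graph, whatever its size, becomes an 8 x 8 aligned matrix -> 64 features
X = np.stack([align_to_template(A, e, kmeans).ravel() for A, e in zip(graphs, embs)])
clf = LinearDiscriminantAnalysis().fit(X, y)
print(X.shape, clf.predict(X).shape)
```

Because every brain graph is pooled onto the same template nodes, a single linear classifier suffices regardless of the source modality or original graph size.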

Remark 1

We set out to solve a challenging classification problem comprising multiple sub-problems. Solving each of them independently would certainly yield higher classification accuracy; however, our goal is to build a unified diagnostic framework that can be used with any modality of any size at any time point within the clinical routine. Simply put, our unified classification model is diagnostic while being agnostic to the connectomic data source and size, which makes it scalable and generalizable to different diagnostic clinical frameworks.

Remark 2

Our primary goal is to build a unified diagnostic framework that can be used with any modality of any size at any time point within the clinical routine. This design might not allow tracking the original brain connectivities between regions of interest, thereby hindering the identification of potential biomarkers (i.e., disordered brain connections). However, in this paper, we prioritize model generalizability, adaptability, and diagnostic accuracy over model interpretability and explainability.

Section snippets

Proposed method

Problem statement. Let G = {G_1, …, G_M} denote M training connectomic modalities (i.e., multimodal datasets), where each G_m = {G_m^1, …, G_m^{N_m}} denotes a set of N_m brain connectivity matrices of size n_m × n_m derived from neuroimaging modality m (e.g., functional or structural), with a set of brain states (e.g., healthy and disordered) s ∈ S to classify. Our goal is to design a unified classification model that can be trained on the dataset G supervised by the brain state labels available in all datasets, and
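The data layout in the problem statement can be made concrete with a small sketch. All sizes and counts below are illustrative placeholders, not the paper's actual datasets: two modalities with different ROI counts and unpaired sample counts, each sample carrying a brain-state label.

```python
import numpy as np

rng = np.random.default_rng(42)

# G = {G_1, ..., G_M}: M modalities; each G_m holds N_m connectivity
# matrices of modality-specific size n_m x n_m (hypothetical values).
sizes  = {1: 116, 2: 35}    # n_1, n_2: number of ROIs per modality
counts = {1: 5,   2: 3}     # N_1, N_2: sample counts need not be paired
G = {m: [rng.random((n, n)) for _ in range(counts[m])]
     for m, n in sizes.items()}

# Brain-state labels s in S (0 = healthy, 1 = disordered) per sample
S = {m: rng.integers(0, 2, counts[m]) for m in G}

print({m: (len(G[m]), G[m][0].shape) for m in G})
```

Note that no pairing across modalities is assumed: the two modalities have different numbers of samples and differently sized matrices, which is exactly the setting the unified model is designed for.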

Results

Multimodal connectomic datasets. To evaluate our method, we use M = 2 connectomic datasets:

  • F dataset. The functional brain connectivity dataset is derived from resting-state fMRI and comprises N1 = 517 subjects, with 245 autism spectrum disorder (ASD) subjects and 272 normal controls (NC), each with n1 = 116 ROIs. We used the Autism Brain Imaging Data Exchange (ABIDE) preprocessed public dataset. Several preprocessing steps were implemented by the data

Discussion

In this paper, we set out to solve a challenging neurological disorder classification problem where the input brain graphs vary in size and are drawn from different neuroimaging modalities. As a solution, we designed a unified classification model that leverages a graph-based alignment technique to a template graph of a predefined size (i.e., the number of nodes is fixed) using clustering of graph embeddings based on local node depth or diffusion energy. We evaluated our model on functional and

Conclusion

In this paper, we proposed the first-ever unified classification model for classifying multi-modal brain graphs with varying sizes using depth-based and graph-wave based alignment to a unified connectional template, which eliminates the need to build multiple learners for different modalities and does not require paired graphs across modalities. Our method can also mitigate the problem of data imbalance between classes in unimodal datasets. This work develops the field of network neuroscience

Credit author statement

Abdullah Yalcin: Methodology, Software, Formal analysis, Validation, Visualization, Writing - Original Draft

Islem Rekik: Conceptualization, Supervision, Methodology, Resources, Writing - Review and Editing, Funding acquisition

Conflict of interest

The authors declare no conflict of interest.

Acknowledgements

This work was funded by generous grants from the European H2020 Marie Sklodowska-Curie action (grant no. 101003403, http://basira-lab.com/normnets/) to I.R. and the Scientific and Technological Research Council of Turkey to I.R. under the TUBITAK 2232 Fellowship for Outstanding Researchers (no. 118C288, http://basira-lab.com/reprime/). However, all scientific contributions made in this project are owned and approved solely by the authors.

References (42)

  • X.A. Bi et al.

    Classification of autism spectrum disorder using random support vector machine cluster

    Front. Genet.

    (2018)
  • I. Bilgen et al.

    Machine Learning Methods for Brain Network Classification: Application to Autism Diagnosis Using Cortical Morphological Networks

    (2020)
  • M.M. Bronstein et al.

    Geometric deep learning: going beyond Euclidean data

    IEEE Signal Process. Mag.

    (2017)
  • V. Calhoun et al.

    Multimodal fusion of brain imaging data: a key to finding the missing link(s) in complex mental illness

    Biol. Psychiatry: Cognit. Neurosci. Neuroimaging

    (2016)
  • W. Cao et al.

    A comprehensive survey on geometric deep learning

    IEEE Access

    (2020)
  • G. Chen et al.

    Classification of Alzheimer disease, mild cognitive impairment, and normal cognitive status with large-scale network analysis based on resting-state functional MR imaging

    Radiology

    (2011)
  • R. Chen et al.

    Structural MRI in autism spectrum disorder

    Pediatr. Res.

    (2011)
  • D. Cheng et al.

    Classification of Alzheimer's Disease by Cascaded Convolutional Neural Networks Using PET Images

    (2017)
  • J. Corps et al.

    Morphological brain age prediction using multi-view brain networks derived from cortical morphology in healthy and disordered participants

    Sci. Rep.

    (2019)
  • Z. Dai et al.

    Disrupted structural and functional brain connectomes in mild cognitive impairment and Alzheimer's disease

    Neurosci. Bull.

    (2014)
  • C. Donnat et al.

    Learning Structural Node Embeddings via Diffusion Wavelets

    (2018)