Editorial

Advances in the Neurocognition of Music and Language

1 Otto Hahn Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
2 Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, 8050 Zurich, Switzerland
* Authors to whom correspondence should be addressed.
Brain Sci. 2020, 10(8), 509; https://doi.org/10.3390/brainsci10080509
Submission received: 27 July 2020 / Accepted: 30 July 2020 / Published: 2 August 2020

Abstract

Neurocomparative music and language research has seen major advances over the past two decades. The goal of this Special Issue “Advances in the Neurocognition of Music and Language” was to showcase the multiple neural analogies between musical and linguistic information processing and their entwined organization in human perception and cognition, and to infer the applicability of the combined knowledge in pedagogy and therapy. Here, we summarize the main insights provided by the contributions and integrate them into current frameworks of rhythm processing, neuronal entrainment, predictive coding and cognitive control.

The scholarly fascination with the relationship between music and language (M&L) is as old as antiquity. To this day, continuous methodological progress and, in part, radical conceptual shifts have paved the way for new directions of research. In the 1990s, technological revolutions in neuroimaging revealed partial neural overlap between the two domains [1], despite dissociable clinical deficits in M&L [2]. Together with the known benefits of music for speech and language functions [3], this nurtured the idea that—once we understand what holds M&L together at their biological core—music interventions could constitute a bridge to prevent, alleviate, or even reverse speech and language disorders [4,5].
This Special Issue took stock of recent advances in the neurocognition of M&L to examine the current status of this vein of research. Sixteen research papers and reviews from 48 experts in linguistics, musicology, cognitive neuroscience, biological psychology and educational sciences demonstrate that research has been active on all fronts. As we will see, the studies follow two burgeoning trends in M&L research: First, they focus on common auditory processing of temporal regularities [6,7,8,9] that are thought to promote higher-level linguistic functions [8,10,11,12,13,14], possibly via mechanisms of neuronal entrainment [15]. Second, they explore top-down modulations of common auditory processes [16,17,18] by domain-general cognitive [19,20] and motor functions in both perception and production [21]. These topics were addressed using a broad toolkit of well-designed behavioral and computational approaches combined with functional magnetic resonance imaging (fMRI), near-infrared spectroscopy (NIRS) or electroencephalography (EEG) in different cohorts of participants.
The starting point for most of the included studies was that speech and music have similar acoustic [9,18] and structural features [6,7,8,13,15,16,17,19]. As argued in the review article by Reybrouck and Podlipniak [9], some of these sound features and their common preconceptual affective meanings may even reflect joint evolutionary roots of M&L that still prevail today, for example, in musical expressivity and speech prosody. Notably, a feature that was particularly central to half of all contributions is the temporal structure of M&L, i.e., the patterning of strong and weak syllables or beats that makes up rhythm, meter and prosodic stress [6,7,8,10,11,12,13,15].
The rhythmic patterning of both speech and music has been proposed to draw on domain-general abilities which are required to perceive and process temporal features of sound [22,23]. Accordingly, three studies present data in line with common rhythm processing resources in M&L. First, Lagrois et al. [6] found that individuals with beat finding deficits in music—so-called “beat-deaf” individuals—also show deficits in synchronizing their taps with speech rhythm, and more generally, in regular tapping without external rhythms. The authors argue that this pattern of deficits may arise from a basic deficiency in timekeeping mechanisms that affects rhythm perception across domains. Second, Boll-Avetisyan et al. [7] used multiple regression analyses and found that musical rhythm perception abilities predicted rhythmic grouping preferences in speech in adults with and without dyslexia. Similarly, in an EEG study, Fotidzis et al. [8] found that musical rhythmic skills predicted children’s neural sensitivity to mismatches between the speech rhythm of a written word and an auditory rhythm. Interestingly, both studies further report connections between rhythm perception in music and reading skills. Hence, these findings not only support a common cross-domain basis of rhythmic processes in M&L but also suggest that deficient or enhanced rhythmic abilities may have an impact on higher-level language functions.
Potential downstream effects of general rhythmic processing skills on higher-order linguistic abilities are currently being extensively investigated, particularly in the context of first language acquisition (for a recent review, see [24]). Accordingly, several studies in this Special Issue probe whether the acoustic properties of speech rhythm can serve as scaffolding for the acquisition of stable phonological representations [12], for the segmentation of words from continuous speech and the construction of lexical representations [13], for the recognition of syntactic units in sentences [10] and for reading [7,8,11]. For example, Richards and Goswami [10] explain that prosody, particularly the hierarchical structuring of stressed and unstressed syllables, provides reliable cues to the syntactic structure of speech [25] and can hence facilitate learning of syntactic language organization [26]. Early perturbations at this rhythm-syntax interface may, in turn, hinder normal language acquisition, such as in developmental language disorders (DLD). The authors found that children with DLD indeed had difficulties in noticing conflicting alignments between prosodic and syntactic boundaries in rhythmic children’s stories, and that these deficits coincided with enhanced perceptual thresholds for acoustic cues to prosodic stress. With these data at hand, Richards and Goswami support the assertion that basic processing of rhythmic-prosodic cues may be a key foundation onto which higher aspects of language are scaffolded during development.
In a similar vein, rhythmic-prosodic sensitivity has been proposed as a fundamental stepping stone into literacy [27,28,29] as well as an implicit driver of skilled reading [30]. Breen et al. [11] and Fotidzis et al. [8] present converging EEG evidence for implicit rhythmic processing in silent reading of words in literate adults and children. In particular, they both found a robust fronto-central negativity in response to stress patterns in written words that mismatched the rhythm of silently read limericks [11] or auditory click trains [8]. These results suggest that rhythmic context—no matter whether implicit in written text or explicit in sound—can induce expectations of prosodic word stress that facilitate visual word recognition and reading speed.
Current neurophysiological models assume that speech and music processing, as well as the catalytic role of rhythm in language development, are based on the synchronization of internal neuronal oscillations with temporally regular stimuli [27,31,32,33]. The review article by Myers et al. [15] summarizes the current state of knowledge about neuronal entrainment to the speech envelope, i.e., its quasi-regular amplitude fluctuations over time. This neural tracking occurs simultaneously at multiple time scales corresponding to the rates of phonemes, syllables and phrases [34,35]. In this context, Myers and colleagues argue that the slowest rate—corresponding to prosodic stress and rhythmic pacing in the delta range (~2 Hz)—constitutes a particularly strong source of neuronal entrainment which is crucial for normal language development. Correspondingly, atypical entrainment to rhythmic prosodic cues due to deficits in fine-grained auditory perception may constitute a risk for the development of speech and language disorders such as DLD and developmental dyslexia (DD) [24,36].
If rhythmic processing disabilities are indeed the basis of speech and language disorders, then useful avenues for prevention and intervention could lie in (i) increasing the regularity of stimuli, or (ii) strengthening individual rhythmic abilities with the aim of improving neuronal entrainment [37,38,39]. Several studies in this Special Issue deal directly or indirectly with these ideas, either by exploring processing benefits of rhythmically highly regular stimuli such as songs [13,14] or poems [10,11], or by discussing potential protective or curative effects of music-based rhythm training on language skills [7,8,10,12,15,16]. Even though the results are promising, they also raise a number of questions. For example, using EEG, Snijders et al. [13] found that 10-month-old infants were able to segment words in natural children’s songs. However, they did equally well in infant-directed speech. Similarly, Rossi et al. [14] found no differences between speech and songs in a combined EEG-NIRS study on semantic processing in healthy adults. Taken together, these data suggest that the presentation of verbal material as song may not be sufficient to enhance vocabulary learning or language comprehension in healthy individuals (but see [40]). The longitudinal study of Frey et al. [12] zoomed in on training effects. Using EEG, the authors demonstrate that 6 months of music but not painting training positively influenced the pre-attentive processing of voice onset time in speech in children with DD. However, no effects were found in behavioral measures of phonological processing or reading ability. This raises the questions of how much training is required and which aspects the training should include to translate to behavior, both inside and outside the laboratory setting.
Clearly, the identification of optimal interventions is a joint mission for future research that goes hand in hand with the development of solid conceptual [41,42] and neurophysiological frameworks [27] to identify the key variables underlying the amelioration of speech and language processing through rhythm and music [43,44,45,46].
The studies of this Special Issue introduced so far primarily focused on links between M&L that are bottom-up driven by shared acoustic features between the two domains. The remaining articles took a different approach and examined domain-general top-down modulations of M&L from both the perspectives of perception and production. Four articles illustrate the continuous interaction between bottom-up and top-down processes. In line with significant trends in predictive coding [47,48], Daikoku [16] reviews the conceptual, computational, experimental and neural similarities of statistical learning in M&L acquisition and perception with links to rehabilitation. Bidirectional interactions between perceptual (bottom-up) and predictive (top-down) processes are a core feature in the framework of statistical learning. Experimental evidence for the top-down adjustment of M&L perception is provided by the behavioral modelling study of Silva et al. [17] who found that listeners placed break patterns in ambiguous speech-song stimuli differently depending on whether they believed they were listening to speech prosody or contemporary music. Similarly, the fMRI study of Tsai and Li [18] found that the strength with which an ambiguous stimulus was perceived as song rather than speech depended not only on the acoustics of the stimulus itself, but also on the sound category of the preceding stimulus. Finally, Mathias et al. [21] show with EEG that pianists gradually anticipated the sounds of their actions during music production, similar to mechanisms of auditory feedback control during speech production [49,50]. Taken together, these studies suggest that the listening context, one’s own motor plans as well as statistical and domain-specific expectations may influence the top-down anticipation and perception of acoustic features in speech and music.
Finally, the last two articles focus on the relevance of domain-general cognitive functions for M&L interactions. Lee et al. [19] argue that well-known syntax interference effects between M&L [51,52] may emerge from shared domain-general attentional resources. Accordingly, they show that the top-down allocation of attention similarly modulated EEG markers of syntax processing in M&L, particularly at late processing stages associated with cognitive reanalysis and integration. In addition, Christiner and Reiterer [20] found that links between musical aptitude and phonetic language abilities in pre-school children (i.e., imitation of foreign speech) were mediated by domain-general working memory resources. While none of these studies denies auditory-perceptual connections between M&L, they remind us that what we have seen so far is perhaps only the tip of the iceberg, with more complex entwinements still to be discovered.
To sum up, this Special Issue indicates that questions have shifted from mapping to mechanisms. Initial descriptions of M&L analogies have turned into a determined search for explanations of M&L links in human neurophysiology, general perceptual principles and cognitive computations. Accordingly, the obvious next questions are of a mechanistic nature: Can musical training enhance the neuronal entrainment to speech (and vice versa)? How exactly does entrainment promote higher-order linguistic functions? How can working memory and attention be included in the equation? These are only a few questions, but we are confident that the joint efforts of this multidisciplinary field of research will be rewarded by a better understanding of the M&L interface and the necessary tools to optimize interventions for music- and language-related dysfunctions.

Author Contributions

D.S. and S.E. edited this Special Issue, D.S. wrote the original draft of the editorial, S.E. and D.S. revised the editorial. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Patel, A.D. Music, Language, and the Brain; Oxford University Press: New York, NY, USA, 2008.
  2. Peretz, I.; Coltheart, M. Modularity of Music Processing. Nat. Neurosci. 2003, 6, 688–691.
  3. Sparks, R.; Helm, N.; Albert, M. Aphasia Rehabilitation Resulting from Melodic Intonation Therapy. Cortex 1974, 10, 303–316.
  4. Patel, A.D. Why Would Musical Training Benefit the Neural Encoding of Speech? The OPERA Hypothesis. Front. Psychol. 2011, 2, 142.
  5. Schön, D.; Tillmann, B. Short- and Long-Term Rhythmic Interventions: Perspectives for Language Rehabilitation. Ann. N. Y. Acad. Sci. 2015, 1337, 32–39.
  6. Lagrois, M.-E.; Palmer, C.; Peretz, I. Poor Synchronization to Musical Beat Generalizes to Speech. Brain Sci. 2019, 9, 157.
  7. Boll-Avetisyan, N.; Bhatara, A.; Höhle, B. Processing of Rhythm in Speech and Music in Adult Dyslexia. Brain Sci. 2020, 10, 261.
  8. Fotidzis, T.S.; Moon, H.; Steele, J.R.; Magne, C.L. Cross-Modal Priming Effect of Rhythm on Visual Word Recognition and Its Relationships to Music Aptitude and Reading Achievement. Brain Sci. 2018, 8, 210.
  9. Reybrouck, M.; Podlipniak, P. Preconceptual Spectral and Temporal Cues as Source of Meaning in Speech and Music. Brain Sci. 2019, 9, 53.
  10. Richards, S.; Goswami, U. Impaired Recognition of Metrical and Syntactic Boundaries in Children with Developmental Language Disorders. Brain Sci. 2019, 9, 33.
  11. Breen, M.; Fitzroy, A.B.; Oraa Ali, M. Event-Related Potential Evidence of Implicit Metric Structure during Silent Reading. Brain Sci. 2019, 9, 192.
  12. Frey, A.; François, C.; Chobert, J.; Velay, J.-L.; Habib, M.; Besson, M. Music Training Positively Influences the Preattentive Perception of Voice Onset Time in Children with Dyslexia: A Longitudinal Study. Brain Sci. 2019, 9, 91.
  13. Snijders, T.M.; Benders, T.; Fikkert, P. Infants Segment Words from Songs—An EEG Study. Brain Sci. 2020, 10, 39.
  14. Rossi, S.; Gugler, M.F.; Rungger, M.; Galvan, O.; Zorowka, P.G.; Seebacher, J. How the Brain Understands Spoken and Sung Sentences. Brain Sci. 2020, 10, 36.
  15. Myers, B.R.; Lense, M.D.; Gordon, R.L. Pushing the Envelope: Developments in Neural Entrainment to Speech and the Biological Underpinnings of Prosody Perception. Brain Sci. 2019, 9, 70.
  16. Daikoku, T. Neurophysiological Markers of Statistical Learning in Music and Language: Hierarchy, Entropy, and Uncertainty. Brain Sci. 2018, 8, 114.
  17. Silva, S.; Dias, C.; Castro, S.L. Domain-Specific Expectations in Music Segmentation. Brain Sci. 2019, 9, 169.
  18. Tsai, C.G.; Li, C.W. Is It Speech or Song? Effect of Melody Priming on Pitch Perception of Modified Mandarin Speech. Brain Sci. 2019, 9, 286.
  19. Lee, D.J.; Jung, H.; Loui, P. Attention Modulates Electrophysiological Responses to Simultaneous Music and Language Syntax Processing. Brain Sci. 2019, 9, 305.
  20. Christiner, M.; Reiterer, S.M. Early Influence of Musical Abilities and Working Memory on Speech Imitation Abilities: Study with Pre-School Children. Brain Sci. 2018, 8, 169.
  21. Mathias, B.; Gehring, W.J.; Palmer, C. Electrical Brain Responses Reveal Sequential Constraints on Planning during Music Performance. Brain Sci. 2019, 9, 25.
  22. Kotz, S.A.; Ravignani, A.; Fitch, W.T. The Evolution of Rhythm Processing. Trends Cogn. Sci. 2018, 22, 896–910.
  23. Jones, M.R. Time Will Tell: A Theory of Dynamic Attending; Oxford University Press: New York, NY, USA, 2019.
  24. Ladányi, E.; Persici, V.; Fiveash, A.; Tillmann, B.; Gordon, R.L. Is Atypical Rhythm a Risk Factor for Developmental Speech and Language Disorders? Wiley Interdiscip. Rev. Cogn. Sci. 2020, e1528.
  25. Selkirk, E. Phonology and Syntax: The Relation between Sound and Structure; MIT Press: Cambridge, MA, USA, 1984.
  26. Cumming, R.; Wilson, A.; Goswami, U. Basic Auditory Processing and Sensitivity to Prosodic Structure in Children with Specific Language Impairments: A New Look at a Perceptual Hypothesis. Front. Psychol. 2015, 6, 972.
  27. Goswami, U. A Neural Oscillations Perspective on Phonological Development and Phonological Processing in Developmental Dyslexia. Lang. Linguist. Compass 2019, 13, e12328.
  28. Tierney, A.; Kraus, N. Music Training for the Development of Reading Skills. Prog. Brain Res. 2013, 207, 209–241.
  29. Wade-Woolley, L.; Heggi, L. The Contributions of Prosodic and Phonological Awareness to Reading: A Review. In Linguistic Rhythm and Literacy; Thomson, J., Jarmulowicz, L., Eds.; John Benjamins Publishing Company: Amsterdam, The Netherlands, 2016; pp. 3–24.
  30. Breen, M. Empirical Investigations of Implicit Prosody. In Explicit and Implicit Prosody in Sentence Processing: Studies in Honor of Janet Dean Fodor; Frazier, L., Gibson, E., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 177–192.
  31. Poeppel, D.; Assaneo, M.F. Speech Rhythms and Their Neural Foundations. Nat. Rev. Neurosci. 2020, 21, 322–334.
  32. Lakatos, P.; Gross, J.; Thut, G. A New Unifying Account of the Roles of Neuronal Entrainment. Curr. Biol. 2019, 29, R890–R905.
  33. Large, E.W.; Herrera, J.A.; Velasco, M.J. Neural Networks for Beat Perception in Musical Rhythm. Front. Syst. Neurosci. 2015, 9, 159.
  34. Gross, J.; Hoogenboom, N.; Thut, G.; Schyns, P.; Panzeri, S.; Belin, P.; Garrod, S. Speech Rhythms and Multiplexed Oscillatory Sensory Coding in the Human Brain. PLoS Biol. 2013, 11, e1001752.
  35. Giraud, A.L.; Poeppel, D. Cortical Oscillations and Speech Processing: Emerging Computational Principles and Operations. Nat. Neurosci. 2012, 15, 511–517.
  36. Goswami, U. A Temporal Sampling Framework for Developmental Dyslexia. Trends Cogn. Sci. 2011, 15, 3–10.
  37. Vanden Bosch der Nederlanden, C.M.; Joanisse, M.F.; Grahn, J.A. Music as a Scaffold for Listening to Speech: Better Neural Phase-Locking to Song than Speech. Neuroimage 2020, 214, 116767.
  38. Harding, E.E.; Sammler, D.; Henry, M.J.; Large, E.; Kotz, S.A. Cortical Tracking of Rhythm in Music and Speech. Neuroimage 2019, 185, 96–101.
  39. Doelling, K.B.; Poeppel, D. Cortical Entrainment to Music and Its Modulation by Expertise. Proc. Natl. Acad. Sci. USA 2015, 112, E6233–E6242.
  40. François, C.; Teixidó, M.; Takerkart, S.; Agut, T.; Bosch, L.; Rodriguez-Fornells, A. Enhanced Neonatal Brain Responses to Sung Streams Predict Vocabulary Outcomes by Age 18 Months. Sci. Rep. 2017, 7, 12451.
  41. Tierney, A.; Kraus, N. Auditory-Motor Entrainment and Phonological Skills: Precise Auditory Timing Hypothesis (PATH). Front. Hum. Neurosci. 2014, 8, 949.
  42. Ozernov-Palchik, O.; Patel, A.D. Musical Rhythm and Reading Development: Does Beat Processing Matter? Ann. N. Y. Acad. Sci. 2018, 1423, 166–175.
  43. Besson, M.; Chobert, J.; Marie, C. Transfer of Training between Music and Speech: Common Processing, Attention, and Memory. Front. Psychol. 2011, 2, 94.
  44. Virtala, P.; Partanen, E. Can Very Early Music Interventions Promote At-Risk Infants’ Development? Ann. N. Y. Acad. Sci. 2018, 1423, 92–101.
  45. Elmer, S.; Dittinger, E.; Besson, M. One Step Beyond: Musical Expertise and Word Learning. In The Oxford Handbook of Voice Perception; Frühholz, S., Belin, P., Eds.; Oxford University Press: Oxford, UK, 2019; pp. 209–234.
  46. Torppa, R.; Huotilainen, M. Why and How Music Can Be Used to Rehabilitate and Develop Speech and Language Skills in Hearing-Impaired Children. Hear. Res. 2019, 380, 108–122.
  47. Koelsch, S.; Vuust, P.; Friston, K. Predictive Processes and the Peculiar Case of Music. Trends Cogn. Sci. 2019, 23, 63–77.
  48. Erickson, L.C.; Thiessen, E.D. Statistical Learning of Language: Theory, Validity, and Predictions of a Statistical Learning Account of Language Acquisition. Dev. Rev. 2015, 37, 66–108.
  49. Palmer, C.; Pfordresher, P.Q. Incremental Planning in Sequence Production. Psychol. Rev. 2003, 110, 683–712.
  50. Hickok, G. Computational Neuroanatomy of Speech Production. Nat. Rev. Neurosci. 2012, 13, 135–145.
  51. Koelsch, S.; Gunter, T.C.; Wittfoth, M.; Sammler, D. Interaction between Syntax Processing in Language and in Music: An ERP Study. J. Cogn. Neurosci. 2005, 17, 1565–1577.
  52. Slevc, L.R.; Rosenberg, J.C.; Patel, A.D. Making Psycholinguistics Musical: Self-Paced Reading Time Evidence for Shared Processing of Linguistic and Musical Syntax. Psychon. Bull. Rev. 2009, 16, 374–381.

Share and Cite

Sammler, D.; Elmer, S. Advances in the Neurocognition of Music and Language. Brain Sci. 2020, 10, 509. https://doi.org/10.3390/brainsci10080509
