
Meaning Relations, Syntax, and Understanding

  • Original Paper
  • Published:
Axiomathes

Abstract

This paper revisits the conception of intelligence and understanding embodied in the Turing Test. It argues that a simple system of meaning relations, drawn from words/lexical items in a natural language and framed in terms of syntax-free relations in linguistic texts, can help ground linguistic inferences in a manner that can be taken to be 'understanding' in a mechanized system. Understanding in this case is a matter of running through the relevant inferences that meaning relations allow for; some of these inferences are plain deductions, and some can serve as abductions. Understanding in terms of meaning relations also supervenes on linguistic syntax, because such understanding cannot simply be reduced to syntactic relations. The current approach to meaning and understanding thus shows that this is one way, if not the only way, of (re)framing Alan Turing's original insight into the nature of thinking in computing systems.
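To make the core idea concrete, here is a minimal sketch, assuming a toy representation in which meaning relations are labelled pairs of lexical items; the class name, relation labels, and inference rules are illustrative assumptions, not the paper's own formalism. 'Understanding' is then modelled, very roughly, as running through the deductive and abductive inferences those relations license.

```python
from collections import defaultdict

class MeaningRelations:
    """Toy store of labelled meaning relations between lexical items (illustrative only)."""

    def __init__(self):
        # relation label -> set of (item_a, item_b) pairs
        self.relations = defaultdict(set)

    def add(self, label, a, b):
        self.relations[label].add((a, b))

    def deduce(self, label):
        """Plain deduction: transitively chain pairs under one relation label."""
        pairs = set(self.relations[label])
        changed = True
        while changed:
            changed = False
            for a, b in list(pairs):
                for c, d in list(pairs):
                    if b == c and (a, d) not in pairs:
                        pairs.add((a, d))
                        changed = True
        return pairs

    def abduce(self, label, observed):
        """Abduction: candidate items that would explain the observed item."""
        return {a for a, b in self.relations[label] if b == observed}

# Hypothetical relations (labels invented for illustration)
mr = MeaningRelations()
mr.add("part_of", "bank", "river")
mr.add("part_of", "river", "landscape")
mr.add("determines", "a", "river")
print(mr.deduce("part_of"))              # chaining adds ('bank', 'landscape')
print(mr.abduce("determines", "river"))  # {'a'} as a candidate explanation
```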

Availability of Data and Material

Not applicable.

Code Availability

Not applicable.

Notes

  1. Exploring the validity or, for that matter, invalidity of the Turing Test is not the goal of this paper, and hence that question is outside the scope of the present discussion. We may note, however, that the Turing Test has targeted the understanding of natural language rather than some other cognitive task as the marker of thinking. This is what seems very relevant to the reformulation of the Turing Test in terms of the capacity to construct meaning relations to be discussed in the next section.

  2. This contrasts with the familiar hypothesis that lexical items are the atomic elements that can be shared among humans, other animals and perhaps machines, whereas the structures built out of lexical items crossing the boundary between lexical items and other functional items (for example, prepositions, tense markers) are unique to humans (see Miyagawa et al. 2014). It needs to be recognized that lexical items, when taken to be atomic elements as part of a formal system, are actually conceptually empty minimal items in their formal characterization. This allows lexical items to be shared among humans and other organisms, but keeps the structures built from them from being so shared.

  3. Note that this notion of relation is quite different from the relations that can be constructed, as in model-theoretic syntax, for nodes in a tree (such as precedence or dominance relations) and for categories such as NP (Noun Phrase), VP (Verb Phrase), S (Sentence), etc., which are properties of nodes (see Pullum 2013 for details). In fact, the relations R1, …, Rk, Rk+1, …, R encompass many dimensions (such as string adjacency, precedence, dominance, and parent-of relations) in terms of which linguistic constructions can be characterized; a minimal sketch of such tree-configurational relations is given after these notes.

  4. Significantly, this account differs from frame semantics (Fillmore 1976) or scripts (Schank and Abelson 1977) because the approach here is much more granular. For instance, a meaning relation between ‘a’ and ‘river’ in ‘a beautifully painted picture of the river’ cannot be easily captured by a syntactic unit or by a frame (or by a script), say, 'being a river', which will end up mapping the whole noun phrase and its grammatical function to the frame, whereas 'a' and 'river' are actually discontinuous frame elements (or script elements). Besides, frames may themselves participate in compositional operations in a construction through inheritance of information from daughter signs in local trees to mother signs (see Sag 2012).

  5. It may also be observed that the freely variable structures even in idioms such as 'take (something) for granted' or 'cut (something) to the bone' (with the freely variable parts given in parentheses) are not illicit syntactic categories or units (as they are all NPs), while the fixed expressions may resist productive variation in the language ('for granted' is possible, but 'for accepted' is probably not).

  6. Croft shows that a language such as Kilivila has a fixed order of NPs after a verb, and hence their relations to each other do not help decipher the syntactic roles of the arguments. Rather, the syntactic roles of the arguments are mapped onto their participant roles.

  7. It is plausible that the probabilistic character of abductions is compatible with Bayesian reasoning (see Lipton 2004; McCain and Poston 2014), but that question lies outside the scope of this paper.
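As referenced in note 3, the following is a minimal sketch, assuming a toy constituency tree, of the tree-configurational relations mentioned there (parent-of, dominance, precedence); the Node class, the helper functions, and the example tree are illustrative assumptions, not machinery from the paper.

```python
class Node:
    """A node in a toy constituency tree."""
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

def parent_of(node, child):
    # Immediate parent-of relation.
    return child in node.children

def dominates(node, target):
    # A node dominates every node reachable below it.
    return any(c is target or dominates(c, target) for c in node.children)

def leaves(node):
    # Left-to-right terminal yield of the tree.
    return [node] if not node.children else [l for c in node.children for l in leaves(c)]

def precedes(root, a, b):
    # Terminal a precedes terminal b if it occurs earlier in the yield.
    order = leaves(root)
    return order.index(a) < order.index(b)

# Toy tree for 'the river flows': S -> NP VP
the, river, flows = Node("the"), Node("river"), Node("flows")
s = Node("S", [Node("NP", [the, river]), Node("VP", [flows])])
print(parent_of(s, the))          # False: 'the' is dominated by S but is not its child
print(dominates(s, the))          # True
print(precedes(s, river, flows))  # True
```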

References

  • Baggini J (2009) Painting the bigger picture. The Philosopher’s Magazine 8:37–39

  • Barker C, Jacobson P (2007) Direct compositionality. Oxford University Press, New York

  • Berwick RC, Chomsky N (2017) Why only us: Language and evolution. MIT Press, Cambridge, MA

  • Boden M (1988) Computer models of the mind. Cambridge University Press, Cambridge

  • Boeckx C (2015) Elementary syntactic structures: Prospects of a feature-free syntax. Cambridge University Press, Cambridge

  • Brandom R (1994) Making it explicit. Harvard University Press, Cambridge, MA

  • Brandom R (2007) Inferentialism and some of its challenges. Philosophy and Phenomenological Research 74(3):651–676

  • Bresnan J (2001) Lexical functional syntax. Blackwell, Oxford

  • Chomsky N (1995) The minimalist program. MIT Press, Cambridge, MA

  • Chomsky N (2000) New horizons in the study of language and mind. Cambridge University Press, Cambridge

  • Clark S (2015) Vector space models of lexical meaning. In: Lappin S, Fox C (eds) Handbook of contemporary semantic theory. Blackwell, Oxford, pp 493–522

  • Colston HL (2019) How language makes meaning. Cambridge University Press, Cambridge

  • Croft W (2001) Radical construction grammar. Oxford University Press, New York

  • Culicover PW, Jackendoff R (2005) Simpler syntax. Oxford University Press, New York

  • Culicover PW (2013) Explaining syntax: Representations, structures, and computation. Oxford University Press, New York

  • Dalrymple M, Kaplan RM, King TH (2016) Economy of Expression as a principle of syntax. Journal of Language Modelling 3(2):377–412

  • Dalrymple M, Findlay JY (2019) Lexical functional grammar. In: Kertész A, Moravcsik E, Rákosi C (eds) Current approaches to syntax: A comparative handbook. Mouton de Gruyter, Berlin, pp 123–154

  • Dennett D (2013) Intuition pumps and other tools for thinking. W.W. Norton and Co, New York

  • Descombes V (2010) Mind’s provisions: A critique of cognitivism. Princeton University Press, Princeton

  • Douven I (2017) What is inference to the best explanation? And why should we care? In: Poston T, McCain K (eds) Best explanations: New essays on inference to the best explanation. Oxford University Press, Oxford, pp 7–24

  • Dowty D (1979) Word meaning and Montague grammar: The semantics of verbs and times in generative semantics and in Montague’s PTQ. Springer, Dordrecht

  • Duffley P (2020) Linguistic meaning meets linguistic form. Oxford University Press, New York

  • Fillmore CJ (1976) Frame semantics and the nature of language. Annals of the New York Academy of Sciences: Conference on the Origin and Development of Language and Speech 280:20–32

  • Fodor J (2000) The mind doesn’t work that way: The scope and limits of computational psychology. MIT Press, Cambridge, MA

  • Fuller TJ (2019) Cognitive architecture, holistic inference and Bayesian networks. Minds and Machines 29:373–395

  • Goldberg A (2006) Constructions at work: The nature of generalization in language. Oxford University Press, New York

  • Goldberg A (2019) Explain me this: Creativity, competition, and the partial productivity of constructions. Princeton University Press, Princeton

  • Harnad S (1989) Minds, machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence 1:5–25

  • Haugeland J (2002) Syntax, semantics, physics. In: Preston JM, Bishop MA (eds) Views into the Chinese room: New essays on Searle and artificial intelligence. Oxford University Press, New York, pp 379–392

  • Hodges W (2012) Formalizing the relationship between meaning and syntax. In: Werning M, Hinzen W, Machery E (eds) The Oxford handbook of compositionality. Oxford University Press, New York, pp 245–261

  • Hofstadter DR (2016) Dull rigid human meets ace mechanical translator. In: Cooper SB, Hodges A (eds) The once and future Turing. Cambridge University Press, Cambridge

  • Horsman C, Stepney S, Wagner RC, Kendon V (2014) When does a physical system compute? Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 470(2169):20140182

  • Jackendoff R (2002) Foundations of language: Brain, meaning, grammar, evolution. Oxford University Press, New York

  • Jackendoff R (2020) The texture of the lexicon. Oxford University Press, New York

  • Johnson K (2015) Notational variants and invariance in linguistics. Mind and Language 30(2):162–186

  • Johnson-Laird PN (2013) The mental models perspective. In: Reisberg D (ed) The Oxford handbook of cognitive psychology. Oxford University Press, New York, pp 650–667

  • Langacker R (1999) Grammar and conceptualization. Mouton de Gruyter, Berlin

  • Lipton P (2004) Inference to the best explanation, 2nd edn. Routledge, London

  • McCain K, Poston T (2014) Why explanatoriness is evidentially relevant. Thought 3:145–153

  • Milkowski M (2017) Why think that the brain is not a computer? APA Newsletter on Philosophy and Computers 16(2):22–28

  • Miyagawa S, Ojima S, Berwick RC, Okanoya K (2014) The integration hypothesis of human language evolution and the nature of contemporary languages. Frontiers in Psychology 5:1–6

  • Mondal P (2017) Natural language and possible minds: How language uncovers the cognitive landscape of nature. Brill, Leiden/Boston

  • Mondal P (2018) Lexicon, meaning relations, and semantic networks. In: Proceedings of the 2nd workshop on natural language for artificial intelligence (NL4AI 2018), Trento, Italy, pp 40–52

  • Nefdt RM (2020) A puzzle concerning compositionality in machines. Minds and Machines 30:47–75

  • Osborne T (2019) Dependency grammar. In: Kertész A, Moravcsik E, Rákosi C (eds) Current approaches to syntax. Mouton de Gruyter, Berlin, pp 361–388

  • Peirce CS (1998) The essential Peirce: Selected philosophical writings, vol 2 (1893–1913). Peirce Edition Project (ed). Indiana University Press, Bloomington

  • Pelletier FJ (2017) Compositionality and concepts – A perspective from formal semantics and philosophy of language. In: Hampton JA, Winter Y (eds) Compositionality and concepts in linguistics and psychology. Language, cognition, and mind, vol 3. Springer, New York, pp 31–94

  • Pietroski P (2018) Conjoining meanings: Semantics without truth values. Oxford University Press, New York

  • Politzer G (2007) Reasoning with conditionals. Topoi 26:79–95

  • Pullum GK (2013) The central question in comparative syntactic metatheory. Mind and Language 28(4):492–521

  • Pustejovsky J (1995) The generative lexicon. MIT Press, Cambridge, MA

  • Pustejovsky J (2012) Type theory and lexical decomposition. In: Pustejovsky J et al (eds) Advances in generative lexicon theory. Springer, Berlin, pp 9–38

  • Quine W (1953) From a logical point of view. Harvard University Press, Cambridge, MA

  • Ramchand G (2019) Event structure and verbal decomposition. In: Truswell R (ed) The Oxford handbook of event structure. Oxford University Press, New York

  • Rapaport WJ (2000) How to pass a Turing test: Syntactic semantics, natural-language understanding, and first-person cognition. Journal of Logic, Language and Information 9:467–490

  • Rapaport WJ (2002) Holism, conceptual-role semantics, and syntactic semantics. Minds and Machines 12:3–59

  • Sag I (2012) Sign-based construction grammar. In: Boas HC, Sag IA (eds) Sign-based construction grammar. CSLI Publications, Stanford, pp 69–202

  • Schank RC, Abelson RP (1977) Scripts, plans, goals, and understanding. Lawrence Erlbaum, Hillsdale

  • Scheffel J (2020) On the solvability of the mind–body problem. Axiomathes 30:289–312

  • Schweizer P (2012) The externalist foundations of a truly total Turing Test. Minds and Machines 22:191–212

  • Searle J (1980) Minds, brains and programs. Behavioral and Brain Sciences 3:417–457

  • Steedman M (2019) Combinatory categorial grammar. In: Kertész A, Moravcsik E, Rákosi C (eds) Current approaches to syntax. Mouton de Gruyter, Berlin, pp 389–420

  • Steedman M (2020) A formal universal of natural language grammar. Language 96:1–43

  • Szangolies J (2020) The abstraction/representation account of computation and subjective experience. Minds and Machines 30:259–299

  • Terrace HS (2019) Why chimpanzees can’t learn language and only humans can. Columbia University Press, New York

  • Tesnière L (1959) Éléments de syntaxe structurale. Klincksieck, Paris

  • Turing A (1950) Computing machinery and intelligence. Mind 59:433–460

  • Turney PD, Pantel P (2010) From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research 37:141–188

  • Warwick K, Shah H (2016) Passing the Turing test does not mean the end of humanity. Cognitive Computation 8:409–419


Acknowledgements

I’m grateful to the anonymous reviewer and the editor for their insightful comments on the article and for their judicious assessment of its argumentation.

Funding

Not applicable.

Author information

Contributions

Not applicable.

Corresponding author

Correspondence to Prakash Mondal.

Ethics declarations

Conflict of interest

The author declares no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Mondal, P. Meaning Relations, Syntax, and Understanding. Axiomathes 32, 459–475 (2022). https://doi.org/10.1007/s10516-021-09534-x
