Abstract
This paper revisits the conception of intelligence and understanding embodied in the Turing Test. It argues that a simple system of meaning relations, drawn from words/lexical items in a natural language and framed in terms of syntax-free relations in linguistic texts, can help ground linguistic inferences in a manner that can be taken to constitute 'understanding' in a mechanized system. Understanding in this case is a matter of running through the relevant inferences that meaning relations allow for; some of these inferences are plain deductions, while others can serve as abductions. Understanding in terms of meaning relations also supervenes on linguistic syntax, in that such understanding cannot simply be reduced to syntactic relations. The present approach to meaning and understanding thus shows that this is one way, if not the only way, of (re)framing Alan Turing's original insight into the nature of thinking in computing systems.
Availability of Data and Material
Not applicable.
Code Availability
Not applicable.
Notes
Exploring the validity or, for that matter, invalidity of the Turing Test is not the goal of this paper, and hence that question is outside the scope of the present discussion. We may note, however, that the Turing Test has targeted the understanding of natural language rather than some other cognitive task as the marker of thinking. This is what seems very relevant to the reformulation of the Turing Test in terms of the capacity to construct meaning relations to be discussed in the next section.
This contrasts with the familiar hypothesis that lexical items are the atomic elements that can be shared among humans, other animals and perhaps machines, whereas the structures built out of lexical items, crossing the boundary between lexical items and other functional items (for example, prepositions, tense markers), are unique to humans (see Miyagawa et al. 2014). It needs to be recognized that lexical items, when taken to be atomic elements as part of a formal system, are actually conceptually empty minimal items in their formal characterization. This allows lexical items to be shared among humans and other organisms, but keeps the structures built from them from being so shared.
Note that this notion of relation differs considerably from the relations that can be constructed, as in model-theoretic syntax, for nodes in a tree (such as precedence or dominance relations) and for categories such as NP (Noun Phrase), VP (Verb Phrase), S (Sentence), etc., which are properties of nodes (see Pullum 2013 for details). In fact, the relations R1, …, Rk, Rk+1, …, R∞ encompass many dimensions (such as string adjacency, precedence, dominance, parent-of relations, etc.) in terms of which linguistic constructions can be characterized.
Significantly, this account differs from frame semantics (Fillmore 1976) or scripts (Schank and Abelson 1977) because the approach here is much more granular. For instance, a meaning relation between 'a' and 'river' in 'a beautifully painted picture of the river' cannot easily be captured by a syntactic unit or by a frame (or script), say, 'being a river', which would end up mapping the whole noun phrase and its grammatical function to the frame, whereas 'a' and 'river' are actually discontinuous frame elements (or script elements). Besides, frames may themselves participate in compositional operations in a construction through inheritance of information from daughter signs in local trees to mother signs (see Sag 2012).
It may also be observed that the freely variable structures even in idioms such as 'take (something) for granted' or 'cut (something) to the bone' (with the freely variable parts given in parentheses) are not illicit syntactic categories or units (as they are all NPs), while the fixed expressions may be productively illicit in the language ('for granted' is possible, but 'for accepted' is probably not).
Croft shows that a language such as Kilivila has a fixed order of NPs after a verb, and hence their relations to each other do not help decipher the syntactic roles of the arguments. Rather, the syntactic roles of the arguments are mapped onto their participant roles.
References
Baggini J (2009) Painting the bigger picture. The Philosopher’s Magazine 8:37–39
Barker C, Jacobson P (2007) Direct compositionality. Oxford University Press, New York
Berwick RC, Chomsky N (2017) Why only us: Language and evolution. MIT Press, Cambridge, MA
Boden M (1988) Computer models of the mind. Cambridge University Press, Cambridge
Boeckx C (2015) Elementary syntactic structures: Prospects of a feature-free syntax. Cambridge University Press, Cambridge
Brandom R (1994) Making it explicit. Harvard University Press, Cambridge, MA
Brandom R (2007) Inferentialism and some of its challenges. Philosophy and Phenomenological Research 74(3):651–676
Bresnan J (2001) Lexical functional syntax. Blackwell, Oxford
Chomsky N (1995) The minimalist program. MIT Press, Cambridge, MA
Chomsky N (2000) New horizons in the study of language and mind. Cambridge University Press, Cambridge
Clark S (2015) Vector space models of lexical meaning. In: Lappin S, Fox C (eds) Handbook of contemporary semantic theory. Blackwell, Oxford, pp 493–522
Colston HL (2019) How language makes meaning. Cambridge University Press, Cambridge
Croft W (2001) Radical construction grammar. Oxford University Press, New York
Culicover PW, Jackendoff R (2005) Simpler syntax. Oxford University Press, New York
Culicover PW (2013) Explaining syntax: Representations, structures, and computation. Oxford University Press, New York
Dalrymple M, Kaplan RM, King TH (2016) Economy of Expression as a principle of syntax. Journal of Language Modelling 3(2):377–412
Dalrymple M, Findlay JY (2019) Lexical functional grammar. In: Kertész A, Moravcsik E, Rákosi C (eds) Current approaches to syntax: A comparative handbook. Mouton de Gruyter, Berlin, pp 123–154
Dennett D (2013) Intuition pumps and other tools for thinking. W.W. Norton and Co, New York
Descombes V (2010) Mind’s provisions: A critique of cognitivism. Princeton University Press, Princeton
Douven I (2017) What is inference to the best explanation? And why should we care? In: Poston T, McCain K (eds) Best explanations: New essays on inference to the best explanation. Oxford University Press, Oxford, pp 7–24
Dowty D (1979) Word meaning and Montague grammar: The semantics of verbs and times in generative semantics and in Montague’s PTQ. Springer, Dordrecht
Duffley P (2020) Linguistic meaning meets linguistic form. Oxford University Press, New York
Fillmore CJ (1976) Frame semantics and the nature of language. Annals of the New York Academy of Sciences: Conference on the Origin and Development of Language and Speech 280:20–32
Fodor J (2000) The mind doesn’t work that way: The scope and limits of computational psychology. MIT Press, Cambridge, MA
Fuller TJ (2019) Cognitive architecture, holistic inference and Bayesian networks. Mind Mach 29:373–395
Goldberg A (2006) Constructions at work: The nature of generalization in language. Oxford University Press, New York
Goldberg A (2019) Explain me this: Creativity, competition, and the partial productivity of constructions. Princeton University Press, Princeton
Harnad S (1989) Minds, machines and Searle. J Exp Theor Artif Intell 1:5–25
Haugeland J (2002) Syntax, semantics, physics. In: Preston JM, Bishop MA (eds) Views into the Chinese room: New essays on Searle and artificial intelligence. Oxford University Press, New York, pp 379–392
Hodges W (2012) Formalizing the relationship between meaning and syntax. In: Werning M, Hinzen W, Machery E (eds) The Oxford handbook of compositionality. Oxford University Press, New York, pp 245–261
Hofstadter DR (2016) Dull rigid human meets ace mechanical translator. In: Cooper SB, Hodges A (eds) The once and future Turing. Cambridge University Press, Cambridge
Horsman C, Stepney S, Wagner RC, Kendon V (2014) When does a physical system compute? Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 470(2169):20140182
Jackendoff R (2002) Foundations of language: Brain, meaning, grammar, evolution. Oxford University Press, New York
Jackendoff R (2020) The texture of the lexicon. Oxford University Press, New York
Johnson K (2015) Notational variants and invariance in linguistics. Mind and Language 30(2):162–186
Johnson-Laird PN (2013) The mental models perspective. In: Reisberg D (ed) The Oxford handbook of cognitive psychology. Oxford University Press, New York, pp 650–667
Langacker R (1999) Grammar and conceptualization. Mouton de Gruyter, Berlin
Lipton P (2004) Inference to the best explanation, 2nd edn. Routledge, London
McCain K, Poston T (2014) Why explanatoriness is evidentially relevant. Thought 3:145–153
Milkowski M (2017) Why think that the brain is not a computer? APA Newsletter on Philosophy and Computers 16(2):22–28
Miyagawa S, Ojima S, Berwick RC, Okanoya K (2014) The integration hypothesis of human language evolution and the nature of contemporary languages. Frontiers in Psychology 5:1–6
Mondal P (2017) Natural language and possible minds: How language uncovers the cognitive landscape of nature. Brill, Leiden/Boston
Mondal P (2018) Lexicon, meaning relations, and semantic networks. In: Proceedings of the 2nd workshop on natural language for artificial intelligence (NL4AI 2018), Trento, Italy, pp 40−52
Nefdt RM (2020) A puzzle concerning compositionality in machines. Mind Mach 30:47–75
Osborne T (2019) Dependency grammar. In: Kertész A, Moravcsik E, Rákosi C (eds) Current approaches to syntax. Mouton de Gruyter, Berlin, pp 361–388
Peirce CS (1998) The essential Peirce: Selected philosophical writings, vol 2 (1893–1913). Peirce Edition Project (eds). Indiana University Press, Bloomington
Pelletier FJ (2017) Compositionality and concepts – A perspective from formal semantics and philosophy of language. In: Hampton JA, Winter Y (eds) Language, cognition, and mind, vol 3. Compositionality and concepts in linguistics and psychology. Springer, New York, pp 31–94
Pietroski P (2018) Conjoining meanings: Semantics without truth values. Oxford University Press, New York
Politzer G (2007) Reasoning with conditionals. Topoi 26:79–95
Pullum GK (2013) The central question in comparative syntactic metatheory. Mind and Language 28(4):492–521
Pustejovsky J (1995) The generative lexicon. MIT Press, Cambridge, MA
Pustejovsky J (2012) Type theory and lexical decomposition. In: Pustejovsky J et al (eds) Advances in generative lexicon theory. Springer, Berlin, pp 9–38
Quine W (1953) From a logical point of view. Harvard University Press, Cambridge, MA
Ramchand G (2019) Event structure and verbal decomposition. In: Truswell R (ed) The Oxford handbook of event structure. Oxford University Press, New York
Rapaport WJ (2000) How to pass a Turing test: Syntactic semantics, natural-language understanding, and first-person cognition. J Logic Lang Inform 9:467–490
Rapaport WJ (2002) Holism, conceptual-role semantics, and syntactic semantics. Minds and Machines 12:3–59
Sag I (2012) Sign-based construction grammar. In: Boas HC, Sag IA (eds) Sign-based construction grammar. CSLI Publications, Stanford, pp 69–202
Schank RC, Abelson RP (1977) Scripts, plans, goals, and understanding. Lawrence Erlbaum, Hillsdale
Scheffel J (2020) On the solvability of the mind–body problem. Axiomathes 30:289–312
Schweizer P (2012) The externalist foundations of a truly total Turing Test. Minds and Machines 22:191–212
Searle J (1980) Minds, brains and programs. Behavioral and Brain Sciences 3:417–457
Steedman M (2019) Combinatory categorial grammar. In: Kertész A, Moravcsik E, Rákosi C (eds) Current approaches to syntax. Mouton de Gruyter, Berlin, pp 389–420
Steedman M (2020) A formal universal of natural language grammar. Language 96:1–43
Szangolies J (2020) The abstraction/representation account of computation and subjective experience. Minds and Machines 30:259–299
Terrace HS (2019) Why chimpanzees can't learn language and only humans can. Columbia University Press, New York
Tesnière L (1959) Éléments de syntaxe structurale. Klincksieck, Paris
Turing A (1950) Computing machinery and intelligence. Mind 59:433–460
Turney PD, Pantel P (2010) From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research 37:141–188
Warwick K, Shah H (2016) Passing the Turing test does not mean the end of humanity. Cognitive Computation 8:409–419
Acknowledgements
I’m grateful to the anonymous reviewer and the editor for their insightful comments on the article and for their judicious assessment of the argumentation it deploys.
Funding
Not applicable.
Author information
Contributions
Not applicable.
Ethics declarations
Conflict of interest
The author declares no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Mondal, P. Meaning Relations, Syntax, and Understanding. Axiomathes 32, 459–475 (2022). https://doi.org/10.1007/s10516-021-09534-x