Cognitive Psychology

Volume 123, December 2020, 101334

The Naïve Utility Calculus as a unified, quantitative framework for action understanding

https://doi.org/10.1016/j.cogpsych.2020.101334

Abstract

The human ability to reason about the causes behind other people’s behavior is critical for navigating the social world. Recent empirical research with both children and adults suggests that this ability is structured around an assumption that other agents act to maximize some notion of subjective utility. In this paper, we present a formal theory of this Naïve Utility Calculus as a probabilistic generative model, which highlights the role of cost and reward tradeoffs in a Bayesian framework for action-understanding. Our model predicts with quantitative accuracy how people infer agents’ subjective costs and rewards based on their observable actions. By distinguishing between desires, goals, and intentions, the model extends to complex action scenarios unfolding over space and time in scenes with multiple objects and multiple action episodes. We contrast our account with simpler model variants and a set of special-case heuristics across a wide range of action-understanding tasks: inferring costs and rewards, making confidence judgments about relative costs and rewards, combining inferences from multiple events, predicting future behavior, inferring knowledge or ignorance, and reasoning about social goals. Our work sheds light on the basic representations and computations that structure our everyday ability to make sense of and navigate the social world.

Introduction

“… a man, being just as hungry as thirsty, and placed in between food and drink, must necessarily remain where he is and starve to death.”

— Aristotle, On the Heavens 295b, c. 350 BCE

People naturally interpret each other’s behavior by attributing mental states such as beliefs, desires, and intentions. If, for instance, someone picks up their mug and immediately puts it back down, we can infer that they wanted to drink what they thought was in the mug, and that they realized that the mug was empty when they picked it up. These mental-state inferences help us explain why people act the way they do, predict what they’ll do next, and decide how to react: If we think our friend wanted coffee, we might expect her to get up and walk to the kitchen, mug in hand. If we just took the last cup, we might try to be helpful and point out we’re out of coffee as soon as we see her get up, even without direct evidence that she was, in fact, planning to get some more.

Although the ability to attribute beliefs and desires develops throughout childhood and into adolescence (Wellman, Cross, & Watson, 2001, Richardson, Lisandrelli, Riobueno-Naylor, & Saxe, 2018), its building blocks are at work from early in infancy. Before their first birthday, infants already interpret other people’s actions as meant to complete goals (Woodward, 1998, Woodward et al., 2001, Skerry et al., 2013), and they infer these goals by assuming that agents move efficiently in space (Gergely and Csibra, 2003, Gergely et al., 1995, Csibra et al., 1999, Csibra et al., 2003, Scott and Baillargeon, 2013). Yet, adult commonsense psychology goes far beyond goal-attribution.

To illustrate this, consider a simple example from a moment in childhood many of us are all too familiar with. Suppose that you are in preschool, and you and your classmate each ask the teacher for help at the same time. The teacher looks at each of you briefly and then walks towards your classmate. Your teacher’s goal is clearly to help your classmate. But there are many ways to explain the causes behind this goal. Perhaps the teacher likes your classmate better. More likely, your classmate just happened to be closer, or louder. The teacher might know that what you need can wait. Or they may be confident that you don’t need help, even if you think you do. Each of these explanations boils down to the same goal—to help your classmate—but each explanation licenses different expectations about what the teacher will do next, and it affects how we evaluate their actions. A teacher who decides not to help you based on their personal preferences, for instance, has a different moral standing than a teacher who decides not to help you because they want you to challenge yourself.

How do we represent and infer the causes behind other people’s goals? Research suggests that inferences beyond goal attribution are supported by an expectation that agents make choices by quantifying, comparing, and maximizing subjective utilities—the difference between the costs they incur and the rewards they obtain. This Naïve Utility Calculus allows us to infer the knowledge, preferences, and moral values that explain other people’s goals (Jara-Ettinger et al., 2016, Jern et al., 2017, Kleiman-Weiner et al., 2017, Lucas et al., 2014), and empirical work suggests that even young children share these expectations (Jara-Ettinger et al., 2015, Jara-Ettinger et al., 2017, Pesowski et al., 2016, Lucas et al., 2014), with some basic form of the Naïve Utility Calculus in place in infancy (Liu, Ullman, Tenenbaum, & Spelke, 2017).

Despite robust empirical support for the Naïve Utility Calculus, a number of critical questions remain open. All of these questions reflect aspects of a single overarching concern: To what extent is this “naïve utility calculus” really best thought of as a “calculus”—a coherent, unified, quantitative and rational inferential framework? We focus on three specific aspects here. First, all studies to date reveal either the agent’s costs and ask people to infer the rewards, or vice-versa. In realistic scenarios, we often know neither, so we must jointly infer the costs and rewards from how an agent acts on one or more occasions. Does the Naïve Utility Calculus represent cost-reward tradeoffs with a coherent generative model of action that can support these more complex joint inferences? Second, does the Naïve Utility Calculus operate only on coarse representations of costs and rewards, supporting only qualitative inferences, or can it also drive fine-grained quantitative inferences by tracking exact tradeoffs between costs and rewards? Finally, and most generally, is the Naïve Utility Calculus best thought of as a unified generative model of how agents act given different costs and rewards, supporting many different judgments via mechanisms of approximate probabilistic inference, or instead as a more piecemeal collection of simple, cheap heuristics that only approximate rational inferences in special cases?

We answer these questions by formalizing the Naïve Utility Calculus in a computational model that performs approximate Bayesian inferences of costs and rewards over extended sequences of actions that unfold over time and space. Our model builds on but extends substantially beyond previous qualitative formulations (Jara-Ettinger et al., 2016, Jara-Ettinger et al., 2019, Jara-Ettinger et al., 2017), as well as simpler quantitative formulations of utility-based action understanding (e.g., Baker et al., 2017; Lucas et al., 2014, Jern et al., 2017) that do not attempt to account for inferences about multiple dimensions of cost and reward, or complex actions operating over multiple spatial and temporal scales. We then present a set of quantitative experiments that test whether (1) the Naïve Utility Calculus supports joint inferences of costs and rewards from observable actions; (2) these inferences can be captured with quantitative precision; and (3) these judgments are best explained by a unified theory structured around the single assumption that agents approximately maximize utilities. Throughout these studies, we compare our full Naïve Utility Calculus model with a number of variants ablating different aspects of the model, as well as simpler accounts that make similar qualitative but different quantitative predictions. These findings provide more direct evidence for each of the model’s main components, and suggest that action-understanding in people’s intuitive psychology is structured as a coherent, causal generative model of agents’ actions and choices, rather than as a collection of special-purpose inference heuristics.

We begin by defining the notion of utility we will work with. When people observe intentional behavior, we assume that they attempt to understand it as implementing a plan intended to achieve some outcome, and that plan is chosen according to a subjective utility function

U(p, o) = R(o) − C(p).

Here U(p, o) represents the utility the agent expects from acting according to plan p to successfully reach outcome o, which in its most basic form can be expressed as the difference between R(o), the subjective reward the agent expects to receive from that outcome, and C(p), the subjective cost that the agent expects to incur in executing the plan.

At the heart of the Naïve Utility Calculus is the assumption that agents act to maximize this utility function. That is, agents decide how to act by effectively estimating the utility associated with different action plans and pursuing the one that yields the highest utility. We assume, crucially, that agents stochastically estimate their subjective utilities rather than knowing the precise values. Our model is consequently approximate and probabilistic: Agents select the action plan with the highest estimated subjective utility, but this does not always correspond to the action plan with the highest true subjective utility. Through this assumption, we can model how people infer the costs and rewards behind other people’s actions as Bayesian inference, positing the configuration of costs and rewards that explains the observed actions.
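
To make this choice model concrete, the following minimal sketch simulates a noisily-maximizing agent choosing among plans. It is an illustration of the assumption described above rather than the implementation in our codebase; the Gaussian noise on utility estimates and the names used here (choose_plan, noise) are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def choose_plan(rewards, costs, noise=0.5):
    """Pick the plan whose *estimated* utility U = R - C is highest.

    rewards[i] and costs[i] are the true subjective reward and cost of plan i.
    The agent only has noisy estimates of each plan's utility, so its choice
    need not coincide with the true utility maximizer.
    """
    true_utility = np.asarray(rewards) - np.asarray(costs)
    estimated = true_utility + rng.normal(0.0, noise, size=len(true_utility))
    return int(np.argmax(estimated))

# Two plans: a cheap path to a low-reward object (true utility 2 - 1 = 1)
# versus a costly path to a high-reward object (true utility 5 - 3 = 2).
# The agent usually, but not always, picks the higher-true-utility plan.
choices = [choose_plan([2.0, 5.0], [1.0, 3.0]) for _ in range(10000)]
print("P(choose costly, high-reward plan) ~", sum(c == 1 for c in choices) / len(choices))
```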

Here we focus on the Naïve Utility Calculus in the context of agents moving in space (such as those shown in Fig. 3) and we use concrete notions of costs and rewards that even young children can grasp (costs associated with physical actions, and rewards associated with reaching for objects or helping agents; Woodward, 1998; Liu et al., 2017; Jara-Ettinger, Floyd, Tenenbaum, & Schulz, 2017). This allows us to perform quantitative tests of our account in basic settings without relying on linguistic information about agents’ behaviors, but the inferences that we formalize can generalize to more abstract notions of cost and reward. We return to this point in the discussion.

This Naïve Utility Calculus is an account of how people intuitively make sense of other people’s behavior, and is not meant to imply any assumption that people actually compute (let alone maximize) utilities when they act. Violations of classical utility theory in human behavior are well known: People are averse to risky choices, even when they have higher expected utilities (Kahneman & Tversky, 1979); they do not update their utility estimates appropriately (Vos Savant, 1990); and some patterns of choices cannot be explained through utility functions (Allais, 1953). These and other violations of maximal expected utility decision making are important but do not in any way compete with our claims here, which are about the intuitive theory of decision making. Indeed, these failure cases in a way support our theory. The fact that nonexperts are often surprised when confronted with phenomena showing utility theory failing as a descriptive account of decision making is exactly what we would expect if our intuitive theory of others assumes that agents maximize utilities.

Having said this, we do expect that at least in the most basic cases of human action where our intuitive theories are well developed and applied on a routine basis—for instance, making sense of people reaching for objects around them, or navigating through space in their immediate environment to reach goals—people’s choices should be at least approximately utility maximizing. If utility theory were entirely inapplicable in these cases, it would not have any explanatory or predictive power as an intuitive theory. Utility theory enjoyed enormous success in classical economics precisely because it appears to explain human behavior approximately and intuitively in many basic, everyday situations (Von Neumann and Morgenstern, 1944, Brown, 1986). Our goal here is to assess formally how and whether a simple version of utility theory, embedded in a Bayesian framework, can provide a strong quantitative account of how naïve human observers infer other agents’ costs and rewards from their actions.

Before presenting the Naïve Utility Calculus model formally, we walk through a set of basic predictions and assumptions underlying the account, motivated by qualitative phenomena of action understanding that can be observed in both adults and young children. We then present the model and a number of alternatives in quantitative terms, followed by a series of behavioral experiments rigorously testing these accounts against each other.

Fig. 1 shows the basic workings of the Naïve Utility Calculus in simple situations that even infants understand (Liu & Spelke, 2017; Gergely & Csibra, 2003). If we assume that agents maximize utilities, then we must expect agents to act only when the rewards outweigh the costs (otherwise, not acting at all yields a higher utility; Fig. 1a). When agents do act, the expectation that agents maximize utilities implies that they will fulfill their goals as efficiently as possible (because lower costs yield higher utilities; Fig. 1b). Fig. 1c illustrates how the Naïve Utility Calculus supports inferences about the causes behind other people’s goals. In Fig. 1c-1 and Fig. 1c-2, both agents clearly chose the purple star over the green star. But, intuitively, the agent in Fig. 1c-1 revealed their preference more clearly than the agent in Fig. 1c-2. This is predicted by the Naïve Utility Calculus. The agent in Fig. 1c-1 incurred a higher cost to obtain the purple star, which can only be explained by positing a higher reward. By contrast, the agent in Fig. 1c-2 incurred a low cost, which is consistent with the agent having either a weak or a strong preference for the purple star. Fig. 1c-2 and Fig. 1c-3 show the opposite contrast: Although both agents chose the purple star over the green star, the agent in Fig. 1c-2 reveals a preference, while the agent in Fig. 1c-3 does not. This again is predicted by the Naïve Utility Calculus: The agent in Fig. 1c-3 may have preferred the green star, but not enough to be willing to jump over the obstacle to get it. Although the qualitative formulation of the Naïve Utility Calculus is relatively simple, a wide range of intuitions in action-understanding can be explained by it (see Jara-Ettinger et al., 2016 for a review of qualitative implications of the Naïve Utility Calculus and their relation to developmental research; see also Lucas et al., 2014, Jern and Kemp, 2011, Jern and Kemp, 2014, Jern et al., 2011, Jern et al., 2017).
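
The intuition behind Fig. 1c-1 and 1c-2 can be worked through numerically. The toy computation below is a hypothetical one-choice reduction of the scenario (the uniform prior, the Gaussian noise model, and the grid are assumptions made here, not the full spatial model): it infers the reward advantage of the purple star from the extra cost the agent incurred to reach it, and the larger the incurred cost difference, the larger the inferred reward difference.

```python
import numpy as np
from math import erf

def p_choose_purple(dR, dC, noise=1.0):
    # P(agent picks purple | reward advantage dR, extra cost dC), assuming
    # each option's utility estimate carries independent N(0, noise^2) error,
    # so the noise on the utility *difference* is N(0, 2 * noise^2).
    dU = dR - dC  # true utility advantage of purple over green
    return 0.5 * (1 + erf(dU / (2 * noise)))

dR_grid = np.linspace(-5, 5, 201)             # candidate R(purple) - R(green)
prior = np.ones_like(dR_grid) / len(dR_grid)  # uniform prior over the grid

def mean_posterior_dR(dC):
    """Posterior mean reward advantage after seeing the agent pick purple."""
    likelihood = np.array([p_choose_purple(dR, dC) for dR in dR_grid])
    posterior = prior * likelihood
    return float(dR_grid @ (posterior / posterior.sum()))

# Fig. 1c-1: purple cost the agent much more (dC = 3.0).
# Fig. 1c-2: purple cost the agent barely more (dC = 0.5).
# The costlier choice licenses a stronger inferred preference.
for dC in (3.0, 0.5):
    print(f"dC = {dC}: E[dR | chose purple] = {mean_posterior_dR(dC):.2f}")
```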

Because different people have different preferences and abilities, the Naïve Utility Calculus can only yield reasonable inferences if costs and rewards are treated as agent-dependent. This is illustrated in Fig. 1d-e. In Fig. 1d, although one agent takes a physically longer path, we do not conclude that she failed to maximize utilities. Instead, we infer that one agent finds jumping easier than walking around the wall, whereas the other agent does not. Similarly, the event in Fig. 1e cannot be explained if we assume that both agents have identical subjective rewards. Instead, by assuming that agents maximize utilities, we infer that the blue agent prefers the orange star whereas the purple agent may have chosen the green star only because it was closer (and even toddlers recognize that different agents can have different rewards; Repacholi and Gopnik, 1997, Doan et al., 2015, Ma and Xu, 2011).

As noted above, these intuitions trace back to early childhood. By age five, children’s reasoning about the costs and rewards behind other people’s goals suggests that they assume agents maximize utilities (Jara-Ettinger, Gweon, Tenenbaum, & Schulz, 2015). Moreover, when an agent fails to maximize their true utilities, children infer that the agent must have been ignorant about her own costs or rewards, suggesting that by age four we already understand that agents maximize the utilities they expect to obtain, and not the true utilities they obtain (Jara-Ettinger, Floyd, Tenenbaum, & Schulz, 2017). At an even earlier age, inferences of this kind already play a role in social evaluations. Two-year-old toddlers judge a competent agent who refuses to help more harshly than a less competent agent who also refuses to help (Jara-Ettinger et al., 2015).

The developmental studies are useful in establishing the degree to which reasoning about agents’ costs and rewards is foundational and emerges early in social cognition (providing evidence for the “naïve” in Naïve Utility Calculus). Our principal goal in this work is to develop a more quantitative computational model of the Naïve Utility Calculus and test it rigorously—thus providing evidence that the “Naïve Utility Calculus” really is best thought of as a “calculus.” We work within the framework of probabilistic generative models (Tenenbaum et al., 2011, Goodman et al., 2014), applied to how agents make decisions and plan actions. We formalize a generative model of planning that takes as its input an agent’s subjective costs and rewards, together with situational information (e.g., the location of different objects and terrains) and, through the assumption of utility maximization, determines the agent’s goals and actions (see Fig. 2). Given this generative model, an observer can apply Bayesian inference to approximately invert the planning process and infer the costs and rewards most likely underlying an agent’s behavior.

Our work builds on a number of important earlier computational proposals. A first family of models, known as inverse decision-making, has focused on formalizing how we infer preferences behind people’s choices (Lucas et al., 2014, Jern et al., 2017, Jern and Kemp, 2014). These models also work through an assumption that agents maximize utilities. However, inverse decision-making models have only been used to test preference inferences from isolated, discrete choices (e.g., choosing between eating fish or turkey; Jern et al., 2017) rather than events with complex spatiotemporal structure like the ones we study here. In addition, these models have only been tested in situations where costs are not involved and only rewards need to be inferred. By contrast, our model uses utility-based reasoning about agents with variable costs and rewards who take extended sequences of actions with multiple goals unfolding over time and space. This allows us to test the key hypotheses about the Naïve Utility Calculus that previous models could not answer: Can people perform joint inferences over costs and rewards? Are these inferences quantitative and fine-grained, in ways that respond to the precise structure of the spatial environment and the temporal sequence of actions? And are the rich patterns of inference that can be studied in these complex action settings all coherently explained by a single unified probabilistic generative model?

Our model is more closely related to a second family of models, known under the umbrella term inverse planning. These models formalize action understanding through Markov Decision Processes (Baker et al., 2009, Baker et al., 2017, Jara-Ettinger et al., xxxx, Ullman et al., 2009), which allows them to address actions with spatiotemporal structure. While our model also embodies a form of inverse planning, it differs from past models in two significant ways. First, previous models do not emphasize or attempt to explain the multiple causes behind other people’s goals. Our model, in contrast, integrates both expectations about how agents navigate towards their goals, and expectations about how agents choose which goals to pursue in the first place. We achieve this by distinguishing between agents’ desires, goals, intentions, and actions, providing a more expressive picture of how people represent other people’s minds. A second difference between our model and past inverse planning models is that our model focuses on the problem of jointly inferring agent-specific costs and rewards from observable actions. By contrast, past work treated costs as constant, observable, and uniform across agents, making those models unsuitable for testing the key hypotheses our work aims to test. Our formulation allows us to explain how people jointly infer the costs and rewards that interact to determine an actor’s goals; how people predict the ways agents will act in new situations when environmental affordances (and corresponding costs and rewards) vary; how people reason about agents who are still learning about their environments, and learning their own costs and rewards over time; and how people make social evaluations by appealing to the costs agents choose or refuse to incur when deciding whether to help another—all of which previous models are unable to handle.
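
A minimal sketch of planning in this family may help fix ideas. The toy planner below is a generic value-iteration solver for a one-dimensional world, not our actual implementation (the world layout, discount factor, and cost numbers are illustrative assumptions). It shows how agent-specific costs change which goal a utility-maximizing agent pursues in the very same environment:

```python
import numpy as np

def value_iteration(step_cost, reward, gamma=0.95, iters=500):
    """State values for an agent on a 1-D strip who may step left, step right,
    or stop. Stopping collects reward[s] at the current cell; stepping into
    cell s2 costs that agent step_cost[s2]. Different step_cost vectors give
    different agents different optimal plans in the same world."""
    n = len(step_cost)
    V = np.zeros(n)
    for _ in range(iters):
        new_V = np.empty(n)
        for s in range(n):
            options = [reward[s]]  # stop here and collect
            for s2 in (s - 1, s + 1):
                if 0 <= s2 < n:
                    options.append(-step_cost[s2] + gamma * V[s2])
            new_V[s] = max(options)
        V = new_V
    return V

def best_action(V, s, step_cost, reward, gamma=0.95):
    options = {"stop": reward[s]}
    for name, s2 in (("left", s - 1), ("right", s + 1)):
        if 0 <= s2 < len(V):
            options[name] = -step_cost[s2] + gamma * V[s2]
    return max(options, key=options.get)

# A small star (reward 2) at cell 0, a big star (reward 5) at cell 6, an
# obstacle at cell 3, and an agent starting at cell 2. An agent who finds the
# obstacle easy heads right for the big star; an agent who finds it hard
# settles for the small star; same world, different subjective costs.
reward = np.array([2.0, 0, 0, 0, 0, 0, 5.0])
easy = np.array([0.0, 1, 1, 1, 1, 1, 0])  # obstacle costs this agent little
hard = np.array([0.0, 1, 1, 6, 1, 1, 0])  # obstacle costs this agent a lot
for label, cost in (("easy-jumper", easy), ("hard-jumper", hard)):
    print(label, "->", best_action(value_iteration(cost, reward), 2, cost, reward))
```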

Our experiments were designed to test the model’s predictions about these cases, specifically in response to our three critical questions introduced earlier. To answer our first two questions—does the Naïve Utility Calculus support joint cost and reward inferences from observable actions? If so, are these fine-grained quantitative inferences that track exact tradeoffs between costs and rewards?—we presented participants with an agent acting in a novel world and asked them to jointly infer the agent’s cost and reward functions. We then compared the functions that participants provided with the ones that our model inferred (Experiment 1). We next tested whether our model continues to quantitatively predict participant judgments in situations where behavior from multiple events needs to be combined to draw the appropriate cost and reward inferences (Experiment 2). Finally, we tested whether people’s confidence in their inferences matches the confidence of our model (Experiment 3). Combined, these three experiments provide evidence that our model predicts participant judgments with fine-grained accuracy.

To answer our third question—is the Naïve Utility Calculus instantiated as a probabilistic generative model of how agents act given different costs and rewards?—we presented participants with a variety of action-understanding tasks that can be solved with the same generative model. We then compared participant judgments against the unified predictions of the Naïve Utility Calculus, and against a collection of simple special-case heuristics (of the kind sketched below). Specifically, we tested how participants predict what an agent will do next (Experiment 4), how participants infer whether an agent is knowledgeable or ignorant (Experiment 5), and how participants make simple social evaluations (Experiment 6). We find that the Naïve Utility Calculus model outperforms simple heuristics by predicting fine-grained structure in participant judgments that the heuristics predict should be noise. All experiments were run with naïve participants who had not participated in any of our previous tasks.
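
To illustrate the flavor of such heuristics, the two rules below are hypothetical single-cue stand-ins, not necessarily the exact alternatives implemented in our experiments: each reads a cost or reward estimate directly off one feature of the observed path, with no generative model behind it.

```python
def heuristic_reward(steps_toward_chosen_object):
    # Single-cue rule: the farther the agent traveled for an object, the more
    # it must be worth. Blind to the cheaper options the agent passed up.
    return float(steps_toward_chosen_object)

def heuristic_cost(steps_on_terrain, total_steps):
    # Single-cue rule: a terrain is costly to an agent to the extent the agent
    # avoided it. Blind to whether avoiding it was ever the best move.
    return 1.0 - steps_on_terrain / max(total_steps, 1)
```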

Section snippets

Computational framework

Our computational model (code available at http://www.github.com/julianje/bishop) consists of two components. The first is a generative model that, given a cost and a reward function, probabilistically produces utility-maximizing behaviors. The second is a mechanism that uses the generative model to recover the costs and rewards underlying an agent’s observable actions via Bayesian inference (or more precisely, a Monte Carlo sampling-based approximation to Bayesian inference known as likelihood weighting).
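
A minimal sketch of likelihood weighting in this setting may be useful. The toy example below is a generic illustration rather than the bishop package’s API; the act-versus-don’t-act world, the uniform priors, and the Gaussian noise model are simplifying assumptions made here. It shows how observing only that an agent acted shifts the joint posterior toward lower costs and higher rewards:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)

def p_act(reward, cost, noise=1.0):
    # P(agent acts at all): the estimated utility of acting (R - C plus
    # estimation noise) must beat the estimated utility of doing nothing (0).
    # With independent N(0, noise^2) errors on both options, this is a normal
    # CDF evaluated at (R - C) / (noise * sqrt(2)).
    return 0.5 * (1 + erf((reward - cost) / (2 * noise)))

def likelihood_weighting(acted, n=20000):
    """Approximate the joint posterior over (cost, reward): draw hypotheses
    from the prior and weight each by how well it explains the behavior."""
    costs = rng.uniform(0, 5, n)    # prior over the agent's subjective cost
    rewards = rng.uniform(0, 5, n)  # prior over the agent's subjective reward
    like = np.array([p_act(r, c) for r, c in zip(rewards, costs)])
    weights = like if acted else 1.0 - like
    return costs, rewards, weights / weights.sum()

# The prior mean of both cost and reward is 2.5; conditioning on the agent
# having acted pulls the cost estimate down and the reward estimate up,
# without ever observing either quantity directly.
costs, rewards, w = likelihood_weighting(acted=True)
print("E[cost | acted]   =", round(float(costs @ w), 2))
print("E[reward | acted] =", round(float(rewards @ w), 2))
```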

Experiment 1

We begin by testing the main prediction of the Naïve Utility Calculus: people should be able to jointly infer agents’ unobservable costs and rewards by assuming that agents maximize utilities. In Experiment 1 participants watched an agent navigate a world with different kinds of terrains and care packages and they were asked to infer the cost and reward functions. To ensure that our results do not hinge on any specific type of behavior or spatial layout, we used three different geometries

Experiment 2

Experiment 1 looked at judgments from a single event. However, in the real world, we often observe people acting on multiple occasions, frequently under different contexts, and subsequent observations may lead us to revise our beliefs about what they want and what they can do. In Experiment 2 we test how participants infer agents’ underlying costs and rewards when there are multiple action events. This experiment serves two purposes: first, to test if people’s inferences over repeated events

Experiment 3

Experiments 1–2 show that people can quantitatively decompose agents’ behavior into judgments about their costs and rewards, relying on the assumption that agents maximize utilities. In Experiment 3 we compare people’s relative confidence judgments over cost and reward inferences against the confidence of our generative model. That is, rather than asking participants for point estimates (as in Experiments 1–2), we asked them to determine their relative confidence over which object had a higher

Experiment 4

In Experiment 4 we test whether people can use inferred costs and rewards to predict how an agent will behave in a new situation. We then compare participants’ predictions against our full model, against a simple goal-inference model, and against two plausible heuristics that approximate our model predictions using simple cues.

Our goal-inference model is an implementation of the model in Baker et al. (2009). This model can be thought of as a simplified model of the Naïve Utility Calculus where

Experiment 5

Experiments 1 and 2 suggest that people have a Naïve Utility Calculus that enables them to infer other people’s costs and rewards and predict future events based on these inferences. In these situations, however, the agent herself always knew the costs and rewards. In more realistic circumstances, agents can be either naïve or wrong about the costs and rewards involved (e.g., Jara-Ettinger et al., 2016, Jara-Ettinger et al., 2017, Moutoussis, Dolan, & Dayan, 2016), act based on impartial

Experiment 6

Experiment 6 tests a final hypothesis of our proposal: if the Naïve Utility Calculus is instantiated as a generative model at the center of social reasoning, these inferences should also underlie how we reason about social goals. Past qualitative research with children has already shown that an expectation that agents maximize utilities underlies both social evaluations (Jara-Ettinger et al., 2015) and reasoning about communicative goals (Jara-Ettinger, Floyd, Huey, Tenenbaum, & Jara-Ettinger,

General discussion

We presented a computational model of a fundamental aspect of human social cognition: the ability to interpret other people’s actions in terms of the motivating forces behind their goals. Our results converge on the idea that our fundamental ability to make sense of other people’s actions is supported by a Naïve Utility Calculus—a mental model of others’ choices and actions that works through the assumption that agents maximize utilities. Experiment 1 showed that people can infer an agent’s

Conclusion

People reason about others’ goals and the causes behind them in terms of a Naïve Utility Calculus: We expect agents to maximize utilities, we conceptualize these utilities in terms of costs and rewards that vary across agents and situations, and these intuitions are instantiated as a generative model that supports a wide range of probabilistic inferences about agents’ mental states, future actions, knowledge, and prosocial status.

In proposing a formal model of these capacities, we hope to have

Acknowledgments

We are grateful to Nancy Kanwisher, Rebecca Saxe, and Elizabeth Spelke for useful comments on the ideas behind this work. We thank two anonymous reviewers for critical feedback on the manuscript. This work was supported by the Simons Center for the Social Brain award number 6931582 and by a Google faculty research award. This material is based upon work supported by the Center for Brains, Minds, and Machines (CBMM), funded by NSF-STC award CCF-1231216.

References (79)

  • M. Kleiman-Weiner et al. (2017). Learning a commonsense moral theory. Cognition.
  • S. Liu et al. (2017). Six-month-old infants expect agents to minimize the cost of their actions. Cognition.
  • L. Ma et al. (2011). Young children’s use of statistical sampling evidence to infer the subjectivity of preferences. Cognition.
  • M.L. Pesowski et al. (2016). Young children infer preferences from a single action, but not if it is constrained. Cognition.
  • R. Saxe (2005). Against simulation: The argument from error. Trends in Cognitive Sciences.
  • R.S. Sutton et al. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence.
  • M. Thioux et al. (2008). Action understanding: How, what and why. Current Biology.
  • A.L. Woodward (1998). Infants selectively encode the goal object of an actor's reach. Cognition.
  • M. Allais (1953). L'extension des théories de l'équilibre économique général et du rendement social au cas du risque. Econometrica, Journal of the Econometric Society.
  • C.L. Baker et al. (2017). Rational quantitative attribution of beliefs, desires, and percepts in human mentalizing. Nature Human Behaviour.
  • M. Botvinick et al. (2014). Model-based hierarchical reinforcement learning and human action control. Philosophical Transactions of the Royal Society B: Biological Sciences.
  • Brown, R. (1986). Social Psychology, The Second Edition. Free...
  • Carruthers, P., & Smith, P. K. (Eds.). (1996). Theories of theories of mind. Cambridge University...
  • S. Collette et al. (2017). Neural computations underlying inverse reinforcement learning in the human brain. eLife.
  • S.M. Constantino et al. (2015). Learning the opportunity cost of time in a patch-foraging task. Cognitive, Affective, & Behavioral Neuroscience.
  • Csibra, G., Gergely, G., Bíró, S., Koos, O., & Brockbank, M. (1999). Goal attribution without agency cues: the...
  • Doan, T., Denison, S., Lucas, C. G., & Gopnik, A. (2015). Learning to reason about desires: An infant training study....
  • Dvijotham, K., & Todorov, E. (2010, June). Inverse optimal control with linearly-solvable MDPs. In Proceedings of the...
  • M.J. Ferguson et al. (2019). When and how implicit first impressions can be updated. Current Directions in Psychological Science.
  • S.J. Gershman et al. (2016). Plans, habits, and theory of mind. PLoS ONE.
  • M.B. Goldwater et al. (2020). Children’s understanding of habitual behaviour. Developmental Science.
  • Goodman, N. D., Tenenbaum, J. B., & Gerstenberg, T. (2014). Concepts in a probabilistic language of thought. Center for...
  • R.M. Gordon (1986). Folk psychology as simulation. Mind & Language.
  • H. Gweon et al. (2010). Infants consider both the sample and the sampling process in inductive generalization. Proceedings of the National Academy of Sciences.
  • Hamrick, J. B., Smith, K. A., Griffiths, T. L., & Vul, E. (2015). Think again? The amount of mental simulation tracks...
  • Hawthorne-Madell & Goodman (2015). So good it has to be true: Wishful thinking in theory of mind. Proceedings of the...
  • G. Hickok (2009). Eight problems for the mirror neuron theory of action understanding in monkeys and humans. Journal of Cognitive Neuroscience.
  • Jara-Ettinger*, J., Sun*, F., Schulz, L. E., & Tenenbaum, J. B. (in preparation). Sensitivity to the sampling process...
  • J. Jara-Ettinger et al. (2017). Minimal covariation data support future one-shot inferences about unobservable properties of novel agents. Proceedings of the 39th Annual Conference of the Cognitive Science Society.