Global matching in music familiarity: How musical features combine across memory traces to increase familiarity with the whole in which they are embedded
Introduction
A central question in cognitive psychology concerns how information is represented in the human mind, and how these representations contribute to cognitive processes. A long-held assumption among memory researchers is the idea that memory traces—often referred to as engrams—consist of “feature bundles,” or collections of independent elements from the original encoding episode that are connected together to form the record of the experience (Estes, 1950, Tulving and Watkins, 1975). The assumption that memory traces are sets of tied-together features from the original experience has even been described by some as a “basic pretheoretical assumption” in memory theory (Tulving & Bower, 1974, p. 269):
“A rather general and atheoretical conception of the memory trace of an event regards it as a collection of features or a bundle of information. This view has been proposed and elaborated by many writers (e.g., Anisfeld and Knapp, 1968, Bower, 1967, Bregman and Chambers, 1966, Underwood, 1969, Wickens, 1970) and is now generally accepted as one of the basic pretheoretical assumptions.”
The feature assumption is apparent in most mathematical models of human memory (Nairne, 1990, Plaut, 1995, Seidenberg, 2007, Smith et al., 1974), and is especially so in models of recognition memory (Brockdorff and Lamberts, 2000, Clark and Gronlund, 1996, Cox and Shiffrin, 2017, Hintzman, 1988, McClelland and Chappell, 1998, Shiffrin and Steyvers, 1997).
A logical question to follow the feature assumption is: What is a feature? And, following from this, how exactly does a feature contribute to a memory process?
Some of the most direct evidence for a role of separable features in the memory trace for a previously experienced event comes from research on familiarity-detection during retrieval failure. The task that is used to focus on familiarity-detection during retrieval failure involves either isolating features from a studied item and using those isolated features as a test cue (e.g., Cleary and Greene, 2000, Cleary and Greene, 2001, Cleary et al., 2004, Cleary et al., 2016, Cleary et al., 2007, Kostic and Cleary, 2009), or manipulating the potential feature overlap between a novel holistic test cue and a studied item (e.g., Cleary, 2004, Cleary et al., 2009, Cleary et al., 2012, Ryals and Cleary, 2012, Ryals et al., 2013). The focus is on the ability to detect familiarity with the cue during instances of retrieval failure (failure to recall the study item to which the cue corresponds). Although the participant cannot specifically retrieve the target to which the cue corresponds, a sense of familiarity signals to the participant that the cue corresponds to something held within memory. This method has been used to systematically identify many different types of features that are present within memory traces (see Cleary, 2014, for a review). Many of the identified feature-types are described below.
Geons. Cleary et al. (2004) isolated geometric shape features (geons) using picture fragments that came from potentially studied black and white line drawings. The method used for isolating geons stemmed from Biederman's (1987) approach of leaving junction points intact in the fragments to allow for some extraction of geon information from the fragment. Participants could discriminate between unidentified geometric fragments of studied drawings and unidentified geometric fragments of unstudied drawings, but only when geon information was included in the fragments. In a comparison condition in which the same number of pixels from the original line drawing was present (10%) but no junction points were included, participants showed no ability to discriminate unidentified studied from unidentified unstudied fragments. This pattern suggests that geons—basic geometric shapes found in everyday objects—are a type of feature retained in memory traces for recently studied images.
Spatial Configuration. Cleary et al. (2009) showed that the overall configuration of elements within a scene—or its overall gestalt—can actually be a type of feature retained in memory traces for recently viewed events. Using the feature overlap method, Cleary et al. showed that participants could discriminate between novel test drawings (of new unstudied scenes) that configurally mapped onto unrecalled studied scenes and test drawings that did not.
Cleary et al. (2012) later showed this pattern in an immersive virtual reality situation with scenes. Novel scenes that spatially mapped onto earlier viewed but unrecalled scenes in their configuration of elements were found to be more familiar than novel scenes that did not map onto earlier viewed scenes. This pattern has since been obtained in a number of studies involving the first-person perspective through virtual tours (Cleary and Claxton, 2018, Cleary et al., 2018, Cleary et al., 2019). Overall, the pattern suggests that the particular spatial arrangement of elements on a grid constitutes a type of feature for an experienced event that is retained in the memory trace for that event.
Graphemic Wordform Features. By manipulating feature overlap between a test cue (e.g., bashful) and a potentially studied item (e.g., bushel), Cleary (2004) demonstrated that when cued recall failed (e.g., bushel failed to be recalled in response to the cue bashful), participants were still able to discriminate between cues resembling and not resembling studied words. This finding of an ability to detect increased familiarity with cues that overlap in graphemic features with an unrecalled studied word suggests that graphemic features (orthographic and phonological components of the word) are present in the memory trace for that studied word.
Semantic Features. Though semantic features are much more abstract in nature than the features discussed thus far, there is an abundance of evidence pointing toward the likely existence of semantic features in the human knowledge-base (Chang et al., 2011, Griffiths et al., 2007, Landauer and Dumais, 1997, McRae et al., 2005, Plaut, 1995, Seidenberg, 2007, Smith et al., 1974). Using the feature overlap method with semantic feature norms, Cleary (2004) showed that test cues (e.g., bashful) that shared semantic overlap with unrecalled studied words (e.g., shy) led to greater perceived familiarity than test cues that did not share such semantic resemblance. Cleary et al. (2016) followed up on this finding using the semantic feature overlap norms of McRae et al. (2005) and found that increasing the level of semantic feature overlap between a test cue (e.g., cedar) and its corresponding unrecalled target words from the study phase (e.g., birch, oak, pine, willow) increased the level of perceived familiarity with that cue. These findings suggest that semantic features are yet another kind of feature retained in memory traces for recently studied items.
Letter Positions. Other research suggests that letter location information (or relative letter position information) is another type of feature retained in memory traces for studied words. This research has come from the approach of feature isolation, rather than feature overlap, at test. Specifically, when letters in their relative positions are isolated at test (e.g., R_I__R_P) from their potentially unrecalled studied words (e.g., RAINDROP), participants can discriminate between isolated letter fragments that came from unrecalled studied words and those that came from unidentified unstudied words (Cleary and Greene, 2000, Cleary and Greene, 2001, Peynircioğlu, 1990). This pattern suggests that relative letter position information is a type of isolable feature retained in the memory trace for a recently studied word.
Phonemes. Yet other research points toward phonemes as another feature present in memory traces for studied words. Cleary (2004) found evidence for this using the feature overlap method. When a cue rhymed with but did not look like an unrecalled studied word (e.g., the cue raft for the unrecalled studied word laughed), participants could detect increased familiarity with that cue relative to when it did not rhyme with a studied word.
Cleary et al. (2007) additionally found evidence for the presence of phonemes in memory traces for recently studied words using the feature isolation method. When phonemes were digitally spliced from spoken word recordings, those isolated phoneme fragments at test were perceived to be more familiar when they came from recently heard or recently seen unrecalled studied words than when they came from unstudied words. Taken together, these results suggest that phonemes are a type of isolable feature that is retained in memory traces for recently studied words.
Rhythm and Pitch. Kostic and Cleary (2009) found evidence that rhythm and pitch are two types of isolable features that are stored in memory traces for recently heard musical pieces. Using the feature isolation method, they found that isolating a piano song’s rhythm (by having the exact rhythm tapped out in a single note on a wood block instrument) led to greater perceived familiarity when the unidentifiable rhythm came from an unrecalled piano song clip that was recently heard at study than when it came from an unidentified unstudied song clip. In short, people could discriminate between rhythms that came from recently heard songs and rhythms that had not, based solely on their feeling or sense of familiarity about the rhythm itself. The same was found for isolated pitch sequences. Isolating a piano song’s pitch sequence (by retaining the note order but attaching those notes to an arbitrary unstudied rhythm) led to greater perceived familiarity when the unidentifiable pitch sequence came from an unrecalled piano song clip that was heard at study than when it came from an unidentified unstudied song clip. In short, people could discriminate between pitch sequences that came from recently heard songs and pitch sequences that had not, based solely on their feeling or sense of familiarity about the pitch sequence itself.
The consistent finding that isolable features are involved in familiarity-detection during retrieval failure fits well with global matching models of familiarity (see Clark & Gronlund, 1996, for a review). Global matching models are a class of computational models that specify how the familiarity signal may operate in human memory. According to these models, the familiarity signal is a feeling or sense that can vary in intensity and, through relatively high signal intensity, potentially alert a person to the fact that the current situation that is being experienced has been experienced in some way previously.
In global matching models, features play a central role in the computation of the familiarity signal. Every item from the encoding phase is represented as a set of features. Whether the familiarity signal for a given test item (test probe) is higher or lower in intensity depends on the degree of match between the features present in the test probe and the features that were stored in memory from the encoding phase. A higher degree of match results in a higher intensity familiarity signal and vice versa. The term “global” in the global matching models stems from the fact that the familiarity signal’s intensity is not just a function of the degree of feature match between a given encoded item and the test probe, but the degree of feature-match combined across all items in memory in relation to the test probe. In short, the level of feature match between the probe and all items in memory determines the familiarity intensity that results from that probe.
A particularly straightforward example of a global matching model is Hintzman's (1988) MINERVA 2 model depicted in Fig. 1. In this model, each encoded item from the study phase is laid down as a vector of features. The vector of features constitutes the memory trace, and the features themselves are represented as a series of +1s, −1s, and 0s (0s are non-encoded or missed features). The test probe is also a vector of features (see Fig. 1). Thus, the assumption is that the test item itself is decomposed into features by the mind. These test probe features are matched, on a feature-by-feature basis, with features that exist within the memory traces that have been stored in memory.
Mathematically, the feature matching process that determines the intensity of the familiarity signal (labeled echo intensity in this model) is carried out as follows. Each feature in the test probe is matched with the feature in that corresponding location in a given memory trace by multiplying the two (e.g., the first feature in the test probe vector is multiplied with the first feature in the memory trace vector). Each product of the multiplication indicates whether there was a feature match or not, as a positive product is an indication of a feature match and a negative product is an indication of a mismatch. For that particular memory trace, each product computed for each feature of that trace is then summed across the memory trace to provide what is called an Activation Value for that memory trace (represented as each A value in Fig. 1). More specifically, the value of the sum of the products across the memory trace is cubed, to preserve the sign, and that cubed sum constitutes the Activation Value for that memory trace. The Activation Value serves as an index of the degree of feature match between that particular memory trace and the test probe.
The way that this model is global in its feature matching process is that the Activation Values for each memory trace are then summed to provide the numerical value of the familiarity signal (the echo intensity) that occurs in response to that test item. This means that if multiple memory traces have a high degree of feature overlap with the test probe (hence high Activation Values), the resulting familiarity signal will be higher than if only one does.
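To make the arithmetic concrete, the computation just described can be sketched in a few lines of Python. This is an illustrative toy implementation, not code from any published study; following Hintzman's (1988) specification, the sum of products for each trace is divided by the number of relevant features (positions that are nonzero in the probe or the trace) before cubing:

```python
def activation(probe, trace):
    # Multiply each probe feature by the feature in the same position of the
    # trace: positive products signal matches, negative products mismatches.
    products = [p * t for p, t in zip(probe, trace)]
    # Hintzman (1988) normalizes by the number of relevant features
    # (positions nonzero in the probe or the trace) before cubing.
    n_relevant = sum(1 for p, t in zip(probe, trace) if p != 0 or t != 0)
    similarity = sum(products) / n_relevant
    return similarity ** 3  # cubing preserves the sign

def echo_intensity(probe, memory):
    # "Global" matching: activation values sum across ALL stored traces.
    return sum(activation(probe, trace) for trace in memory)

memory = [
    [1, -1, 1, -1],   # trace identical to the probe
    [1, -1, 0, 0],    # partial trace (two features unencoded, stored as 0s)
    [-1, 1, -1, 1],   # fully mismatching trace
]
probe = [1, -1, 1, -1]
print(echo_intensity(probe, memory))
```

With this toy memory, the identical trace contributes an activation of 1, the partial trace 0.125, and the mismatching trace −1, so the echo intensity is 0.125; storing a second high-overlap trace would raise the signal further, as described next.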
Although it is relatively straightforward how the MINERVA 2 model can describe, at a theoretical level, what may be occurring to allow the aforementioned cases of familiarity-detection during retrieval failure (the familiarity signal is higher in intensity among test cues that have high feature overlap with a memory trace than among test cues that have lower feature overlap with any memory traces), that pattern was not what this type of model was originally devised to describe. Global matching models in general were originally devised as theoretical accounts of recognition memory, or old-new discrimination between exact repeats of studied items and new items. These models largely fell out of favor when it became clear that many of their predictions were problematic and did not hold up in studies of old-new recognition memory (e.g., see Clark & Gronlund, 1996, for a review) or of other variants of recognition memory tasks like associative recognition tasks (Gronlund & Ratcliff, 1989). Their decline may also have been partly due to the fact that they are single-process accounts of recognition memory, when it is now widely accepted that old-new recognition very likely involves more than a single process (e.g., Diana et al., 2006, Mandler, 2008, Mickes et al., 2009, Onyper et al., 2010, Wais et al., 2008, Yonelinas, 2002).
However, the fact that the feature-matching method of computing a familiarity signal’s intensity does not adequately describe old-new recognition in standard old-new recognition memory paradigms does not mean that the process itself does not exist in human cognition. In fact, it may be that a task designed specifically to probe familiarity-detection in isolation may be well-suited to testing global matching models’ ability to describe the computation of the familiarity signal that presumably enables familiarity-detection during retrieval failure in the first place.
Familiarity-detection as a Metacognitive Phenomenon. Familiarity-detection—the ability to sense something as familiar despite no ability to identify the source of that familiarity or the previous experience responsible for it—is a common experience. As Mandler (1991) once eloquently argued,
“It is an experience all of us have had at some time or another: We meet somebody at a party, know them to be familiar but do not know who they are; we recognize a melody, but fail to remember its name or when or where we have heard it before; we read a line of a poem, know it, but do not know where we have read it before, much less the title or author of the poem” (Mandler, 1991, p. 207).
Thus, familiarity as a subjective experience of human memory appears to exist, even if perhaps not necessarily well-captured in standard old-new recognition memory paradigms (for reviews of debates surrounding attempts to separate familiarity from recollection in standard old-new recognition memory paradigms, see Diana et al., 2006, Hintzman, 2011, Wais et al., 2008, Yonelinas, 2002). Most people can relate to having the experience of familiarity now and then, in much the same way that most people can relate to having other metacognitive subjective experiences of memory, like tip-of-the-tongue states (Brown, 2012, Schwartz, 2002), feelings of déjà vu (Brown, 2004, Cleary et al., 2012, Cleary and Claxton, 2018), or feelings-of-knowing (Koriat, 1993, Metcalfe et al., 1993, Schwartz and Metcalfe, 1992). Moreover, stimulus familiarity is thought to play a role in other aspects of metacognition, such as metacognitive control (Malmberg, 2008, Metcalfe, 1993, Reder, 1987).
It may be that, as with these other subjective memory states (tip-of-the-tongue and déjà vu experiences), the key to studying the sensation of familiarity is to use a task that can induce the experience in the lab. Indeed, Hintzman (2011) argues that memory researchers often become too focused on debating theoretical explanations for particular laboratory tasks (using old-new recognition tasks as one particular example), and lose sight of the real-world memory phenomena that cognitive scientists should be seeking to understand. Familiarity feelings are one such cognitive phenomenon.
To understand the subjective metacognitive feeling of familiarity, it may be necessary to focus on the sense of familiarity during instances of recall failure by using a task designed to elicit or focus on recall failure. Similar to tip-of-the-tongue experiences, experiences of familiarity in the world seem to be examples of the subjective sense of memory during retrieval failure. In the case of tip-of-the-tongue states, there is a subjective sense that a word is in memory even though the word fails to be retrieved. In the case of familiarity, there is a subjective sense that the current situation was experienced before, without being able to recall or identify the prior experience responsible for the sense of familiarity. Just as tasks aimed at studying tip-of-the-tongue states involve focusing on instances of retrieval failure, the subjective sensation of familiarity, too, might best be studied by focusing on instances of retrieval failure. Focusing on instances of retrieval failure may reveal that the feature matching process specified in global matching models like MINERVA 2 is a useful account of how familiarity-detection occurs. That is, the feature-matching process specified in MINERVA 2 may be a good account of how the familiarity signal arises from a cue to allow a sense of familiarity with the cue during a failure to consciously recall the previous experience responsible for that sense of familiarity.
A Role of Global Feature Matching in Familiarity-detection. Indeed, focusing on familiarity-detection during instances of retrieval failure has revealed that, besides the general assumption of segmentation of the cue and the memory traces into isolable features, another one of the general assumptions of the feature matching process specified in MINERVA 2 holds up in describing the process likely to underlie such familiarity-detection. This assumption is that of global matching—the idea that it is the combined level of feature-match across all items in memory that produces the familiarity signal for the test cue, not just any one item. According to the MINERVA 2 model depicted in Fig. 1, if multiple memory traces have a high degree of feature match to the test probe, the resulting echo intensity (the intensity of the familiarity signal) should be higher than if only one memory trace has a high degree of feature match to the test probe. In turn, if only one memory trace has a high degree of feature match to the test probe, that trace should still lead to a higher echo intensity value than if no memory traces have a high degree of feature match to the test probe.
Ryals and Cleary (2012) empirically investigated whether this pattern would hold up among instances of familiarity-detection during retrieval failure. Using the aforementioned feature overlap method, they examined the level of reported familiarity intensity for nonword test cues (e.g., POTCHBORK) that overlapped in graphemic features with four different but unrecalled studied words that had been scattered across the encoding phase (e.g., PITCHFORK, PATCHWORK, POCKETBOOK, PULLCORK, all going unrecalled at test) versus cues that overlapped in graphemic features with only one studied word that happened to go unrecalled (e.g., only PITCHFORK was studied and it went unrecalled at test), versus cues that did not overlap in graphemic features with any studied words. They found that participants’ reported familiarity intensity with test cues increased with an increasing number of unrecalled studied items that shared graphemic features with the test cue. In short, during instances of retrieval failure, the test cue was perceived to be more familiar if it overlapped in features with more than one studied item (all of which were unrecalled) than if it overlapped with only one (that went unrecalled). This pattern suggested that the computation of the familiarity signal followed the principle of global matching. Interestingly, the pattern was less pronounced during instances of retrieval success (when any of the targets were successfully recalled in response to the cue), suggesting that other processes besides feature-matching are likely at work during instances of retrieval success, and feature-matching may be the primary driver of decisions made during instances of retrieval failure.
Cleary et al. (2016) later found the same pattern with semantic features, rather than graphemic features. They found that when a test cue (e.g., cedar) overlapped in semantic features with four unrecalled studied words (birch, oak, pine, willow), the reported cue familiarity level during retrieval failure was significantly higher than when the cue overlapped with only two studied words that happened to go unrecalled (e.g., only birch and oak were studied and both happened to be unrecalled). In turn, when the cue overlapped semantically with two unrecalled studied words, reported familiarity was significantly higher than when it overlapped with none. This follows from what the MINERVA 2 model depicted in Fig. 1 would predict.
General Goals. Familiarity-detection during retrieval failure appears to be well-described by some of the basic assumptions of global matching models like MINERVA 2. For one, familiarity-detection during retrieval failure appears to involve isolable features, and in fact, the basic methodology for studying familiarity-detection during retrieval failure has been used to systematically identify the types of features that are present in memory traces for recently experienced events (see Cleary, 2014, for a review). For another, familiarity-detection during retrieval failure appears to be at least roughly well-described by a global feature matching process, whereby the degree of feature overlap between each and every memory trace and the cue is factored into the intensity of the familiarity signal (Cleary et al., 2016, Ryals and Cleary, 2012).
A limitation of the studies that have investigated the global matching assumption of the MINERVA 2 model in its ability to account for familiarity-detection during retrieval failure is that these studies did not allow for a precise quantification of a single feature-type. For example, Ryals and Cleary (2012) relied on general graphemic overlap between the cue (e.g., POTCHBORK) and items in memory (e.g., PITCHFORK, PATCHWORK, POCKETBOOK, PULLCORK). This method allows for a very general investigation of the global matching and feature overlap assumptions of MINERVA 2, but does not allow for isolation of a specific feature-type or an assessment of how different feature-types might combine across memory traces in a quantifiable way. The same held true for the study of semantic feature overlap by Cleary et al. (2016).
If different feature-types from a whole stimulus could be isolated—an aim of Experiment 1 of the present study—would those different isolated feature-types combine across different study episodes in a predictable fashion to increase the later overall level of perceived familiarity with the whole stimulus item in which they were embedded at test? Note from Fig. 1 how the MINERVA 2 model would predict that they should. Different feature sets encoded in isolation during separate study episodes should combine to contribute to the familiarity signal in predictable ways. Specifically, if one set of features was laid down in one trace and another different set of features was laid down in another trace, the contribution of those two memory traces to the intensity of the familiarity signal should be additive. This would occur because the Activation Values are added together to contribute additively to the familiarity signal for that test item (as shown in Fig. 1). This was the aim of Experiment 2 of the present study.
Moreover, if one particular isolated feature-type was repeated multiple times across study episodes, there should also be a corresponding quantifiable increase in the level of perceived familiarity for the whole stimulus in which that feature-type is embedded at test. This is because the Activation Value for every encoding instance of that isolated feature-type at study would be expected to be very similar (not necessarily identical given the random variation in which features are attended to and successfully encoded as opposed to missed and laid down as zeros in the memory trace) and to combine additively toward increasing the overall intensity of the familiarity signal from the test cue. This means that the average familiarity signal should increase in a predictable, additive fashion with increasing repetitions of a specific isolated feature-type at encoding. This was the aim of Experiment 3 of the present study.
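This repetition prediction can be illustrated with a toy MINERVA 2 computation (a hypothetical sketch: the feature values and the choice of which half of the probe constitutes the isolated feature-type are arbitrary, and for simplicity each repetition stores an identical trace, ignoring the encoding noise just mentioned):

```python
def activation(probe, trace):
    # MINERVA 2 activation: featurewise products summed, normalized by the
    # number of relevant (nonzero) positions, then cubed to preserve sign.
    n_rel = sum(1 for p, t in zip(probe, trace) if p != 0 or t != 0)
    return (sum(p * t for p, t in zip(probe, trace)) / n_rel) ** 3

probe = [1, -1, 1, -1, 1, -1, 1, -1, 1, -1]  # the whole test item
# One isolated feature-type: the first half of the probe's features,
# re-encoded on every study repetition (unencoded features stored as 0s).
feature_trace = probe[:5] + [0] * 5

for reps in (1, 2, 4):
    echo = sum(activation(probe, t) for t in [feature_trace] * reps)
    print(reps, echo)  # echo intensity grows linearly with repetitions
```

Each stored copy contributes the same activation value (0.125 in this toy case), so the echo intensity rises linearly across 1, 2, and 4 repetitions: 0.125, 0.25, 0.5.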
Two candidate feature-types that can be isolated and separated from their whole stimulus in a relatively straightforward way are rhythm and pitch from a musical piece. Rhythm and pitch can both be digitally extracted from their original musical piece using music composition software (Kostic & Cleary, 2009), making them strong candidate feature-types for exploring global matching predictions about the role of isolable features in the computation of the familiarity signal. For this reason, the focus in the present study is on musical stimuli, with the features of rhythm and pitch as the two isolable feature-types under investigation.
A clear assumption present in the MINERVA 2 model shown in Fig. 1 is that the test probe itself is decomposed into its constituent features. If memory traces are stored as sets of features that are then combined to compute a familiarity signal for a test item, it should not matter if the whole test item itself was never presented at encoding. If some of its component features were studied, that should increase the test item’s familiarity. Those features should be detectable from the whole. Although this assumption has been generally supported in the aforementioned feature overlap studies (Cleary, 2004, Cleary et al., 2012, Cleary et al., 2016, Ryals and Cleary, 2012), it has not yet been directly shown with musical stimuli.
With music, the reverse has been shown: Isolated features in the test probe (either rhythm or pitch sequences in isolation) felt more familiar if they had been embedded within a whole song clip heard at the time of study, suggesting that those features were retained in the memory traces for song clips heard at study (Kostic & Cleary, 2009). However, the reverse should also hold true: Features presented in isolation at encoding should lead to later familiarity with wholes at test that contain those features. Thus, Experiment 1 of the present study sought to test the hypothesis that studying isolated sets of musical features at encoding will increase the perceived familiarity of a whole test song clip in which those features happen to be embedded. It should not matter if the entire test song clip itself was not heard at study; if some of the song’s features were encoded in isolation (when the song itself could not be consciously identified from those features), that should be enough to increase the perceived familiarity with the whole song clip at test. In Experiment 1, therefore, the encoding phase consisted of sets of isolated rhythms and isolated pitch sequences from piano song clips. For each piano song clip heard at test, either its isolated rhythm or its isolated pitch sequence (without its rhythm) was potentially heard at study. Furthermore, each test song clip’s identity could not be accessed at the time of that feature exposure (i.e., the song was not identified at encoding from the features, making it unlikely that an encoding episode could be consciously retrieved in response to the test item, leaving the decision to be based solely on the sense of familiarity with the test item). This logic has been used in studies of word fragment recognition without identification in a reverse form of the feature isolation procedure (e.g., Cleary & Greene, 2000, Experiments 3a and 3b).
Experiment 1 not only sought to demonstrate that embedding familiarized features (either rhythm or pitch) into a piano song clip increases the sense of familiarity with that piano song clip, but also to investigate whether the different features of rhythm and pitch are weighted comparably in the computation of the familiarity signal. Though not directly investigated in their study, Kostic and Cleary's (2009) results point toward the possibility that pitch information carries more weight in the computation of the familiarity signal than rhythm information. If pitch is indeed weighted more heavily, then reported familiarity experiences should be more likely among test song clips for which the isolated pitch was studied than among test song clips for which the isolated rhythm was studied. Experiment 1 allowed for a direct investigation of this hypothesis.
Experiments 2 and 3 then sought to test a specific idea embedded within the MINERVA 2 model’s mechanisms regarding how features from different memory traces combine across traces in the computation of the familiarity signal, which we refer to hereafter as the additivity principle. The additivity principle is the idea that the Activation Values shown in Fig. 1 combine additively in the computation of the familiarity signal. Below, we demonstrate the additivity principle using simulations of our general experimental paradigm. We include simulations for scenarios in which pitch carries more weight in the computation of the familiarity signal for a whole song clip than rhythm, as well as scenarios in which pitch and rhythm contribute comparably to the familiarity signal computation for the whole song clip.
MINERVA 2 Simulations. Given our goal of testing whether music familiarity-detection in our paradigm adheres to the additivity principle present in the MINERVA 2 model regarding how different feature-types should combine across separate memory traces in the computation of the familiarity signal, we carried out a set of simulations to first demonstrate the additivity principle within the model. It was also important to consider whether the simulations would robustly show additivity if pitch is indeed a larger contributing feature-type than rhythm in the familiarity signal computation. To simulate a scenario in which pitch information is more heavily weighted within the memory trace than rhythm information in the MINERVA 2 model, a larger proportion of a test probe’s features can be dedicated to being “pitch-features” than “rhythm-features” when creating the memory traces (the procedure is described in detail below). The ratio of rhythm-to-pitch features can then be varied across simulations to examine whether the additivity principle still holds as this ratio changes. In our simulations, we examined rhythm-to-pitch ratios of 1:1, 2:3, and 3:7.
Our critical simulations, which took varying ratios of pitch versus rhythm feature-types into account, served to demonstrate MINERVA 2's additivity principle, which our Experiments 2 and 3 were designed to examine. Specifically, Experiments 2 and 3 examined how feature-types contained within a single test cue but present within different memory traces combine across traces to contribute to the resulting familiarity signal. Our initial proof-of-principle MINERVA 2 simulations were set up to mimic the general design of these two experiments in order to demonstrate the mathematical existence of the additivity principle among mean echo intensities for these types of designs. Therefore, before describing each set of simulations, we begin with an overview of the experimental design that it was meant to simulate.
In Experiment 2, four conditions were compared. In one condition, a song clip’s isolated rhythm was separately encoded from that song clip’s isolated pitch sequence, then both feature-types were embedded within the whole song clip at test. We will refer to this as the Rhythm + Pitch condition. Note that in the Rhythm + Pitch condition, the two feature-types (rhythm and pitch) were still studied in isolation, just in different study episodes—one where rhythm was studied in isolation (without the pitch), and one where pitch was studied in isolation (without the rhythm). In a second condition (Rhythm-Only), only the isolated rhythm was studied. In a third condition (Pitch-Only), only the isolated pitch sequence was studied. Finally, a fourth control condition involved having no studied features for a given test song clip. The question was how the isolated features of rhythm and pitch would combine across study episodes to contribute to the familiarity signal. That is, exactly how much would the familiarity of the cue increase in the Rhythm + Pitch condition compared to the Rhythm-Only or the Pitch-Only conditions?
The additivity principle within the MINERVA 2 model allows for specific predictions regarding how the separately encoded traces in the Rhythm + Pitch condition should combine to boost the familiarity signal. As shown in Fig. 1, having two separate memory traces, each with a different feature-type—one memory trace for the song’s rhythm and another separate memory trace for the song’s pitch sequence—should lead to greater perceived familiarity with the test song clip in which both feature-types are embedded than when the test song clip contains only one studied feature-type (either rhythm or pitch instead of rhythm and pitch). Because of the aforementioned additivity principle, the level of test song clip familiarity elicited by having separately encoded both the rhythm and pitch features of the cue should equal the combined levels of familiarity from studying only rhythm and studying only pitch. That is, the level of familiarity elicited by a cue for which only one feature-type was encoded (e.g., rhythm), when added to the level of familiarity elicited by a cue for which only the other feature-type was encoded (e.g., pitch), should approximately equal the level of familiarity elicited by a test song clip that contains both encoded feature-types (both rhythm and pitch separately encoded across different study episodes). This is the principle of additivity. Logically, this means that, relative to the control condition, the level of familiarity increase for Rhythm + Pitch cues should equal the level of familiarity increase for Rhythm-Only cues plus the level of familiarity increase for Pitch-Only cues. Thus, the model predicts the following:

(Rhythm + Pitch cues) = (Rhythm-Only cues) + (Pitch-Only cues)
To demonstrate that this principle exists within MINERVA 2, we carried out a set of simulations that are depicted in Fig. 2. This set of simulations was intended to demonstrate that, when added together, the mean echo intensities for the Rhythm-Only and the Pitch-Only conditions approximately equal that of the Rhythm + Pitch condition. We performed our simulations using Python 3.7 with NumPy (Oliphant, 2006) and Jupyter (Kluyver et al., 2016); the code for these simulations can be found at https://github.com/dwhite54/MINERVA2.
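The echo-intensity computation that underlies all of these simulations follows Hintzman's (1986) equations: the probe-to-trace similarity, normalized by the number of relevant features, is cubed to give each trace's activation, and the activations are summed across traces. The following minimal NumPy sketch is our own illustration of that computation, not the authors' linked code:

```python
import numpy as np

def echo_intensity(probe, traces):
    """MINERVA 2 echo intensity (Hintzman, 1986): each trace is activated
    in proportion to the cube of its similarity to the probe, and the
    activations are summed across all traces in the memory store."""
    probe = np.asarray(probe, dtype=float)
    total = 0.0
    for trace in np.atleast_2d(np.asarray(traces, dtype=float)):
        # n = number of features that are nonzero in the probe or the trace
        n = np.count_nonzero((probe != 0) | (trace != 0))
        similarity = (probe @ trace) / max(n, 1)
        total += similarity ** 3  # cubing preserves sign and sharpens matches
    return total
```

A probe identical to a stored trace yields an activation of 1.0 from that trace, while a trace holding only a subset of the probe's features contributes a proportionally smaller activation.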
So that our initial proof-of-principle simulations would closely mimic our actual experimental design while also adhering to precedent regarding how test probes for “new” or unstudied items are typically simulated (e.g., Hintzman, 1988), we included an unstudied condition in which the test probes corresponded to zero items in the memory store. However, as will be shown later, even unstudied items in our paradigm are not completely novel—they are features of known songs—and thus likely overlap somewhat with musical memory representations in the knowledge base at some baseline level; we address this when simulating the actual data in the individual experiment sections. Because our primary theoretical focus is on the comparison of echo intensity values across the three conditions of Rhythm-Only, Pitch-Only, and Rhythm + Pitch (specifically, whether mean echo intensities adhere to the formula [Rhythm + Pitch cues] = [Rhythm-Only cues] + [Pitch-Only cues]), we depict only the echo intensity density graphs for these three conditions in Fig. 2.
To closely follow our actual experimental design, 30 test probes were created for each cue condition (Rhythm-Only, Pitch-Only, Rhythm + Pitch, and unstudied features), for a total of 120 test probes per run. We completed 120 runs (each run representing one hypothetical participant), each using 120 test probes, for a total of 14,400 data points, with 3,600 data points in each of the four conditions. Thus, 3,600 data points contributed to each echo intensity density graph in Fig. 2. Each test probe was randomly generated and then used to create a corresponding memory trace (or set of memory traces) according to pre-specified criteria for each condition, described below. Thus, no two test probes or memory traces were identical across simulation runs.
Setting the number of features per memory trace and test probe is a somewhat arbitrary process, as it is unclear how many individual features are actually contained within a given stimulus; this uncertainty also applies to the question of how many features a song’s rhythm or pitch structure contains. We set each test probe and each memory trace to contain 1,000 features.
Memory traces were created for the Rhythm-Only, Pitch-Only, and Rhythm + Pitch conditions from the randomly generated test probes, such that a proportion of each test probe’s features were present in the memory trace as if they had been studied in isolation, as in our experimental paradigm. Across different simulations, we varied the rhythm-to-pitch ratios within the test probes and their corresponding memory traces. A 1:1 rhythm-to-pitch ratio means that 50% of the features within each test probe were considered “pitch” features and 50% were considered “rhythm” features. A test probe in the Rhythm-Only condition would have the “rhythm” half of its features map onto a memory trace containing only those features, whereas a test probe in the Pitch-Only condition would have the “pitch” half of its features map onto a memory trace containing only those features. A test probe in the Rhythm + Pitch condition would have both sets of features each mapping onto a different memory trace (the “rhythm” half onto the memory trace containing only the “rhythm” features, and the “pitch” half onto the memory trace containing only the “pitch” features).
To simulate a scenario in which pitch information is more heavily weighted within the memory trace than rhythm information, when creating the memory traces, a larger proportion of the test probe’s features were dedicated to being “pitch-features” than “rhythm-features” in some of the simulations. For example, in a given test probe [1, −1, 0, 1, −1, −1, 0, 1, 1, 1], a pitch memory trace might consist of the first 70% of the features, with the remaining features being zeroed out [1, −1, 0, 1, −1, −1, 0, 0, 0, 0]. Conversely, the rhythm memory trace of the above test probe might consist of the last 30% of the features: [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]. We varied the ratio of rhythm-to-pitch features across the simulations to examine how this affects the outcomes; specifically, we examined the following ratios: 1:1, 2:3, and 3:7.
Our method is otherwise similar to prior MINERVA 2 simulations (e.g., Hintzman, 1988, p. 532). To mimic the imperfection of human memory, noise was incorporated into the memory traces, such that a proportion of the features’ signs were flipped (e.g., a feature of +1 might be changed to −1 to create noise; Hintzman, 1986). The amount of noise was also varied to examine if the additivity principle holds up across different noise levels (1%, 10%, and 20%).
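The trace-construction steps just described (random −1/0/+1 probes, zeroing out the complementary feature subset, and sign-flip noise) can be sketched as follows; the function names and seed are our own illustrative choices, not taken from the authors' code:

```python
import numpy as np

rng = np.random.default_rng(2024)  # arbitrary seed for reproducibility

def make_probe(n_features=1000):
    # each feature takes a value in {-1, 0, +1}
    return rng.integers(-1, 2, size=n_features)

def split_traces(probe, pitch_prop=0.5):
    """Create isolated-feature memory traces by zeroing out the
    complementary portion of the probe; pitch_prop=0.7 corresponds
    to the 3:7 rhythm-to-pitch ratio."""
    cut = int(len(probe) * pitch_prop)
    pitch_trace = probe.copy()
    pitch_trace[cut:] = 0    # keep only the leading "pitch" features
    rhythm_trace = probe.copy()
    rhythm_trace[:cut] = 0   # keep only the trailing "rhythm" features
    return pitch_trace, rhythm_trace

def add_noise(trace, p_flip=0.10):
    """Flip the signs of a random proportion of features (Hintzman, 1986);
    flipping a zeroed feature leaves it at zero."""
    noisy = trace.copy()
    flips = rng.random(trace.shape) < p_flip
    noisy[flips] *= -1
    return noisy
```

With the 10-feature probe used as the worked example above, `split_traces(probe, 0.7)` reproduces the pitch trace [1, −1, 0, 1, −1, −1, 0, 0, 0, 0] and the rhythm trace [0, 0, 0, 0, 0, 0, 0, 1, 1, 1].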
Because the test probes for each of the 3,600 data points per condition were each randomly generated and because it is unclear how one hypothetical “participant” would differ in any meaningful way from the next (the way that actual human participants would), rather than attempting to separate data points by “participants” for the purposes of statistical analysis, all data points were entered into independent samples t-tests to assess for differences in mean echo intensities. As the priors are all equal in the model (unlike with human participants), treating all data points (individual trials) the same should not violate statistical assumptions the way that they would if individual human trials were all entered into the same analysis (as between-subject variability does not occur in the model). This also allowed for a high-powered investigation of proposed null effects between simulation conditions.
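The analysis approach described here (and the variance check reported below) maps onto standard SciPy routines: Levene's test for equality of variances, followed by Welch's unequal-variances t-test. A sketch on illustrative simulated values; the condition means, SDs, and sample sizes here are placeholders for demonstration, not the reported results:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Illustrative echo-intensity samples for two simulated conditions;
# the means, SDs, and n = 3,600 per condition are placeholders only.
rhythm_only = rng.normal(loc=0.08, scale=0.02, size=3600)
rhythm_plus_pitch = rng.normal(loc=0.16, scale=0.04, size=3600)

# Levene's test for equality of variances
lev_stat, lev_p = stats.levene(rhythm_only, rhythm_plus_pitch)

# Welch's t-test (equal_var=False), which yields the varying degrees
# of freedom seen across unequal-variance analyses
t_stat, t_p = stats.ttest_ind(rhythm_only, rhythm_plus_pitch, equal_var=False)
```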
Fig. 2 presents the density graphs for each of the simulations for the conditions of theoretical interest (Rhythm-Only, Pitch-Only, and Rhythm + Pitch). The mean echo intensities are depicted in Table 1. To assess whether a given mean echo intensity differed significantly from that of another condition, we performed independent-samples t-tests. The outcomes of the statistical analyses are reported in Table 2, including the Bayes Factors analyses. Because Levene’s Test for Equality of Variances revealed unequal variances (which are also apparent in Fig. 2), t-tests for unequal variances were used (hence the varying degrees of freedom across analyses).
Across all of the noise levels that we examined, when the rhythm-to-pitch ratio was 1:1 (see panel a in Fig. 2), the average echo intensities for the Rhythm-Only and Pitch-Only probes did not differ significantly, as expected (see Table 1, Table 2 for the descriptive and inferential statistics, respectively). However, across all noise levels, when the ratio of rhythm-to-pitch features was adjusted to either 2:3 or 3:7, the average echo intensities for the Pitch-Only probes were significantly larger than those for the Rhythm-Only probes. This mimics what we would expect to find in data from human participants if pitch information is weighted more heavily than rhythm in the computation of the familiarity signal from the whole song clip cue (i.e., a higher rate of reported familiarity experiences for cues corresponding to an isolated pitch sequence than for cues corresponding to an isolated rhythm). If pitch carries approximately the same weight as rhythm in the computation of the familiarity signal, then the familiarity elicited by either feature-type should not differ significantly (as when the rhythm-to-pitch ratio is 1:1). Whether encoded pitch information carries more weight than encoded rhythm information in the computation of the familiarity signal for a whole song clip is an empirical question that our data will address. The above simulation results demonstrate how differing rhythm-to-pitch ratios can be factored into our simulations. Below, we discuss the simulations that address the primary theoretical question of interest, including the various outcomes under the different rhythm-to-pitch ratios and noise levels described above.
We now turn to our primary theoretical question of interest, which is how familiarizing both rhythm and pitch (Rhythm + Pitch) affects the echo intensities compared to familiarizing either alone (Rhythm-Only or Pitch-Only). As shown in Table 1, Table 2, at all rhythm-to-pitch ratios and all noise levels, the mean echo intensities for Rhythm-Only and Pitch-Only probes were significantly lower than those for the Rhythm + Pitch probes. These results demonstrate the generic prediction of MINERVA 2 that when a test cue contains a blend of two different feature-types that were each separately familiarized across two different memory traces (Rhythm + Pitch cues), the overall familiarity intensity produced by the test cue is higher than when the test cue contains only one familiarized feature-type among its blend of features (Rhythm-Only cues or Pitch-Only cues). More importantly, the results also demonstrate MINERVA 2's additivity principle: The separately encoded memory traces for the two feature-types (rhythm vs. pitch) combine additively to boost the cue familiarity of the full song clip in the Rhythm + Pitch condition. To reveal this, the echo intensities were examined to determine if they adhered to the following equation:

(Rhythm + Pitch cues) = (Rhythm-Only cues) + (Pitch-Only cues)
Toward this end, we compared the obtained echo intensities for the Rhythm + Pitch condition to the values obtained by adding together the Rhythm-Only and the Pitch-Only echo intensities for each of the rhythm-to-pitch ratios and noise levels under examination. There was no significant difference in any of these cases, and the evidence favored the null, as shown in Table 1, Table 2 (see the [Rhythm-Only] + [Pitch-Only] vs. Rhythm + Pitch comparisons in Table 2). These simulations demonstrate that, in MINERVA 2, the separately encoded memory traces for each feature-type (rhythm vs. pitch) combine additively in the computation of the global familiarity signal produced by a test probe containing both feature-types. This additivity occurred regardless of the rhythm-to-pitch ratio or noise level used in the simulations; thus, the additivity principle held up robustly. Based on these simulations, we examined whether the experimental data from human participants given isolated rhythms and/or pitch sequences across encoding episodes would also adhere to the following formula when both feature-types are contained within the test cue:

(Rhythm + Pitch cues) = (Rhythm-Only cues) + (Pitch-Only cues)
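Because the echo intensity is a sum of per-trace activations, this additivity falls directly out of the model's arithmetic. The following self-contained sketch (our own, not the authors' linked code) makes the point concrete for a single probe; with no noise and no other traces in the store, the additivity is exact rather than approximate:

```python
import numpy as np

rng = np.random.default_rng(7)

def echo_intensity(probe, traces):
    """Sum of cubed, normalized probe-trace similarities (Hintzman, 1986)."""
    total = 0.0
    for trace in traces:
        n = np.count_nonzero((probe != 0) | (trace != 0))
        total += ((probe @ trace) / max(n, 1)) ** 3
    return total

probe = rng.integers(-1, 2, size=1000).astype(float)
half = len(probe) // 2                       # 1:1 rhythm-to-pitch ratio
idx = np.arange(len(probe))
pitch = np.where(idx < half, probe, 0.0)     # leading half = pitch features
rhythm = np.where(idx >= half, probe, 0.0)   # trailing half = rhythm features

i_rhythm = echo_intensity(probe, [rhythm])          # Rhythm-Only memory
i_pitch = echo_intensity(probe, [pitch])            # Pitch-Only memory
i_both = echo_intensity(probe, [rhythm, pitch])     # Rhythm + Pitch memory

# Echo intensity sums per-trace activations, so additivity is exact here;
# trace noise and other songs' traces make it only approximate in practice.
assert np.isclose(i_both, i_rhythm + i_pitch)
```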
Whether this additivity assumption holds up was examined in Experiment 2, where we also present simulations of our behavioral data patterns.
The additivity principle would not be shown in the human data if, for example, the Rhythm + Pitch condition did not lead to any increase in the perceived familiarity of the song in which those features are embedded over and above either of those features in isolation. That is, if combining the two feature-types across separate encoding episodes leads to no further increase in the level of perceived familiarity with the song clip in which they are embedded, the additivity assumption of the MINERVA 2 model would not hold up.
The additivity principle would also not be supported by the data if rhythm and pitch were to combine multiplicatively in the generation of the familiarity signal; for example, if the overall perceived level of familiarity with the test song clip showed an even larger increase than would be expected from simply adding together the increases produced by each feature-type alone, it would suggest that there is added familiarity value to having more than one familiar feature-type in the cue. Such a pattern might exist if the whole were greater than the sum of its parts when it came to the computation of the familiarity signal—that is, if the bundling of multiple familiarized feature-types adds familiarity value over and above the familiarization of those individual feature-types. Such a multiplicative combining of features in the computation of familiarity might make evolutionary sense, insofar as the presence of multiple familiar feature-types might, in and of itself, suggest an even greater likelihood that the current situation has been experienced in the past. Thus, in actuality, unlike in MINERVA 2, there might be added value to having more than one familiarized feature-type in the cue. Experiment 2 also served as a test of this idea.
In Experiment 3, we used a slightly different approach to demonstrate the additivity principle in MINERVA 2 regarding how features combine across memory traces to produce the familiarity signal. Instead of examining how two different isolated feature-types combine across separate encoding episodes to produce the familiarity signal with a whole song clip at test, the same specific instance of a feature-type (i.e., the same rhythm) was simply repeated multiple times throughout the encoding phase. The goal was to examine the level of increase in the perceived familiarity with the whole song clip at test. As shown in Fig. 1, even a single feature-type like a song’s rhythm, if repeated across separate study episodes, should systematically and predictably boost the level of perceived familiarity with the whole in which that feature is embedded at test, specifically in an additive fashion. Our next set of simulations was aimed at demonstrating this manifestation of the additivity principle within MINERVA 2.
In Experiment 3, we examined whether the repetition of a single feature-type—rhythm—across separate study episodes would additively increase the level of perceived familiarity with the whole song clip in which that rhythm was embedded at test. Toward this end, piano song clips presented at test had their rhythms familiarized either zero times (the Rhythm0x condition), one time (the Rhythm1x condition), or three times (the Rhythm3x condition) throughout the study phase. As shown in Fig. 1, relative to the control condition (Rhythm0x), the level of perceived familiarity increase for test songs containing a rhythm that was repeated three times at study (Rhythm3x) should be three times the level of familiarity increase caused by a single rhythm presentation at study (Rhythm1x). Thus, it should adhere to the following formula:

(Rhythm3x cues) − (Rhythm0x cues) = 3 × [(Rhythm1x cues) − (Rhythm0x cues)]
To demonstrate this manifestation of the additivity principle within the MINERVA 2 model (regarding how the same feature set repeated across memory traces should combine across those separate memory traces to affect the overall echo intensity elicited by a cue containing that subset of features), we carried out a set of simulations that are depicted in Fig. 3. This set of simulations was intended to demonstrate, in a proof of principle, that repeating an isolated feature set from a particular song (e.g., the song’s rhythm) across three different memory traces would boost the echo intensity elicited by a cue containing those features by a factor of three as described in the above equation.
For our Experiment 3 simulations, 40 test probes were randomly generated for each of the critical conditions under examination (Rhythm0x, Rhythm1x, Rhythm3x), each consisting of 1,000 features, for a total of 120 test probes. As in our previous simulations, memory traces were created from each randomly generated test probe, this time for the Rhythm1x and Rhythm3x conditions, by using a proportion of the corresponding test probe’s features. Although pitch sequences were not the focus of Experiment 3, a full piano song clip used as the test cue would still contain both pitch and rhythm features; therefore, we examined the same rhythm-to-pitch ratios and noise levels in our Experiment 3 simulations as in Experiment 2's simulations described above (and as depicted in Table 3, Table 4).
Our simulations were meant to mimic the isolation of rhythm features from the full song clip that would serve as the test cue, and the presentation of those isolated rhythms at study, as was done in our Experiment 3. As in our previous proof-of-principle simulation, our focus was again on the two conditions of theoretical interest: the comparison of the Rhythm1x condition to the Rhythm3x condition, to test for adherence to the above formula. Simulations of our full behavioral data patterns are presented in Experiment 3's section. For each memory trace assigned to the Rhythm3x condition, three duplicates (total) of that memory trace were created. Once the test probes and their memory traces were created, overall echo intensities for each condition were calculated, and the process was repeated for a total of 120 simulations to mimic the actual experimental design.
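In the model's arithmetic, storing the same trace three times simply adds its activation three times, which is the proof of principle these simulations establish. A compact sketch under our own simplifying assumptions (a single rhythm trace in memory, no noise, 3:7 rhythm-to-pitch ratio), not the authors' published code:

```python
import numpy as np

rng = np.random.default_rng(11)

def echo_intensity(probe, traces):
    """Sum of cubed, normalized probe-trace similarities (Hintzman, 1986)."""
    total = 0.0
    for trace in traces:
        n = np.count_nonzero((probe != 0) | (trace != 0))
        total += ((probe @ trace) / max(n, 1)) ** 3
    return total

probe = rng.integers(-1, 2, size=1000).astype(float)
rhythm = probe.copy()
rhythm[:700] = 0      # 3:7 rhythm-to-pitch ratio: final 30% are rhythm features

i_1x = echo_intensity(probe, [rhythm])        # rhythm trace stored once
i_3x = echo_intensity(probe, [rhythm] * 3)    # same trace stored three times

# Without noise, duplicating the trace triples its summed activation exactly
assert np.isclose(i_3x, 3 * i_1x)
```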
The density graphs created from the simulations are shown in Fig. 3. The mean echo intensities are depicted in Table 3. To assess whether a given mean echo intensity differed significantly from that of another condition, we performed independent-samples t-tests. The outcomes of the statistical analyses are reported in Table 4, including the Bayes Factors analyses. Because Levene’s Test for Equality of Variances revealed unequal variances (which are also depicted in Fig. 3), t-tests for unequal variances were used (hence the varying degrees of freedom across analyses).
We first compared the echo intensities obtained for the Rhythm1x condition with those obtained for the Rhythm3x condition. As can be seen in Table 4, across all rhythm-to-pitch ratios and noise levels examined, the echo intensities were significantly higher for the Rhythm3x condition than for the Rhythm1x condition, as expected. When a memory trace is repeated three times, the resulting familiarity signal is greater in intensity than if the memory trace occurs only once. Turning now to our primary theoretical question, we compared the mean echo intensity for the Rhythm3x condition with the mean value obtained by multiplying the echo intensity in the Rhythm1x condition by a factor of three, to determine if these values adhere to the following formula:

(Rhythm3x cues) = 3 × (Rhythm1x cues)
Indeed, as shown in Table 4, the evidence across all of our rhythm-to-pitch ratios and noise levels favored the null according to Bayes Factors analyses. Possibly due to the extremely high power resulting from the many iterations of our simulation runs, some p-values approached or reached significance; however, even in these cases, the Bayes Factor analyses indicated that the evidence favored the null.
Collectively, these proof-of-principle simulations demonstrate that a particular isolated feature set (e.g., a song’s isolated rhythm) repeated across three separate memory traces will boost the familiarity signal for a cue in which those features are embedded (e.g., a full song clip in which that rhythm occurs) by roughly a factor of three relative to when that feature set occurs in only one memory trace. The purpose of Experiment 3 was to examine whether the behavioral data conform to this pattern, and then to simulate that specific behavioral data pattern using MINERVA 2. Specifically, do the data conform to the formula:

(Rhythm3x cues) − (Rhythm0x cues) = 3 × [(Rhythm1x cues) − (Rhythm0x cues)]
The above-mentioned additivity assumption would be violated if, for example, repeating the same feature-type (in this case rhythm) throughout the encoding phase leads to no increase in the perceived familiarity of the song in which it is embedded. That is, if once a feature-type is familiarized, repeating that feature leads to no further increase in the level of perceived familiarity with the song clip in which it is embedded, the predictions of the MINERVA 2 model would not be supported.
The assumption would also be violated if the repeated memory traces for the separate rhythm encodings were to combine multiplicatively instead of additively in the generation of the familiarity signal for the test song clip. For example, if the overall perceived level of familiarity with the test song clip showed an even larger increase when the feature-type was repeated three times than would be expected by merely adding together each iteration of familiarity increase expected by a single encoding instance of that feature, it would suggest that there is added familiarity value to repeatedly familiarizing a single feature-type within the cue. Such a pattern might exist if the whole were greater than the sum of its parts when it came to the computation of the familiarity signal—that is, if the repeating of a single feature-type across episodes adds familiarity value over and above the familiarization level of each instance. For example, if there is a mechanism discovered in Experiment 2 for added value of bundling multiple feature-types over and above that predicted by the mere adding together of their expected individual familiarity levels (i.e., a “whole is greater than the sum of its parts” component to the computation of familiarity), that mechanism might be expected to manifest across repetitions of the same feature-type too. In this way, Experiment 3 provides a different means of examining the same question posed in Experiment 2.
Experiment 1
Experiment 1 sought to first establish that embedding familiarized song features (that had been earlier encoded in isolation) within later song clips at test would increase the perceived familiarity of those test song clips. Toward this end, participants studied either isolated song rhythms (a song clip’s rhythm tapped out in a single note on a wood block instrument) or isolated pitch sequences (a song clip’s note order extracted from its original rhythm by adhering those notes to an arbitrary
Results and Discussion
The data were analyzed using traditional null hypothesis significance testing (NHST) and Bayes Factors analyses. Using Bayes Factors analyses alongside NHST allowed us to assess whether the evidence favored the null hypothesis instead of merely failing to reject it (e.g., Kruschke, 2013). In the following results sections, we report Bayes Factors (BFs), which quantify the strength of the evidence for the alternative (BF10) and the null (BF01) hypotheses. Using the recommendations provided by
Experiment 2
Having demonstrated in Experiment 1 that embedding a familiarized feature into a whole song segment at test increases participants’ perceived familiarity with the song segment, Experiment 2 sought to use this basic feature familiarization methodology to test the hypothesis that presenting two separate feature-types in isolation across different study episodes (i.e., presenting isolated rhythm in one study episode and the isolated pitch sequence from the same song but in a different study
Song identification rates
As in Experiment 1, identification rates for isolated features were at floor (see Table 5 for proportions of correctly identified songs at study and at test). Participants’ overall identification rate for songs from their isolated features at study was 1.4% (SD = .02). A repeated-measures ANOVA on Feature-Type indicated that there was a significant difference in identification rates, F(2, 238) = 18.69, MSE = .001, p < .001, BF01 = 1.61 × 10^−6. Participants were significantly more likely to
Experiment 3
The results of Experiment 2 suggest the presence of the additivity principle in our data, as well as in model simulations that were intended to mimic our experimental data. Encoding different feature-types in isolation across study episodes led to an additive increase in perceived familiarity with a test cue containing both of those features. Another prediction pertaining to how features should combine across study episodes to contribute to the familiarity signal is that if a feature set of a
Method
Participants. One hundred twenty-eight Colorado State University undergraduate students completed the experiment for course credit. Six participants were lost due to not finishing or due to computer crashes, leaving 122 participants.
Materials. The stimuli consisted of 96 of the piano song clips and isolated rhythms used in Kostic and Cleary (2009).
Identification rates
Identification of songs from isolated rhythms during encoding was once again at floor. However, the proportion of songs identified at study from isolated rhythms presented once (M = .01, SD = .02) was significantly lower than the proportion of songs identified from isolated rhythms presented three times (M = .01, SD = .03), t(121) = −2.07, SE = .003, p = .04, BF01 = 1.26, 95% CrI [−.36, −.01].
The proportion of piano song clips at test that had had their rhythms studied once (M = .19, SD = .13)
The role of features in the computation of the familiarity signal from music
At the heart of all human memory research is the question of how information is represented in the human mind then used within memory processes. A long-standing prevailing assumption in human memory theory, starting at the dawn of cognitive psychology as a field, is the feature assumption (e.g., Estes, 1950). The feature assumption is the idea that memory traces are essentially sets of tied-together features from the original experience that they are meant to represent or record. Though the
References (71)
The sense of recognition during retrieval failure: Implications for the nature of memory traces. Psychology of Learning and Motivation (2014).
Familiarity from the configuration of objects in 3-dimensional space and its relation to déjà vu: A virtual reality investigation. Consciousness and Cognition (2012).
Quantitative modeling of the neural representation of objects: How semantic feature norms can account for fMRI activation. Neuroimage (2011).
Does the huamn mnid raed wrods as a whole? TRENDS in Cognitive Sciences (2004).
The memory tesseract: Mathematical equivalence between composite and separate storage memory models. Journal of Mathematical Psychology (2017).
A feeling-of-recognition without identification. Journal of Memory and Language (1990).
Strategy selection in question answering. Cognitive Psychology (1987).
The recognition without cued recall phenomenon: Support for a feature-matching theory over a partial recollection account. Journal of Memory and Language (2012).
Recall versus familiarity when recall fails for words and scenes: The differential roles of the hippocampus, perirhinal cortex, and category-specific cortical regions. Brain Research (2013).
The nature of recollection and familiarity: A review of 30 years of research. Journal of Memory and Language (2002).
Association, synonymity, and directionality in false recognition. Journal of Experimental Psychology (1968).
Recognition-by-components: A theory of human image understanding. Psychological Review.
All-or-none learning of attributes. Journal of Experimental Psychology.
A feature-sampling account of the time course of old-new recognition judgments. Journal of Experimental Psychology: Learning, Memory, and Cognition (2000).
The déjà vu experience.
The tip of the tongue state.
A multicomponent theory of the memory trace (1967).
Global matching models of recognition memory: How the models match the data. Psychonomic Bulletin & Review (1996).
Orthography, phonology, and meaning: Word features that give rise to feelings of familiarity. Psychonomic Bulletin & Review.
Déjà vu: An illusion of prediction. Psychological Science.
Recognition without identification. Journal of Experimental Psychology: Learning, Memory, and Cognition.
Memory for unidentified items: Evidence for the use of letter information in familiarity processes. Memory & Cognition.
A postdictive bias associated with déjà vu. Psychonomic Bulletin & Review.
Recognition without picture identification: Geons as components of the pictorial memory trace. Psychonomic Bulletin & Review.
Déjà vu and the feeling of prediction: An association with familiarity strength. Memory (Special Issue on Déjà vu).
Can déjà vu result from similarity to a prior experience? Support for the similarity hypothesis of déjà vu. Psychonomic Bulletin & Review.
Recognition during recall failure: Semantic feature matching as a mechanism for recognition of semantic cues when recall fails. Memory & Cognition.
Auditory recognition without identification. Memory & Cognition.
A dynamic approach to recognition memory. Psychological Review (2017).
Models of recognition: A review of arguments in favor of a dual-process account. Psychonomic Bulletin & Review.
Toward a statistical theory of learning. Psychological Review (1950).
Topics in semantic representation. Psychological Review.
“Schema abstraction” in a multiple-trace memory model. Psychological Review (1988).