Planting misinformation in the human mind: A 30-year investigation of the malleability of memory
Abstract
The misinformation effect refers to the impairment in memory for the past that arises after exposure to misleading information. The phenomenon has been investigated for at least 30 years, as investigators have addressed a number of issues. These include the conditions under which people are especially susceptible to the negative impact of misinformation and, conversely, when they are resistant. Warnings about the potential for misinformation sometimes work to inhibit its damaging effects, but only under limited circumstances. The misinformation effect has been observed in a variety of human and nonhuman species. And some groups of individuals are more susceptible than others. At a more theoretical level, investigators have explored the fate of the original memory traces after exposure to misinformation appears to have made them inaccessible. This review of the field ends with a brief discussion of the newer work on misinformation that has explored the processes by which people come to believe, falsely, that they experienced rich, complex events that never, in fact, occurred.
In 2005 the journal Learning & Memory published the first experimental work using neuroimaging to reveal the underlying mechanisms of the “misinformation effect,” a phenomenon that had captured the interest of memory researchers for over a quarter century (Okado and Stark 2005). The investigators used a variation of the standard three-stage procedure typical of studies of misinformation. Their subjects first saw several complex events, for example one involving a man stealing a girl's wallet. Next, some of the subjects received misinformation about the event, such as the suggestion that the girl's arm was hurt in the process (when in fact it was her neck). Finally, the subjects were asked to remember what they saw in the original event. Many claimed that they saw the misinformation details in the original event. For example, they remembered seeing the arm being hurt, not the neck. Overall, the misinformation was remembered as being part of the original event about 47% of the time. So, as expected, exposure to misinformation produced a robust impairment of memory: the misinformation effect. But the researchers' work had a twist: They went on to show that the neural activity that occurred while the subjects processed the events, and later the misinformation, predicted whether a misinformation effect would occur.
In an essay that accompanied the Okado and Stark findings, I placed their results within the context of 30 years of research on behavioral aspects of the misinformation effect (Loftus 2005). Their work received much publicity, and boosted public interest in the misinformation effect as a scientific phenomenon. For example, WebMD (Hitti 2005) touted the new findings showing that brain scans can predict whether the memories would be accurate or would be infected with misinformation. And the Canadian press applauded the study as being the first to investigate how the brain encodes misinformation (Toronto Star 2005).
So what do we know about the misinformation effect after 30 years? The degree of distortion in memory observed in the Okado and Stark neuroimaging study has been found in hundreds of studies involving a wide variety of materials. People have recalled nonexistent objects such as broken glass. They have been misled into remembering a yield sign as a stop sign, hammers as screwdrivers, and even something large, like a barn, that was not part of the bucolic landscape past which an automobile happened to be driving. Details have been planted into memory for simulated events that were witnessed (e.g., a filmed accident), but also into memory for real-world events, as when wounded animals (that were not seen) were planted into memory for the scene of a tragic terrorist bombing that had actually occurred in Russia a few years earlier (Nourkova et al. 2004). The misinformation effect is the name given to the change (usually for the worse) in reporting that arises after receipt of misleading information. Over its now substantial history, many questions about the misinformation effect have been addressed, and findings bearing on a few key ones are summarized here.
- Under what conditions are people particularly susceptible to the negative impact of misinformation? (The When Question)
- Can people be warned about misinformation, and successfully resist its damaging influence?
- Are some types of people particularly susceptible? (The Who Question)
- When misinformation has been embraced by individuals, what happens to their original memory?
- What is the nature of misinformation memories?
- How far can you go with people in terms of the misinformation you can plant in memory?
The When Question
Long ago, researchers showed that certain experimental conditions are associated with greater susceptibility to misinformation. So, for example, people are particularly prone to having their memories affected by misinformation when it is introduced after the passage of time has allowed the original event memory to fade (Loftus et al. 1978). One reason this may be true is that with the passage of time, the event memory is weakened, and thus there is less likelihood that a discrepancy is noticed while the misinformation is being processed. In the extreme, with very long intervals between an event and subsequent misinformation, the event memory might be so weak that it is as if it had not been presented at all. No discrepancy between the misinformation and the original memory would be detected, and the subject might readily embrace the misinformation. These ideas led to the proposal of a fundamental principle for determining when changes in recollection after misinformation would occur: the Discrepancy Detection principle (Tousignant et al. 1986). It essentially states that recollections are more likely to change if a person does not immediately detect discrepancies between misinformation and memory for the original event. Of course, it should be kept in mind that false memories can still occur even if a discrepancy is noticed. The rememberer sometimes thinks, “Gee, I thought I saw a stop sign, but the new information mentions a yield sign; I guess I must be wrong and it was a yield sign” (Loftus and Hoffman 1989).
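To make the logic of the principle concrete, here is a minimal toy model (my own illustration, not a published model): the original trace is assumed to fade exponentially with delay, discrepancy detection is assumed to track the remaining trace strength, and misinformation is accepted whenever no discrepancy is detected. The forgetting rate `k` is a made-up parameter.

```python
import math

def p_accept_misinformation(delay_days: float, k: float = 0.2) -> float:
    """Toy model: probability of accepting misinformation as a function of
    the event-to-misinformation delay. All parameters are illustrative."""
    trace_strength = math.exp(-k * delay_days)  # fading event memory
    p_detect = trace_strength                   # detection tracks trace strength
    return 1.0 - p_detect                       # accept if no discrepancy detected

# Longer delays yield higher predicted acceptance of misinformation:
for d in (0, 1, 7, 30):
    print(f"delay {d:>2} days -> p(accept) = {p_accept_misinformation(d):.2f}")
```

The sketch only shows how the principle links forgetting to susceptibility; fitting real data would require estimating the forgetting and detection functions empirically.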
The other important time interval is the period between the misinformation and the test. One study asked subjects to say whether a key item was part of the event only, part of the misinformation only, in both parts, or in neither. Misinformation effects occur when subjects say that the item was part of the event only, or that it was in both parts. Overall, subjects were slightly more likely to say “both” (22%) than “event only” (17%). But the timing of the test affected these rates. With a short interval between the misinformation and the test, subjects were less likely to claim that the misinformation item was in the event only (Higham 1998). This makes sense: If subjects have recently read the misinformation, they might well remember doing so when tested, while at the same time incorrectly believing that they also saw the misinformation detail during the original event.
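A small sketch may clarify how responses on this four-alternative source test are scored. The data are hypothetical; only the 22% “both” and 17% “event only” rates come from the text (Higham 1998), and the remaining response counts are invented filler.

```python
from collections import Counter

# Hypothetical responses for one misled item on the four-alternative
# source test. Only the "both" (22%) and "event only" (17%) rates are
# from the text; the other two categories are invented for illustration.
responses = (["both"] * 22 + ["event only"] * 17 +
             ["misinformation only"] * 41 + ["neither"] * 20)

counts = Counter(responses)
n = len(responses)

# A misinformation effect is scored whenever the subject places the misled
# detail in the original event: "event only" or "both".
rate = (counts["both"] + counts["event only"]) / n
print(f"both: {counts['both']/n:.0%}, event only: {counts['event only']/n:.0%}, "
      f"misinformation effect: {rate:.0%}")
```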
Temporarily changing someone's state can increase misinformation effects. So, for example, if people are led to believe that they have drunk alcohol, they are more susceptible (Assefi and Garry 2002), and when people are hypnotized, they are more susceptible (Scoboria et al. 2002). These temporary states may disrupt the ability of subjects to detect discrepancies between the misinformation and what remains of their original memory.
Warnings
Long ago, researchers showed that warning people that they might in the future be exposed to misinformation sometimes helps them resist it. However, a warning given after the misinformation had been processed did not improve the ability to resist its damaging effects (Greene et al. 1982). The lack of effectiveness of post-misinformation warnings presumably occurred because the misinformation had already been incorporated into memory, and an altered memory now existed in the mind of the individual. The research on warnings fits well with the Discrepancy Detection principle. If people are warned prior to reading post-event information that the information might be misleading, they can better resist its influence, perhaps because the warning increases the likelihood that they scrutinize the post-event information for discrepancies.
More recent work suggests that warning people that they may have in the past been exposed to misinformation (post-misinformation warnings) can have some success, but only in limited circumstances. In one study, an immediate post-misinformation warning helped subjects resist the misinformation, but only when the misinformation was in a relatively low state of accessibility. With highly accessible misinformation, the immediate post-misinformation warnings didn't work at all. (The accessibility of misinformation can be enhanced by presenting it multiple times rather than a single time.) Moreover, it didn't seem to matter whether the warning was quite general or item-specific (Eakin et al. 2003). The general warning informed subjects that the narrative they had read referred to some objects and events from the slides in an inaccurate way. The specific warning explicitly mentioned the misleading details (e.g., subjects would be told the misinformation was about the tool). Eakin et al. explained these results with several hypotheses. They favored a suppression hypothesis, which states that when people get a warning, they suppress the misinformation so that it has less ability to interfere with answering on the final test. Moreover, they suggested that the entire context of the misinformation might be suppressed by the warning. Suppression might have more trouble working when misinformation is highly accessible. Also, highly accessible misinformation might keep the subject from thinking to scrutinize it for discrepancies with a presumably overwhelmed original event memory.
The Who Question
Misinformation affects some people more than others. For one thing, age matters. In general, young children are more susceptible to misinformation than are older children and adults (see Ceci and Bruck 1993). Moreover, the elderly are more susceptible than are younger adults (Karpel et al. 2001; Davis and Loftus 2005). These age effects may be telling us something about the role of cognitive resources, since we also know that misinformation effects are stronger when attentional resources are limited. In thinking about these age effects, it should be emphasized that suggestion-induced distortion in memory occurs in people of all ages, even if it is more pronounced in certain age groups.
In terms of personality variables, several have been shown to be associated with greater susceptibility to misinformation, such as empathy, absorption, and self-monitoring. The more self-reported lapses in memory and attention one has, the more susceptible one is to misinformation effects. So, for example, Wright and Livingston-Raper (2002) showed that about 10% of the variance in susceptibility to misinformation is accounted for by dissociation scores, which measure the frequency of such experiences as not being able to remember whether one did something or merely thought about doing it (see Davis and Loftus 2005 for a review of these personality variables).
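As a quick arithmetic aside (my sketch; the 10% figure is from the study, and the conversion to a correlation is standard), “variance accounted for” is the squared Pearson correlation, so the implied association is modest:

```python
import math

# "10% of the variance accounted for" is r-squared, so the implied
# correlation between dissociation scores and susceptibility is:
r = math.sqrt(0.10)
print(f"r = sqrt(0.10) = {r:.2f}")  # about 0.32, a modest association
```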
Interestingly, misinformation effects have also been obtained with some unusual subject samples, including three-month-old infants (Rovee-Collier et al. 1993), gorillas (Schwartz et al. 2004), and even pigeons and rats (Harper and Garry 2000; M. Garry and D.N. Harper, in prep.). One challenging aspect of these studies is finding ways to determine that misinformation has taken hold in species that are unable to explicitly say so. Take pigeons, for example. They have an amazing ability to remember pictures that they were shown as long as two years earlier (Vaughan and Greene 1983, 1984). But their otherwise good memory can be disrupted by misinformation. In two different studies, Harper and Garry examined misinformation effects in pigeons by using an entirely visual paradigm (see also M. Garry and D.N. Harper, in prep.). First, the pigeons saw a light (let's say a red light). They had been trained over many, many trials to peck the light to show that they had paid attention to it. After they pecked the light, it turned off. After a delay, the pigeons were exposed to post-event information: they saw either the same colored light or a different colored light. They had to peck this light, too. Then came the test: The pigeons saw the original light and a novel colored light. If they pecked the originally correct color, they got food. If they pecked the novel color, they got no food. The pigeons were more accurate when the post-event experience did not mislead them. Moreover, like humans, pigeons are more susceptible to misinformation if it occurs late in the interval between the original event and the final test than if it occurs early in that interval. M. Garry and D.N. Harper (in prep.) make the point that knowing that pigeons and humans respond the same way to misleading information provides more evidence that the misinformation effect is not just a simple matter of retrograde interference. Retrograde interference is a mere disruption in performance, not a biasing effect. That is, it typically makes memory worse, but does not pull for any particular wrong answer. But because pigeons, like humans, use the misinformation differentially depending on when they are exposed to it, the misinformation appears to have a specific biasing effect, too. The observation of a misinformation effect in nonverbal creatures also suggests that misinformation effects are not a product of mere demand characteristics. That is, they are not produced by subjects who give a response just to please the experimenter, even when it is not the response they actually believe.
The fate of the original memory
One of the most fundamental questions one can ask about memory concerns the permanence of our long-term memories. If information makes its way into our long-term memories, does it stay there permanently even when we can't retrieve it on demand? Or do memory traces, once stored, become susceptible to decay, damage, or alteration? In this context, we can pose the more specific question: When misinformation is accepted and incorporated into a person's recollection, what happens to the original memory? Does the misinformation impair the original memory, perhaps by altering the once-formed traces? Or does the misinformation cause retrieval impairment, possibly by making the original memory less accessible?
A lively debate developed in the 1980s when several investigators rejected the notion that misinformation causes any type of impairment of memory (McCloskey and Zaragoza 1985). Instead, they explicitly and boldly pressed the idea that misinformation had no effect on the original event memory. Misinformation, according to this view, merely influences the reports of subjects who never encoded (or for other reasons can't recall) the original event. Instead of guessing at what they saw, these subjects would be lured into producing the misinformation response. Alternatively, the investigators argued that misinformation effects could arise because subjects remember both sources of information but select the misleading information because, after deliberation, they conclude it must be correct.
To support their position, McCloskey and Zaragoza (1985) devised a new type of test. Suppose the subjects saw a burglar pick up a hammer and received the misinformation that it was a screwdriver. The standard test would allow subjects to select between a hammer and a screwdriver. On the standard test, control subjects who had not received the misinformation would tend to select the hammer. Many subjects exposed to misinformation (called misled subjects) would, of course, select the screwdriver, producing the usual misinformation effect. In the new test, called the “Modified Test,” the misinformation option is excluded as a response alternative. That is, the subjects have to choose between a hammer and a totally novel item, a wrench. With the modified test, subjects were very good at selecting the original event item (the hammer, in this example), leading McCloskey and Zaragoza to argue that it was not necessary to assume any memory impairment at all: neither impairment of traces nor impairment of access to traces. Yet later analyses of a collection of studies using the modified test showed that small misinformation effects were obtained even when these unusual types of tests were employed (Ayers and Reder 1998), and even when nonverbal species were the subjects of the experiments.
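The contrast between the two test formats can be laid out in a few lines of code (my reconstruction of the design using the example items from the text, not the authors' materials):

```python
# Items from the example above.
event_item = "hammer"          # actually seen in the event
misinfo_item = "screwdriver"   # suggested in the post-event narrative
novel_item = "wrench"          # never presented anywhere

standard_test = (event_item, misinfo_item)  # misled subjects often pick "screwdriver"
modified_test = (event_item, novel_item)    # the misinformation option is removed

# McCloskey and Zaragoza's logic: if misinformation had erased or blocked the
# original trace, misled subjects should pick "hammer" on the modified test
# no more often than chance; in fact they chose it reliably.
print(standard_test, modified_test)
```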
While space is too limited to present the myriad paradigms that were devised by investigators wishing to explore the fate of the original memory (e.g., Wagenaar and Boer 1987; Belli 1989; Tversky and Tuchin 1989), suffice it to say that the entire debate heightened appreciation for the different ways by which people come to report a misinformation item as their memory. Sometimes this occurs because they have no original memory (it was never stored or it has faded). Sometimes this occurs because of deliberation. And sometimes it appears as if the original event memories have been impaired in the process of contemplating misinformation. Moreover, the idea that you can plant an item into someone's memory (apart from whether you have impaired any previous traces) was downright interesting in its own right.
The nature of misinformation memories
Subjectively, what are misinformation memories like? One attempt to explore this issue compared the memories of subjects who had actually seen a yield sign in a simulated traffic accident with the memories of other subjects who had not seen the sign but had had it suggested to them (Schooler et al. 1986). The verbal descriptions of the “unreal” memories were longer, contained more verbal hedges (“I think I saw...”), more references to cognitive operations (“After seeing the sign the answer I gave was more of an immediate impression...”), and fewer sensory details. Thus, statistically, a group of real memories might be distinguished from a group of unreal ones. Of course, many of the unreal memory descriptions contained verbal hedges and sensory detail, making it extremely difficult to take a single memory report and reliably classify it as real or unreal. (Much later, neurophysiological work would attempt to distinguish real from unreal memories, a point we return to below.)
A different approach to the nature of misinformation memories came from the work of Zaragoza and Lane (1994), who asked this question: Do people confuse the misleading suggestions for their “real memories” of the witnessed event? They asked this question because of the real possibility that subjects could be reporting misinformation because they believed it was true, even if they had no specific memory of seeing it. After numerous experiments in which subjects were asked very specific questions about their memory for the source of suggested items that they were embracing, the investigators concluded that misled subjects definitely do sometimes come to remember seeing things that were merely suggested to them. They referred to the phenomenon as the “source misattribution effect.” But they also noted that the size of the effect can vary, and emphasized that source misattributions are not inevitable after exposure to suggestive misinformation.
How much misinformation can you plant in one mind? Rich false memories
It is one thing to change a stop sign into a yield sign, to make a person believe that a crime victim was hurt in the arm instead of the neck, or to add a detail to an otherwise intact memory. But it is quite another thing to plant an entire memory for an event that never happened. Researchers in the mid-1990s devised a number of techniques for planting whole events, or what have been called “rich false memories.” One study used scenarios made up by relatives of subjects, and planted false memories of being lost for an extended time in a shopping mall at age 6 and rescued by an elderly person (Loftus 1993; Loftus and Pickrell 1995). Other studies used similar methods to plant a false memory that as a child the subject had had an accident at a family wedding (Hyman Jr. et al. 1995), had been a victim of a vicious animal attack (Porter et al. 1999), or that he or she had nearly drowned and had to be rescued by a lifeguard (Heaps and Nash 2001).
Sometimes subjects will start with very little memory, but after several suggestive interviews filled with misinformation they will recall the false events in quite a bit of detail. In one study, a subject received the suggestion that he or she went to the hospital at age 4 and was diagnosed as having low blood sugar (Ost et al. 2005). At first the subject remembered very little: “... I can't remember anything about the hospital or the place. It was the X general hospital where my mum used to work? She used to work in the baby ward there... but I can't... no. I know if I was put under hypnosis or something I'd be able to remember it better, but I honestly can't remember.” Yet in the final interview in week 3, the subject developed a more detailed memory and even incorporated thoughts at the time into the recollection: “... I don't remember much about the hospital except I know it was a massive, huge place. I was 5 years old at the time and I was like ‘oh my God I don't really want to go into this place, you know it's awful’... but I had no choice. They did a blood test on me and found out that I had a low blood sugar...”
Taken together, these studies show the power of this strong form of suggestion. It has led many subjects to believe, or even remember in detail, events that did not happen, that were completely manufactured with the help of family members, and that would have been traumatic had they actually happened.
Some investigators have called this strong form of suggestion the “familial informant false narrative procedure” (Lindsay et al. 2004); others find the term awfully cumbersome, and prefer to simply call the procedure the “lost-in-the-mall” technique, after the first study that used it. Across the many studies that have now utilized the “lost-in-the-mall” procedure, an average of ∼30% of subjects have gone on to produce either a partial or complete false memory (Lindsay et al. 2004). Other techniques, such as those involving guided imagination (see Libby 2003 for an example), suggestive dream interpretation, or exposure to doctored photographs, have also led subjects to believe falsely that they experienced events in their distant and even their recent past (for review, see Loftus 2003).
A concern about the recent work showing the creation of very rich false beliefs and memories is that these might reflect true experiences that have been resurrected from memory by the suggestive misinformation. To counter that concern, some investigators have tried to plant implausible or impossible false memories. In several studies, subjects were led to believe that they had met Bugs Bunny at a Disney resort after exposure to fake Disney ads that featured Bugs Bunny. An example of an ad containing the false Bugs Bunny information is shown in Figure 1; subjects simply evaluated the ad on a variety of characteristics. In one study, a single fake ad led 16% of subjects to later claim that they had met him (Braun et al. 2002), which could not have occurred because Bugs Bunny is a Warner Brothers character and would not be seen at a Disney resort. Later studies showed even higher rates of false belief, and showed that ads containing a picture of Bugs produced more false memories than ads containing only a verbal mention (Braun-LaTour et al. 2004). While obviously less complex, these studies dovetail nicely with real-world examples in which individuals have come to develop false beliefs or memories for experiences that are implausible or impossible (e.g., alien abduction memories, as studied by McNally and colleagues 2004).
Concluding remarks
Misinformation can cause people to falsely believe that they saw details that were only suggested to them. Misinformation can even lead people to have very rich false memories. Once people have embraced such false memories, they can express them with confidence and detail. There is a growing body of work using neuroimaging techniques to help locate parts of the brain that might be associated with true and false memories, and these studies reveal similarities and differences in the neural signatures (e.g., Curran et al. 2001; Fabiani et al. 2000). Those with strong interests in neuroscience will find interesting the recent neuroimaging and electrophysiological studies suggesting that sensory activity is greater for true recognition than for false recognition (Schacter and Slotnick 2004). These studies suggest, more explicitly, that the hippocampus and a few cortical regions come into play when people claim to have seen things that they didn't see. But keep in mind that for the most part these studies are done with relatively pallid sorts of true and false memories (e.g., large collections of words or simple pictures). With the Okado and Stark (2005) neuroimaging investigation of misinformation, we are one step closer to developing techniques that might enable us to use neural activity to tell whether a report about a complex event is probably based on a true experience or is based on misinformation. We are still, however, a long way from a reliable assessment when all we have is a single memory report to judge.
In the real world, misinformation comes in many forms. When witnesses to an event talk with one another, when they are interrogated with leading questions or suggestive techniques, or when they see media coverage about an event, misinformation can enter consciousness and contaminate memory. These are not, of course, the only sources of distortion in memory. As we retrieve and reconstruct memories, distortions can creep in without explicit external influence, and these can become pieces of misinformation. This might be a result of inference-based processes or some automatic process, and it can perhaps help us understand the distortions we see in the absence of explicit misinformation (e.g., Schmolck et al.'s [2000] distortions in recollections of the O.J. Simpson trial verdict).
An obvious question arises as to why we would have evolved to have a memory system that is so malleable in its absorption of misinformation. One observation is that the “updating” seen in the misinformation studies is the same kind of “updating” that allows for the correction of erroneous memories. Correct information can supplement or overwrite previously stored errors, and this, of course, is a good thing. Whatever the misinformation effect reveals about normal memory processes, one thing is clear: the practical implications are significant. The obvious relevance to legal disputes and other real-world activities makes it understandable why the public would want to understand more about the misinformation effect and what it tells us about our malleable memories.
Footnotes
- Article published online ahead of print. Article and publication date are at http://www.learnmem.org/cgi/doi/10.1101/lm.94705.