Dhamma

Monday, March 18, 2024

Seeing psi


It is as fatal as it is cowardly to blink facts because they are not to our taste.

—John Tyndall (1820-1893)

Given the substantial historical, anecdotal, and experimental evidence for psi, why do some intelligent people positively bristle at the mere suggestion that the evidence for psi be taken seriously? After all, scientists studying psi simply claim that every so often they find interesting evidence for strange sorts of perceptual and energetic anomalies. They’re not demanding that we also believe aliens have infiltrated the staff of the White House. Still, some people continue to insist that “there’s not a shred of evidence” for psi. Why can’t they see that there are thousands of shreds that, after we combine the weft of experiences and the warp of experiments, weave an immense, enchanting fabric?

The answer is contained in the odd fact that we do not perceive the world as it is, but as we wish it to be.1 We know this through decades of conventional research in perception, cognition, decision making, intuitive judgment, and memory. Essentially, we construct mental models of a world that reflect our expectations, biases, and desires, a world that is comfortable for our egos, that does not threaten our beliefs, and that is consistent, stable, and coherent.

In other words, our minds are “story generators” that create mental simulations of what is really out there. These models inevitably perpetuate distortions, because what we perceive is influenced by the hidden persuasions of ideas, memory, motivation, and expectations. An overview of how we know this helps clarify why we should be skeptical of both overly enthusiastic claims of psychic experiences and overly enthusiastic skeptical criticisms, and why controversy over the existence of psi has persisted in spite of a century of accumulating scientific evidence.

The bottom line is that if we do not expect to see psi, we won’t. And because our world will not include it, we will reach the perfectly logical conclusion that it does not exist. Therefore anyone who claims that it does is just stupid, illogical, or irrational. Of course, the opposite is also true. If we expect to see psi everywhere, then our world will be saturated with psychic phenomena. Just as uncritical skepticism can turn into paranoia and cynicism, uncritical belief can turn into an obsessive preoccupation with omens, signs, and coincidences. Neither extreme is a particularly balanced or well-integrated way of dealing with life’s uncertainties.

Four Stages Redux

This book opened with a listing of the four stages by which we accept new ideas. In Stage 1 the idea is flat-out impossible. By Stage 2 it is possible, but weak and uninteresting. In Stage 3 the idea is important, and the effects are strong and pervasive. In Stage 4 everyone thinks that he or she thought of it first. Later, no one remembers how contentious the whole affair was.

These same four stages are closely associated with shifts in perception and expectation. In Stage 1, expectations based on prior convictions prevent us from seeing what is out there. At this stage, because “it” can’t be seen, then of course “it” is impossible. Any evidence to the contrary must be flawed, even if no flaw can be specified. The stronger our expectation, the stronger our conviction is that we are correct.

In Stage 2, after our expectations have been tweaked by repeated exposure to new experiences or to overwhelming evidence, we may begin to see “it,” but only weakly, sporadically, and with strong distortions. At this stage, we sense that something interesting is going on, but because it is not well understood, we can’t perceive it clearly. Authorities declare that it may not amount to much, but whatever it is, it might be prudent to take it seriously.

In Stage 3, after someone shows how it must be there after all, either through a new theoretical development or through the unveiling of an obvious, practical application, then suddenly the idea and its implications are obvious. Moreover, if the idea is truly important, it will seem to become omnipresent. After this stage, all sorts of new unconscious tactics come into play, like retrocognitive memory distortion (revisionist history), and a whole new set of expectations arises. Inevitably, mental scaffolding begins to take shape that blocks perception of future new ideas. History shows that this cycle is repeated over and over again.

Effects of Prior Convictions

A classic experiment by psychologists J. S. Bruner and Leo Postman demonstrated that sometimes what we see—or think we see—is not really there.2 Bruner and Postman created a deck of normal playing cards, except that some of the suit symbols were color-reversed. For example, the queen of diamonds had black-colored diamonds instead of red. The special cards were shuffled into an ordinary deck, and then as they were displayed one at a time, people were asked to identify them as fast as possible. The cards were first shown very briefly, too fast to identify them accurately. Then the display time was lengthened until all the cards could be identified. The amazing thing is that while all the cards were eventually identified with great confidence, no one noticed that there was anything out of the ordinary in the deck.

People saw a black four of hearts as either a four of spades or as a normal four of hearts with red hearts. In other words, their expectations about what playing cards should look like determined what they actually saw. When the researchers increased the amount of time that the cards were displayed, some people eventually began to notice that something was amiss, but they did not know exactly what was wrong. One person, while directly gazing at a red six of spades, said, “That’s the six of spades but there’s something wrong with it—the black spade has a red border.”3 As the display time increased even more, people became more confused and hesitant. Eventually, most people saw what was before their eyes. But even when the cards were displayed for forty times the length of time needed to recognize normal playing cards, about 10 percent of the color-reversed playing cards were never correctly identified by any of the people!

The mental discomfort associated with seeing something that does not match our expectations is reflected in the exasperation of one participant in the experiment who, while looking at the cards, reported, “I can’t make the suit out, whatever it is. It didn’t even look like a card that time. I don’t know what color it is now or whether it’s a spade or a heart. I’m not even sure what a spade looks like. My God!”

Studies like this in the 1950s led psychologist Leon Festinger and his colleagues at Stanford University to develop the idea of cognitive dissonance.4 This is the uncomfortable feeling that develops when people are confronted by “things that shouldn’t ought to be, but are.” If the dissonance is sufficiently strong, and is not reduced in some way, the uncomfortable feeling will grow, and that feeling can develop into anger, fear, and even hostility. A pathological example of unresolved cognitive dissonance is represented by people who blow up abortion clinics in the name of Jesus. Also, to avoid unpleasant cognitive dissonance people will often react to evidence that disconfirms their beliefs by actually strengthening their original beliefs and creating rationalizations for the disconfirming evidence.

The drive to avoid cognitive dissonance is especially strong when the belief has led to public commitment. Because the primary debunkers of psi phenomena are publicly committed to their views through their affiliation with skeptics organizations, we can better understand some of the tactics they have used to reduce their cognitive dissonance.

Reducing Cognitive Dissonance

There are three common strategies for reducing cognitive dissonance. One way is to adopt what others believe. Parents often see this change in their children when they begin school. Children rapidly conform to groupthink, and after a few years, they need this particular pair of shoes, and that particular haircut, and this video game, or they will simply die. Children are not just imagining their strong needs for this or that fad. Even in young children, the need to conform to social pressure can be as painful as physical pain. Likewise, a college student faced with trying to please a skeptical professor will soon come to agree that anyone who believes in all that “New Age bunk,” or psi, is either mentally unstable or stupid.

A second way of dealing with cognitive dissonance is to apply pressure to people who hold different ideas. This explains why mavericks are often shunned by more conventional scientists and why there is almost no public funding of psi research. In totalitarian regimes, the heretics are simply tracked down and eliminated. To function without the annoying pain of cognitive dissonance, groups will use almost any means to achieve consensus.

The third way of reducing cognitive dissonance is to make the person who holds a different opinion significantly different from oneself. This is where disparaging labels like “heretic” and “pseudoscientist” come from. The heretic is stupid, malicious, foolish, sloppy, or evil, so his opinion does not matter. Or she has suspicious motives, or she believes in weird practices, or she looks different. The distressing history of how heretics were treated in the Middle Ages and the more recent “ethnic cleansings” of the last half-century remind us that witch-hunts are always just below the veneer of civility. The human psyche fears change and is always struggling to maintain the status quo.5 Vigorous struggles to promote the “one right” interpretation of the world have existed as long as human beings have held opinions. As history advances, and we forget the cost in human suffering, old controversies begin to look ridiculous. For example, an explosive controversy in the Middle Ages was whether God the Father and God the Son had the same nature or merely a similar nature. Hundreds died over that debate.6

Cognitive Dissonance and Psi

When we are publicly committed to a belief, it is disturbing even to consider that any evidence contradicting our position may be true—because public ridicule adds to the unpleasantness of cognitive dissonance. This is one reason that the psi controversy has persisted for so long. It also helps to explain why it is much easier to be a skeptic than it is to be a researcher investigating unusual effects. Skeptics may be overly conservative, but if they are ultimately proved wrong they can just smile and shrug it off and say “Whoops, I guess I was wrong. Sorry!” By contrast, frontier scientists are often blindly attacked as though their findings represented a virus that must be extinguished from the existing “body” of knowledge at all costs.

Commitment stirs the fires of cognitive dissonance and makes it progressively more difficult to even casually entertain alternative hypotheses. This is as true for proponents as it is for skeptics. Cognitive dissonance is also one of the main reasons that many scientists dismiss the evidence provided by psi experiments without even examining it. In science, said the philosopher of science Thomas Kuhn, “novelty emerges only with difficulty, manifested by resistance, against a background provided by expectation. Initially only the anticipated and usual are experienced even under circumstances where an anomaly is later to be observed.”7 This means that in the initial stages of a new discovery, when a scientific anomaly is first claimed, it literally cannot be seen by everyone. We have to change our expectations in order to see it. When one scientist claims to see something unusual, another scientist who is intrigued by the claim, but does not believe it yet, will simply fail to see the same effect.

Kuhn illustrated this bewildering state of affairs with the case of Sir William Herschel’s discovery of the planet Uranus. Uranus was observed at least seventeen times by different astronomers from 1690 to 1781. None of the observations made any sense if the object was a star, which was the prevailing assumption about most lights in the sky at the time, until Herschel suggested that the “star” might have been in a planetary orbit. Then it suddenly made sense. After this shift in perception, caused by a new way of thinking about old observations, suddenly everyone was seeing planets.8

The same was true for studies of subliminal perception in the 1950s. Not all early experimenters could get results. No theory could account for the bizarre claim that something could be seen without being aware that it was being seen. But once computer-inspired information-processing models were developed, with their accompanying metaphors about information being processed simultaneously at different levels, then suddenly subliminal processing was acceptable and the effects were observable.9

The effect of shifting perceptions was observed more recently when high-temperature superconductors were unexpectedly discovered in 1986. Soon afterward, superconducting temperatures previously considered flatly impossible were being reported regularly. The same had occurred with lasers. It took decades to get the first lasers to work; then suddenly everything was “lasing.” It took decades to get the first crude holograms to work, and now they are put on cereal boxes by the millions. Some of these changes were the result of advancements in understanding the basic phenomenon, but those advancements could not occur until expectations about what was possible had already changed.

Another famous and poignant example is the case of German meteorologist Alfred Wegener. In 1915 Wegener published a “ludicrous” theory that the earth’s continents had once been a single, contiguous piece. Over millions of years, he claimed, the single continent split into several pieces, which then drifted apart into their current configuration. Wegener’s theory, dubbed “continental drift,” was supported by an extensive amount of carefully cataloged geological evidence. Still, his British and American colleagues laughed and called the idea impossible, and Wegener died an intellectual outcast in 1930. Today, every schoolchild is taught his theory, and by simply taking the time to examine a world map, we can now observe that Wegener’s impossible theory is entirely self-evident.10

Expectancy Effects

I know I’m not seeing things as they are, I’m seeing things as I am.

—Laurel Lee

In attempting to understand how intelligent scientists could seriously propose criticisms of psi research that were blatantly invalid, sociologist Harry Collins showed that for controversial scientific topics where the mere existence of a phenomenon has been in question, scientific criticisms are almost completely determined by critics’ prior expectations. That is, criticisms are often unrelated to the actual results of experiments. For example, Collins showed that in the case of the search for gravity waves (hypothetical forces that “carry” gravity), reviewers’ assessment of the competency of experiments conducted by proponents and critics depended entirely on the reviewers’ expectations of what effects they thought should have been observed.11

The expectancy effect has also been observed in experimental studies by Stanford University social psychologists Lee Ross and Mark Lepper. They found that precisely the same experimental evidence shown to a group of reviewers tended to polarize them according to their initial positions.12 Studies conforming to the reviewers’ preconceptions were seen as better designed, as more valid, and as reaching more adequate conclusions. Studies not conforming to prior expectations were seen as flawed, invalid, and reaching inadequate conclusions. Sound familiar?

This “perseverance effect” has been a major stumbling block for parapsychology. Collins and sociologist Trevor Pinch studied how conventional scientists have reacted to claims of experimental evidence for psi phenomena. In an article they wrote that focused on issues of social psychology, and in which they explicitly stated that their own position was entirely neutral with regard to the existence of psi, they received

a spleenful letter from a well known professional magician-and-sceptic which attempts to persuade us to change our attitude to research in the paranormal and claims that: “Seriously, how men of science such as yourselves can make excuses for … [the proponents’] incompetence is a matter of astonishment to me. … I was shocked at your paper; I had expected science rather than selective reporting.”13

Reviewer bias is not just evident in skeptics’ reviews of psi research; it is endemic in all scientific controversies. This is especially true for controversies concerned with questions about morality or mortality. For example, science becomes muddled with politics when we seek answers to difficult questions such as whether herbal remedies should be used to treat cancer, or whether nuclear power is safe, or whether a particular concentration of benzene or asbestos in the workplace is tolerable.

A reviewer’s judgment of a researcher’s level of competency is often established on the basis of who produced the results rather than on independent assessments of the experimental methods. For example, results reported by “prominent professors at Princeton University” will be viewed as more credible than identical results reported by a junior staff member at “East Central Southwestern Community College.”

Ultimately, it seems that scientific “truth,” at least for controversial topics, is not determined as much by experiment, or replication, or any other method listed in the textbooks, as by purely nonscientific factors. These include rhetoric, ad hominem attack, institutional politics, and battles over limited funding. In short, scientists are human. Assuming that scientists act rationally when faced with intellectual or economic pressures is a mistake.

Sociologist Harry Collins calls one element of this problem about getting to the “truth” of controversial matters the experimenters’ regress. This is an exasperating catch-22 that occurs when the correct outcome of an experiment is unknown. To settle the question under normal circumstances, where results are predicted by well-accepted theory, the outcome of a single experiment can be examined to see if it matches the expectation. If it does, the experiment was obviously correct. If not, it wasn’t.

In cases like parapsychology, to know whether the experiment was well performed, we first need to know whether psi exists. But to know whether psi exists, we need to run the right experiment. But to run the right experiment, we need a well-accepted theory. But … And so on. This forms an infinite, potentially unbreakable loop. In particular, this loop can continue unresolved in spite of the application of strict scientific methods. In an attempt to break the experimenters’ regress, skeptics often argue that the phenomenon does not exist. Of course, to do that they must rely on invalid, nonscientific criticisms, because there is plenty of empirical evidence to the contrary.

It is difficult to detect purely rhetorical tactics unless one is deeply familiar with both sides of a debate. As Collins put it:

Without deep and active involvement in controversy, and/or a degree of philosophical self-consciousness about the social process of science (still very unusual outside a small group of academics) the critic may not notice how far scientific practice strays from the textbook model of science.14

Judgment Errors

The acts of perception and cognition, which seem to be immediate and self-evident, involve absorbing huge amounts of meaningless sensory information and mentally constructing a stable and coherent model of the world. Mismatches between the world as it really is and our mental “virtual” world lead to persistent, predictable errors in judgment. These judgment errors have directly affected the scientific controversy about psi.

When a panel of expert clinicians, say psychologists, physicians, or psychiatrists, are asked to provide their best opinions about a group of patients, they are usually confident that their assessments will be accurate. After all, highly regarded clinicians have years of experience making complex judgments. They believe that their experiences in judging thousands of earlier cases have honed their intuitive abilities into a state of rarefied precision that no simple, automated procedure could ever match. They’re often wrong.

Psychologist Dale Griffin of Stanford University reviewed the research on how we make intuitive judgments for the same National Research Council report that reviewed the evidence on psi.15 Griffin’s job was to remind the committee that when we make expert judgments on complex issues, it is important to use objective methods (like meta-analysis) to assess the evidence rather than to rely on personal intuitions. It’s too bad that the committee did not pay close attention to Griffin’s advice.

Starting in the 1950s, researchers began to study how expert intuition compared with predictions based on simpleminded mathematical rules. In such studies, a clinical panel was presented with personal information such as personality scores and tallies on various other tests, then asked to predict the likely outcomes for each person. The prediction might be for a medical assessment, or suitability for a job, or any number of other applications. The judges’ predictions were compared to a simple combination of scores from the various tests, and both predictions were compared with the actual outcomes. To the dismay of the experts, not only were the mathematical predictions far superior to the experts’ intuitions, but many of these studies showed that the amount of professional training and experience of the judges was not even vaguely related to their accuracy! To add insult to injury, the mathematical models were not highly sophisticated. In most cases, they were formed by simply adding up values from various test scores.16

A flurry of studies in the 1950s confirmed that simple mathematical predictions were almost always better than expert clinical intuition for diagnosing medical symptoms such as brain damage, categorizing psychiatric patients, and predicting success in college. Clinical experts were not amused.
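
To make the “simpleminded” rule concrete, here is a minimal sketch, not from Radin’s text and using invented scores purely for illustration, of the kind of unit-weighted additive model those studies used: each person’s test scores are standardized and simply summed, with no expert weighting at all.

```python
# Minimal sketch of a unit-weighted additive prediction rule.
# The candidate names and scores below are hypothetical examples.
from statistics import mean, stdev

def unit_weighted_prediction(candidates):
    """candidates: dict mapping name -> list of raw test scores (equal length).
    Returns dict mapping name -> sum of standardized (z) scores."""
    names = list(candidates)
    n_tests = len(next(iter(candidates.values())))
    totals = {name: 0.0 for name in names}
    for t in range(n_tests):
        column = [candidates[name][t] for name in names]
        mu, sd = mean(column), stdev(column)
        for name in names:
            totals[name] += (candidates[name][t] - mu) / sd
    return totals

scores = {"A": [52, 61, 70], "B": [48, 75, 66], "C": [60, 58, 72]}
print(unit_weighted_prediction(scores))  # higher total = more favorable prediction
```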

Today, when we evaluate complex evidence provided by a body of experimental data, we use the successor to those early mathematical models: the quantitative meta-analysis. So the National Research Council experts who relied on their personal opinions to evaluate the evidence for parapsychology, however intuitively appealing their opinions may have felt, would be as perplexed as the clinical experts of the 1950s to discover that their subjective opinions were just plain wrong.
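
As an illustration of what a quantitative meta-analysis does with a body of studies, the sketch below shows the core fixed-effect calculation: each study’s effect size is weighted by the inverse of its sampling variance, and the weighted average yields a pooled estimate with its own standard error. The numbers are made up for illustration and are not real psi data or the National Research Council’s figures.

```python
# Minimal sketch of a fixed-effect meta-analysis (illustrative numbers only).
import math

def fixed_effect_meta(effects, variances):
    """effects: per-study effect sizes; variances: their sampling variances.
    Returns (pooled effect, standard error, z statistic)."""
    weights = [1.0 / v for v in variances]          # precise studies count for more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se, pooled / se

effects = [0.12, 0.08, 0.15, 0.05]
variances = [0.004, 0.010, 0.006, 0.003]
print(fixed_effect_meta(effects, variances))
```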

What Do We Pay Attention To?

How could experts be so wrong? One reason is that expectation biases are self-generating. We cannot pay attention to everything equally, so instead we rely on past experience and vague mental “heuristics,” or guidelines, that worked fairly well on similar problems. Unfortunately, relying on subjective impressions and mental guidelines creates a cycle in which our past experience begins to divert us from paying attention to new things that might be even more predictive. After a while, since we no longer pay attention to anything other than what we have already decided is important, we tend to keep confirming what we already knew. This model-or theory-driven approach is called the confirmation bias.

The problem with the confirmation bias is that we end up learning only one or two ways to solve a problem, and then we keep reapplying that solution to all other problems, whether it is appropriate or not. This is especially compounded for highly experienced people, because past successes have made their theories so strong that they tend to overlook easier, simpler, more accurate, and more efficient ways of solving the problem. This is one reason that younger scientists are usually responsible for the giant, earth-shaking discoveries—they haven’t learned their craft so well that they have become blind to new possibilities. Younger scientists are invariably more open to psi than older scientists.

One well-known consequence of being driven by theory is the “self-fulfilling prophecy”—the way our private theories cause others to act toward us just as our theories predict. For instance, if our theory says that people are basically kind and loving, and we expect that people will act this way, then sure enough, they will usually respond in kind, loving ways, reinforcing our original expectation. In contrast, if we assume that people are basically nasty and paranoid, they will quickly respond in ways that reinforce this negative expectation. Many people know about the power of self-fulfilling prophecy through Norman Vincent Peale’s famous book, The Power of Positive Thinking.17

An experiment demonstrating the self-fulfilling prophecy was described by Harvard psychologist Robert Rosenthal in a classic book entitled Pygmalion in the Classroom.18 Teachers were led to believe that some students were high achievers and others were not. In reality, the students had been assigned at random to the two categories. The teachers’ expectations about high achievers led them to treat the “high achievers” differently than the other students, and subsequent achievement tests confirmed that the self-fulfilling prophecy indeed led to higher scores for the randomly selected “high achievers.”

Such studies made it absolutely clear that when experimenters know how participants “should” behave, it is impossible not to send out unconscious signals. This is why scientists use the double-blind experimental design, so that their personal expectations do not contaminate the research results. And this is why we cannot fully trust fascinating psychic stories reported by groups that expect such things to occur, unless they also demonstrate that they are aware of, know how to, and did control for expectation biases.
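
One way to see what “double-blind” means in practice is the generic sketch below; it is not a description of any particular psi protocol. Conditions are assigned at random and hidden behind opaque codes, so neither the experimenter scoring the sessions nor the participant knows which condition a session belongs to until the key is opened at the end.

```python
# Minimal sketch of blinded, balanced condition assignment (hypothetical setup).
import random
import secrets

def blind_assignments(session_ids, conditions=("target", "control")):
    """Return (codebook, sealed_key). The codebook gives each session an opaque
    code; the sealed key maps codes to conditions and is opened only after
    all responses have been scored."""
    # Balanced allocation of conditions across sessions, shuffled at random.
    pool = [conditions[i % len(conditions)] for i in range(len(session_ids))]
    random.shuffle(pool)
    codebook, sealed_key = {}, {}
    for sid, cond in zip(session_ids, pool):
        code = secrets.token_hex(4)        # label reveals nothing about the condition
        codebook[sid] = code
        sealed_key[code] = cond
    return codebook, sealed_key

codes, key = blind_assignments(["s1", "s2", "s3", "s4"])
print(codes)  # experimenters and participants see only these codes during the study
```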

An important consequence of the confirmation bias and self-fulfilling prophecy is that the more we think we already know the answer, the more difficult it is for us to judge new evidence fairly. This is precisely why scientific committees charged with evaluating the evidence in controversial fields such as psi must be composed of scientists who have no strong prior opinions about the topic. It is too bad that the National Research Council committee did not heed its own advice.

Because of the confirmation bias, skeptics who review a body of psi experiments are likely to select for review only the few studies that confirm their prior expectations. They will assume that all the other studies they could have reviewed would have had the same set of real or imagined problems. And they end up confirming their prior position. For example, one of skeptical psychologist Susan Blackmore’s favorite arguments against parapsychology is based upon a single occasion when she thought she had reason to suspect one set of experiments. For years now, she has used that single experience to justify her doubt about all other psi experiments.19

THE CONSCIOUS UNIVERSE

The Scientific Truth of Psychic Phenomena

Dean I. Radin, Ph.D.

