Interview with Keith Stanovich

By Coert Visser

Dr. Keith Stanovich, Professor of Human Development and Applied Psychology at the University of Toronto, is a leading expert on the psychology of reading and on rationality. His latest book, What Intelligence Tests Miss: The Psychology of Rational Thought, shows that IQ tests are very incomplete measures of cognitive functioning: they fail to assess rational thinking styles and skills that are nevertheless crucial to real-world behavior. In this interview, Keith Stanovich explains the difference between IQ and rationality, why rationality is so important, and how rationality can be enhanced.

In your book, you say that IQ tests are incomplete measures of cognitive functioning. Could you explain that?

I start out my book by noting the irony that in 2002, cognitive scientist Daniel Kahneman of Princeton University won the Nobel Prize in Economics for work on how humans make choices and assess probabilities—in short, for work on human rationality.  Being rational means adopting appropriate goals, taking the appropriate action given one’s goals and beliefs, and holding beliefs that are commensurate with available evidence—it means achieving one’s life goals using the best means possible.  To violate the thinking rules examined by Kahneman and Tversky thus has the practical consequence that we are less satisfied with our lives than we might be.  Research conducted in my own laboratory has indicated that there are systematic individual differences in the judgment and decision making skills studied by Kahneman and Tversky.

It is a profound historical irony of the behavioral sciences that the Nobel Prize was awarded for studies of cognitive characteristics that are entirely missing from the best-known mental assessment device in the behavioral sciences—the intelligence test, and its many proxies, such as the SAT.  It is ironic because most laypeople are prone to think that IQ tests are tests of, to put it colloquially, good thinking.  Scientists and laypeople alike would tend to agree that “good thinking” encompasses good judgment and decision making—the type of thinking that helps us achieve our goals.  In fact, the type of “good thinking” that Kahneman and Tversky studied was deemed so important that research on it was awarded the Nobel Prize.  Yet assessments of such good thinking—rational thinking—are nowhere to be found on IQ tests.  Intelligence tests measure important things, but not these—they do not assess the extent of rational thought.  This might not be such an omission if intelligence were a strong predictor of rational thinking.  However, my research group has found just the opposite: intelligence is a mild predictor at best, and some rational thinking skills are totally dissociated from intelligence.

You write about three types of thinking processes, the autonomous, the algorithmic and the reflective mind. Could you briefly explain these and explain how they are related to intelligence and rationality?

In 1996, philosopher Daniel Dennett wrote a book about how aspects of the human mind were like the minds of other animals and how other aspects were not. He titled the book Kinds of Minds to suggest that within the brain of humans are control systems of very different types—different kinds of minds. In the spirit of Dennett’s book, I termed the part of the mind that carries out Type 1 processing (the fast, automatic processing that runs without conscious attention) the autonomous mind.  The difference between the algorithmic mind and the reflective mind is captured in another well-established distinction in the measurement of individual differences—the distinction between cognitive ability and thinking dispositions.  The algorithmic mind is indexed by measures of computational power, like fluid g in psychometric theory.  The reflective mind is indexed by individual differences on thinking disposition measures.

The term mindware was coined by psychologist David Perkins to refer to the rules, knowledge, procedures, and strategies that a person can retrieve from memory in order to aid decision making and problem solving. Perkins uses the term to stress the analogy to software in the brain/computer analogy.  Each of the levels in the tripartite model of mind has to access knowledge to carry out its operations.  The reflective mind not only accesses general knowledge structures but, importantly, accesses the person’s opinions, beliefs, and reflectively acquired goal structure.  The algorithmic mind accesses micro-strategies for cognitive operations and production system rules for sequencing behaviors and thoughts. Finally, the autonomous mind accesses not only evolutionarily-compiled encapsulated knowledge bases, but also retrieves information that has become tightly compiled and available to the autonomous mind due to overlearning and practice.
Rationality requires three different classes of mental characteristic. First, algorithmic-level cognitive capacity is needed in order that autonomous-system override and simulation activities can be sustained.  Second, the reflective mind must be characterized by the tendency to initiate the override of suboptimal responses generated by the autonomous mind and to initiate simulation activities that will result in a better response.  Finally, the mindware that allows the computation of rational responses needs to be available and accessible during simulation activities. Intelligence tests assess only the first of these three characteristics that determine rational thought and action.  As measures of rational thinking, they are radically incomplete.
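As an illustration of how these three requirements interact, here is a toy Python sketch (a sketch for illustration only, not a formalism from the book), using the classic bat-and-ball problem: a bat and a ball together cost 110 cents, the bat costs 100 cents more than the ball, and the autonomous mind typically blurts out "10 cents" when the correct answer is 5.

```python
# Toy sketch of the three requirements for rational thought in the
# tripartite model, run on the bat-and-ball problem (amounts in cents).

def autonomous_answer(total, difference):
    # Type 1 heuristic: subtract the two salient numbers -> 10 (wrong).
    return total - difference

def algebra_mindware(total, difference):
    # Learned mindware: solve ball + (ball + difference) = total -> 5.
    return (total - difference) // 2

def respond(total, difference, initiates_override, has_capacity, has_mindware):
    default = autonomous_answer(total, difference)
    if not initiates_override:    # requirement 2: the reflective mind must
        return default            #   initiate the override
    if not has_capacity:          # requirement 1: algorithmic capacity must
        return default            #   sustain override and simulation
    if not has_mindware:          # requirement 3: suitable mindware must be
        return default            #   available for the simulation to run
    return algebra_mindware(total, difference)

# An IQ test indexes only has_capacity; the other two go unmeasured.
print(respond(110, 100, initiates_override=False, has_capacity=True, has_mindware=True))  # 10
print(respond(110, 100, initiates_override=True, has_capacity=True, has_mindware=True))   # 5
```

Failure at any one of the three checks leaves the (incorrect) Type 1 default in place, which is why strength on the first requirement alone is no guarantee of a rational response.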

That society, educators, psychologists, and personnel managers put so much emphasis on intelligence seems strange and unjustified given that intelligence tests cover only one of these three important mental processes. Could you say something about how individuals, organizations and, perhaps, society as a whole, might benefit from focusing more on raising rational thinking skills?

The lavish attention devoted to intelligence, raising it, praising it, worrying when it is low, etc., seems wasteful in light of the fact that we choose to virtually ignore another set of mental skills with just as much social consequence—rational thinking mindware and procedures.  Popular books tell parents how to raise more intelligent children, educational psychology textbooks discuss the raising of students’ intelligence, and we feel reassured when hearing that a particular disability does not impair intelligence.  There is no corresponding concern on the part of parents that their children grow into rational beings, no corresponding concern on the part of schools that their students reason judiciously, and no corresponding recognition that intelligence is useless to a child unable to adapt to the world.

I simply do not think that society has weighed the consequences of its failure to focus on irrationality as a real social problem.  These skills and dispositions profoundly affect the world in which we live.  Because of inadequately developed rational thinking abilities—because of the processing biases and mindware problems discussed in my book—physicians choose less effective medical treatments; people fail to accurately assess risks in their environment; information is misused in legal proceedings; millions of dollars are spent on unneeded projects by government and private industry; parents fail to vaccinate their children; unnecessary surgery is performed; animals are hunted to extinction; billions of dollars are wasted on quack medical remedies; and costly financial misjudgments are made.  Distorted processes of belief formation are also implicated in various forms of ethnocentric, racist, sexist, and homophobic hatred.

It is thus clear that widespread societal effects result from inadequately developed rational thinking dispositions and knowledge.  In the modern world, the impact of localized irrational thoughts and decisions can be propagated and magnified through globalized information technologies, thus affecting large numbers of people. That is, you may be affected by the irrational thinking of others even if you take no irrational actions yourself.  This is why, for example, the spread of pseudoscientific beliefs is everyone’s concern.  Police departments hire psychics to help with investigations even though research has shown that their use is not efficacious.  Jurors have been caught making their decisions based on astrology.  Major banks and several Fortune 500 companies employ graphologists for personnel decisions even though voluminous evidence indicates that graphology is useless for this purpose.

Unfortunately, these examples are not rare. We are all affected in numerous ways when such contaminated mindware permeates society—even if we avoid this contaminated mindware ourselves.   Pseudosciences such as astrology are now large industries, involving newspaper columns, radio shows, book publishing, the Internet, magazine articles, and other means of dissemination.  The House of Representatives Select Committee on Aging has estimated that the amount wasted on medical quackery nationally reaches into the billions.  Physicians are increasingly concerned about the spread of medical quackery on the Internet and its real health costs.

It seems that high rationality can sometimes irritate people. For instance, you can sometimes hear people say things like: "don't be so rational!" Do you think there can be such a thing as being too rational?

Under a proper definition of rationality, one consistent with modern cognitive science, no.  It certainly is possible for a person to be “too logical,” but being logical is not synonymous with being rational.  Psychologists study rationality because it is one of the most important human values.  It is important for a person’s happiness and well-being that they think and act rationally.  The high status accorded rationality in my writings may seem at odds with other characterizations that deem rationality either trivial (little more than the ability to solve textbook-type logic problems) or in fact antithetical to human fulfillment (an impediment to an enjoyable emotional life, for instance). These ideas about rationality derive from a restricted and mistaken view of rational thought—one not in accord with the study of rationality in modern cognitive science.

Dictionary definitions of rationality tend to be rather lame and unspecific (“the state or quality of being in accord with reason”), and some critics who wish to downplay the importance of rationality have promulgated a caricature of rationality that involves restricting its definition to the ability to do the syllogistic reasoning problems that are encountered in Philosophy 101.  The meaning of rationality in modern cognitive science is, in contrast, much more robust and important.  Cognitive scientists recognize two types of rationality:  instrumental and epistemic.  The simplest definition of instrumental rationality, the one that emphasizes most that it is grounded in the practical world, is: Behaving in the world so that you get exactly what you most want, given the resources (physical and mental) available to you.  The other aspect of rationality studied by cognitive scientists is epistemic rationality. This aspect of rationality concerns how well beliefs map onto the actual structure of the world.  The two types of rationality are related. In order to take actions that fulfill our goals, we need to base those actions on beliefs that are properly calibrated to the world.

Although many people feel (mistakenly or not) that they could do without the ability to solve textbook logic problems (which is why the caricatured view of rationality works to undercut its status), virtually no person wishes to eschew epistemic rationality and instrumental rationality, properly defined. Virtually all people want their beliefs to be in some correspondence with reality, and they also want to act to maximize the achievement of their goals.  Psychologist Ken Manktelow, in his book Psychology of Reasoning, has emphasized the practicality of both types of rationality by noting that they concern two critical things: What is true and what to do. Epistemic rationality is about what is true and instrumental rationality is about what to do.

Nothing could be more practical or useful for a person’s life than the thinking processes that help them find out what is true and what is best to do. This stands in marked contrast to some restricted views of what rationality is (for example, the rationality=logic view that I mentioned above).  Being rational (in the sense studied by cognitive scientists) is NOT just being logical.  Instead, logic (and all other cognitive tools) must prove its worth.  It must show that it helps us get at what is true or helps us to figure out what it is best to do.  My philosophy echoes that of Jonathan Baron, who, in his book Thinking and Deciding (4th edition), argues that “the best kind of thinking, which we shall call rational thinking, is whatever kind of thinking best helps people achieve their goals.  If it should turn out that following the rules of formal logic leads to eternal happiness, then it is rational thinking to follow the laws of logic, assuming that we all want eternal happiness.  If it should turn out, on the other hand, that carefully violating the laws of logic at every turn leads to eternal happiness, then it is these violations that we shall call rational” (p. 61).

A similar admonition applies when we think about the relation between emotion and rationality.  In folk psychology, emotion is seen as antithetical to rationality.  The absence of emotion is seen as purifying thinking into purely rational form.  This idea is not consistent with the definition of rationality that I (and most other cognitive scientists) adopt.  Instrumental rationality is behavior consistent with maximizing goal satisfaction, not a particular psychological process.  It is perfectly possible for the emotions to facilitate instrumental rationality as well as to impede it.  In fact, conceptions of emotions in cognitive science stress the adaptive regulatory powers of the emotions.  Emotions often get us “in the right ballpark” of the correct response.  If more accuracy than that is required, then a more precise type of analytic cognition will be required.  Of course, we can rely too much on the emotions.  We can base responses on a “ballpark” solution in situations that really require a more precise type of analytic thought.  More often than not, however, processes of emotional regulation facilitate rational thought and action.

Writer Malcolm Gladwell, in his bestselling book Blink, adopts the folk psychological view of the relation between emotion and rationality that is at odds with the way those concepts are discussed in cognitive science.  Gladwell discusses the famous cases, described by cognitive neuroscientist Antonio Damasio, in which damage to the ventromedial prefrontal cortex caused nonfunctional behavior without impairing intelligence.  Gladwell argues that “people with damage to their ventromedial area are perfectly rational.  They can be highly intelligent and functional, but they lack judgment” (2005, p. 59).  But this is not the right way to describe these cases.  In my view, someone who lacks judgment cannot be rational.

In the book, you explain that a lack of rationality is associated with three things: 1) an overreliance on the autonomous mind, relying on unconscious heuristics where deliberate thinking is called for; 2) a mindware gap, a lack of rational tools, procedures, knowledge, and strategies; and 3) infection with contaminated mindware, that is, beliefs, rules, strategies, etc., that are not grounded in evidence but are potentially harmful and yet hard to get rid of, like a computer virus. Now, I can imagine that bridging the mindware gap can be accomplished largely by education. The other two seem a bit harder to me. Could you share some ideas about what might help to prevent an overreliance on the autonomous mind and about how to fight contaminated mindware?

You are correct that irrationality caused by mindware gaps is the most easily remediable, as it is due entirely to missing strategies and declarative knowledge that can be taught (your category #2 above).  But keep in mind that category #1 (overriding the tendencies of the autonomous mind) is often closely linked, because override is most often carried out with learned mindware, and sometimes override fails because of inadequately instantiated mindware.  In such a case, inadequately learned mindware should really be considered the source of the problem.  The line between the two categories is continuous: as a rule becomes less and less well instantiated, at some point it is so poorly compiled that it is not a candidate to override the Type 1 response, and the processing error becomes a mindware gap.

Other categories of cognitive failure are harder to classify in terms of whether they are more dispositional (category #1) or knowledge-like (category #2).  For example, disjunctive reasoning is the tendency to consider all possible states of the world when deciding among options or when choosing a problem solution in a reasoning task.  It is a rational thinking strategy with a high degree of generality.  People make many suboptimal decisions because of the failure to flesh out all the possible options in a situation, yet the disjunctive mental tendency is not computationally expensive.  This is consistent with the finding that there are not strong intelligence-related limitations on the ability to think disjunctively and with evidence indicating that disjunctive reasoning is a rational thinking strategy that can be taught.
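To make the strategy concrete, here is a minimal Python sketch of what "fleshing out all the possible options" amounts to, applied to a classic disjunctive reasoning item (the item and the code are illustrative additions, not taken from the interview): Jack is looking at Anne, and Anne is looking at George; Jack is married, George is not. Is a married person looking at an unmarried person? Many people answer "cannot be determined," but enumerating both possibilities for Anne shows that the answer is "yes."

```python
# Minimal sketch: disjunctive reasoning as exhaustive enumeration of the
# unknown states of the world, instead of halting at "cannot be determined".

looking_at = [("Jack", "Anne"), ("Anne", "George")]
known_married = {"Jack": True, "George": False}    # Anne's status is unknown

outcomes = set()
for anne_married in (True, False):                 # flesh out BOTH possibilities
    married = dict(known_married, Anne=anne_married)
    outcomes.add(any(married[looker] and not married[target]
                     for looker, target in looking_at))

# The answer is determined only if it comes out the same in every possible world.
if outcomes == {True}:
    print("Yes")                  # this branch fires: "yes" in both worlds
elif outcomes == {False}:
    print("No")
else:
    print("Cannot be determined")
```

The point of the exercise is that the enumeration is tiny, which matches the finding above: the strategy is not computationally expensive, it simply has to be triggered.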

The tendency to consider alternative hypotheses is, like disjunctive reasoning, strategic mindware of great generality.  It can also be implemented in very simple ways. Many studies have attempted to teach the technical skill of thinking about P(D|~H) [the probability of the observed data given the alternative hypothesis], that is, of thinking of the alternative hypothesis, by instructing people in a simple habit.  People are given extensive practice at saying to themselves the phrase “think of the opposite” in relevant situations.  This strategic mindware does not stress computational capacity and thus is probably easily learnable by many individuals. Several studies have shown that practice at the simple strategy of triggering the thought “think of the opposite” can help to prevent a host of the thinking errors studied in the heuristics and biases literature, including but not limited to anchoring biases, overconfidence effects, hindsight bias, confirmation bias, and self-serving biases.
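For reference, the role of P(D|~H) is easiest to see in Bayes' rule, shown below in a standard formulation (added here for illustration; it is not spelled out in the interview). The posterior belief in a hypothesis depends not only on how well the data fit H but also on how likely the same data would be under the alternative:

```latex
% Bayes' rule for updating belief in hypothesis H given data D.
% The term P(D | ~H) in the denominator is exactly what the
% "think of the opposite" habit prompts people to consider.
P(H \mid D) = \frac{P(D \mid H)\, P(H)}
                   {P(D \mid H)\, P(H) + P(D \mid \neg H)\, P(\neg H)}
```

If the data are just as likely under the alternative hypothesis, observing them should not shift belief at all; neglecting the P(D|~H) term is what makes confirming evidence feel more diagnostic than it really is.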

Various aspects of probabilistic thinking represent mindware of great generality and potency.  However, as any person who has ever taught a statistics course can attest (the present author included), some of these insights are counterintuitive and unnatural for people—particularly in their application.  There is nevertheless still some evidence that they are indeed teachable—albeit with somewhat more effort and difficulty than strategies such as disjunctive reasoning or considering alternative hypotheses.  Aspects of scientific thinking necessary to infer a causal relationship are also definitely teachable.  Other strategies of great generality may be easier to learn—particularly by those of lower intelligence.  For example, psychologist Peter Gollwitzer has discussed an action strategy of extremely wide generality—the use of implementation intentions.  An implementation intention is formed when the individual marks the cue-action sequence with the conscious, verbal declaration: “when X occurs, I will do Y.”  Finally, research has shown that an even more minimalist cognitive strategy of forming mental goals (whether or not they have implementation intentions) can be efficacious.  For example, people perform better in a task when they are told to form a mental goal (“set a specific, challenging goal for yourself”) for their performance than when they are given the generic motivational instruction (“do your best”).

We often make choices that reduce our happiness because we find it hard to predict what will make us happy.  For example, people often underestimate how quickly they will adapt to both fortunate and unfortunate events.  Our imaginations fail at projecting the future. Psychologist Dan Gilbert cites evidence indicating that a remediating strategy in such situations might be to use a surrogate—someone who is presently undergoing the event whose happiness (or unhappiness) you are trying to simulate.  For example, if you are wondering how you will react to “empty nest” syndrome, ask someone who has just had their last child leave for college rather than trying to imagine yourself in that situation.  If you want to know how you will feel if your team is knocked out in the first round of the tournament, ask someone whose team has just been knocked out rather than trying to imagine it yourself. People tend not to want to use this mechanism because they think that their own uniqueness makes their guesses from introspection more accurate than the actual experiences of the people undergoing the event.  People are simply skeptical about whether other people’s experiences apply to them.  This is a form of egocentrism akin to myside processing.  Gilbert captures the irony of people’s reluctance to adopt the surrogate strategy by telling his readers: “If you are like most people, then like most people, you don’t know you’re like most people” (2006, p. 229).

Much of the strategic mindware discussed so far represents learnable strategies in the domain of instrumental rationality (achieving one’s goals). Epistemic rationality (having beliefs well calibrated to the world) is often disrupted by contaminated mindware. Even here, however, there are teachable macro-strategies that can reduce the probability of acquiring mindware that is harmful to its host.  For example, the principle of falsifiability provides a wonderful inoculation against many kinds of nonfunctional beliefs.  It is a tool of immense generality.  It is taught in low-level methodology and philosophy of science courses, but it could be taught much more broadly than that.

Many pseudoscientific beliefs represent the presence of contaminated mindware.  The critical thinking skills that help individuals to recognize pseudoscientific belief systems can be taught in high-school courses.  Finally, I think that the language of memetic science itself is therapeutic—a learnable mental tool that can help us become more conscious of the possibility that we are hosting contaminated mindware.  One way the meme concept will aid cognitive self-improvement is that, by emphasizing the epidemiology of belief, it will indirectly suggest to many people (for whom this will be a new insight) the contingency of belief.  By providing a common term for all cultural units, memetic science provides a neutral context for evaluating whether any belief serves our interests as humans.  The very concept of the meme will suggest to more and more people that they need to engage in mindware examination.

I recently heard someone say: "I'm just a simple man doing a simple job. What's the harm in me not being so rational?" This made me wonder: is there anything known about what characteristics of a task, role, or context determine how critical rationality is? How can we know when rationality is critical and when it is less important, or even completely unimportant?

Your question relates to an issue I have written about in my book The Robot’s Rebellion.  The simple man with the simple job might be protected from his irrationality by living in a rational culture, in which he is, in effect, a cultural freeloader. Cultural diffusion that allows knowledge to be shared short-circuits the need for separate individual discovery. In fact, most of us are cultural freeloaders, adding nothing to the collective knowledge or rationality of humanity.  Instead, we benefit every day from the knowledge and rational strategies invented by others.

The development of probability theory, concepts of empiricism, mathematics, scientific inference, and logic throughout the centuries has provided humans with conceptual tools to aid in the formation and revision of belief and in their reasoning about action.  A college sophomore with introductory statistics under his or her belt could, if time-transported to the Europe of a couple of centuries ago, become rich “beyond the dreams of avarice” by frequenting the gaming tables or by becoming involved in insurance or lotteries.  The cultural evolution of rational standards is apt to occur markedly faster than human evolution.  In part, this cultural evolution creates the conditions whereby instrumental rationality separates from genetic optimization.  As we add to the tools of rational thought, we add to the software that the analytic system can run to achieve long-leash goals that optimize actions for the individual.  Learning a tool of rational thinking can quickly change behavior and reasoning in useful ways, as when a university student reads the editorial page with new reflectiveness after having just learned the rules of logic.  Evolutionary change is glacial by comparison.
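The sophomore's edge here is nothing more exotic than expected-value arithmetic. A made-up wager, purely for illustration (the numbers are assumptions, not from the text):

```latex
% Expected net value of a wager: pay 3 units for a 1-in-5 chance of winning 10.
E[X] = \tfrac{1}{5}\cdot 10 + \tfrac{4}{5}\cdot 0 - 3 = 2 - 3 = -1
% Negative expectation: the bet loses one unit per play on average, so the
% statistically trained bettor declines it (or offers to take the other side).
```

Unremarkable today, but this simple calculation was beyond most gamblers, insurers, and lottery designers of earlier eras, which is exactly the cultural freeloading point: the tool, once invented and taught, confers its advantage on anyone who learns it.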

Thus, in an astonishingly short time by evolutionary standards, humans can learn and disseminate—through education and other forms of cultural transmission—modes of thinking that can trump genetically optimized modules in our brains that have been driving our behavior for eons.  Because new discoveries by innovators can be conveyed linguistically, the general populace needs only the capability to understand the new cognitive tools—not to independently discover the new tools themselves.

Cultural increases in rationality itself might likewise be sustained through analogous mechanisms of cumulative ratcheting.  That is, cultural institutions might well arise that take advantage of the tools of rational thought, and these cultural institutions might enforce rules whereby people accrue the benefits of the tools of rationality without actually internalizing the rational tools.  In short, people just learn to imitate others in certain situations or “follow the rules” of rationality in order to accrue some societal benefits, while not actually becoming more rational themselves.

Cultural institutions themselves may achieve rationality at an organizational level without this entailing that the individual people within the organization are themselves actually running the tools of rational thought on their serial mental simulators.

Could you tell me about some of the questions that currently fascinate you? What are some of the research questions you would like to explore in the near future?

I am constantly asked about the possibility of a standardized rational thinking test.  I respond that there is no conceptual or empirical impediment to such an endeavor—just the will, money, and time.  I have begun, in ongoing writings, to sketch out a framework for the assessment of rational thought.

***

Further reading

  • Stanovich, K. E. (2009). What Intelligence Tests Miss: The Psychology of Rational Thought. New Haven, CT: Yale University Press.
  • Stanovich, K. E. (2009, Nov/Dec). The thinking that IQ tests miss. Scientific American Mind, 20(6), 34-39.
  • Stanovich, K. E. (2009). Distinguishing the reflective, algorithmic, and autonomous minds: Is it time for a tri-process theory? In J. Evans & K. Frankish (Eds.), In two minds: Dual processes and beyond (pp. 55-88). Oxford: Oxford University Press.
  • Stanovich, K. E. (2009). Rationality versus intelligence. Project Syndicate.
  • Stanovich, K. E., & West, R. F. (2008). On the relative independence of thinking biases and cognitive ability. Journal of Personality and Social Psychology, 94, 672-695.

Comments

Todd I. Stark said…
It seems odd to me to consider intelligence (henceforth primarily meaning "neural" intelligence) in terms of the psychology of rational thinking; I wouldn't have guessed they would have much to do with each other.

Intelligence, I would think, is more a way of thinking about the efficiency or effectiveness of cognition and problem solving, whereas rationality is a norm for consistency in reasoning.

Seems like Keith is making an important and valuable point about rational thinking, but also an obvious one about intelligence. If the idea is to show that intelligence is not important, then I think it misses the mark completely.

My intuitive expectation would be that measures of intelligence would not correlate, or would correlate negatively, with measures of rationality. One is presumably thinking ability, the other describes a specific way of ordering thinking to make it consistent. That's a little like equating the power of an engine with the ability to stay on the road. Maybe we are framing this differently?
Coert Visser said…
Hi Todd, thanks! I like this metaphor: "That's a little like equating the power of an engine with the ability to stay on the road."
Anonymous said…
That metaphor was previously used by David Perkins, another critic of the standard interpretation of IQ tests. I think it is a very important point, i.e., there is much more to being a "good thinker" than simply the cognitive capacities that are measured by the standard IQ tests.
Coert Visser said…
Hi Anonymous, Agree. Btw, I like Perkins' work