October 16, 2009

A Closer Look at Rationality

As I wrote yesterday, Keith Stanovich explains in his new book What Intelligence Tests Miss: The Psychology of Rational Thought that IQ tests are incomplete measures of cognitive functioning. Although many laymen and psychologists seem to think IQ tests measure rationality, they actually don't. In fact, intelligence, as measured by IQ tests, correlates only weakly to moderately with rational thinking skills. According to Stanovich, this explains why it is not strange to see intelligent people behave irrationally and hold false and unsupported beliefs. Some real-world examples: intelligent people who fall prey to Ponzi-scheme swindlers like Bernie Madoff, a highly educated person who denies the evidence for evolution, a United States president who consults an astrologer, and so forth. Below, I will try to summarize how Stanovich explains rationality and the lack of it.

What is rationality? Cognitive scientists distinguish two basic forms: 1) INSTRUMENTAL RATIONALITY, behaving in such a way that you achieve what you want, and 2) EPISTEMIC RATIONALITY, taking care that your beliefs correspond with the actual structure of the world. Irrational thinking and behavior are associated with three things.

The first is an overreliance on the autonomous mind, which subconsciously and automatically uses all kinds of heuristics to come to conclusions and solve problems. The autonomous mind is fast and very valuable but also very imprecise; it is prone to all kinds of biases. Thinking deliberately, instead of letting the autonomous mind make judgments, costs much more time and energy, which is why it is tempting not to do so.

The second thing associated with irrationality is what is called a mindware gap. The term 'mindware' refers to the rules, knowledge, procedures, and strategies that a person has available for making judgments, making decisions, and solving problems. A lack of such knowledge hinders rationality.

The third thing associated with irrationality is something called contaminated mindware: beliefs, rules, strategies, etc. that are not grounded in evidence and that are not good for the person who holds them (the host), but which can still spread easily through a population. There are several reasons why they spread easily: 1) they are often packaged in an appealing narrative which promises some kind of benefit to the host, 2) they sometimes ride on the back of other, more valid, popular mindware by copying superficial characteristics from it, 3) they contain self-replication instructions ('send this mail on to 10 different people'), 4) they may have evaluation-disabling properties (for instance, by claiming that evidence is not relevant or possible, by making belief unsupported by evidence into a virtue, or by encouraging adherents to attack non-believers). You might think that intelligence would offer good protection against contaminated mindware, but this turns out to be wrong. When narratives are complex, highly intelligent people can even be especially attracted to them. Further, studies have demonstrated that intelligent people may be more capable of creating 'islands of false beliefs' or 'webs of falsity' by using their considerable computational power to rationalize their beliefs and to ward off the arguments of skeptics.

In a next post, I will try to reflect on what all of this may imply.


  1. Very interesting post.
    I am curious to see what implications you will draw from all the thought-provoking points you make.

    I wanted to comment about the first part, your very beginning of the post, where you point out that even very bright people fall prey to delusions or irrational behavior.

    I was wondering if another perspective could help in sorting out what rationality is: namely, that we have different modules that get triggered, or that we apply, in different situations.

    In that case, we would not have one but many "rationalities".

    To be clearer: in evolutionary psych we talk about modules, or in psychology about "motivational systems".
    So anger has a different meaning if it is activated against somebody outside the group (or perceived as such) than against somebody within the group: in the first case the goal is to physically eliminate the threat, in the second case, more often than not, to establish a hierarchy. Or it could be activated within the mating system, and so on...
    This does not only apply to emotion but to perception and thinking as well: e.g. some people might be anti-Semitic (when they reason within their "socialization" system, in the terms used by J.R. Harris in "No Two Alike"), and at the same time be friends with a Jew (the interpersonal system, which works within the group).
    I think this perspective, that we activate different systems in different situations, helps explain some very contradictory behaviors.

    Just my 2-cents, looking forward to reading your next post!!

  2. Hi Paolo,
    I don't know much about different rationalities; I have to think about that. Could be true. What I do know is that an evolutionary perspective on (ir)rational behavior is indeed useful. It explains why rationality will not be 'perfect' or maximal.

    The author, Keith Stanovich, writes about this, too. He explains that evolution does not lead to perfect human rationality. The reason is that, in contrast to maximization, natural selection works on a 'better than' principle.

    In other words: evolution is survival of the fitter, rather than survival of the fittest. Also, he says: "Evolution guarantees that humans are genetic fitness optimizers in their local environment, not that they are truth or utility maximizers as rationality requires."

    Beliefs need not always track the world with maximum accuracy in order for fitness to increase. Thus, evolution does not guarantee perfect epistemic rationality. In the book, Nassim Nicholas Taleb is quoted as saying: "For natural selection does not care about truth; it cares only about reproductive success."

  3. Hi Coert,

    And you will read some more along these lines in my upcoming paper in InterAction Journal! :)

    BTW, about beliefs "tracking the world" more or less accurately - I think it was Watzlawick who pointed out the distinction between "matching" and "fitting" reality. Our beliefs do not necessarily match reality, but they do need to fit it.
