October 17, 2009

Some deliberations on the desirability of rationality

Following up on the posts What Intelligence Tests Miss: IQ and rationality are largely independent and A Closer Look at Rationality, here are some thoughts and questions about what the view presented in those posts might imply. Let me start by saying that I find the basic ideas presented in Keith Stanovich's book convincing, namely that: 1) intelligence as measured by IQ tests and rationality are largely independent, which explains why intelligent people may behave and think irrationally, 2) IQ tests don't measure rationality, and the contrast between the strong focus on IQ testing and the very limited attention to measuring and teaching rational thinking is a bad thing, 3) rational thinking could be taught more, and this would lead to social benefits. Here are some additional thoughts and questions on the desirability of raising rationality.

Perfect rationality is out of the question. That this is so can be understood from an evolutionary perspective. As Stanovich explains in his book, evolution does not lead to perfect rationality because natural selection does not specifically favor maximizing truth or utility. Instead it favors genetic fitness in a local environment. This means developing rationality is a matter of optimization instead of maximization. Spending extreme resources on building rationality does not guarantee an evolutionary advantage, because those resources might also have been spent on other useful things. As Richard Dawkins says in his latest book: "Perfection in one department must be bought in the form of a sacrifice in another department".

That maximal rationality is undesirable and impossible also follows from Stanovich's tripartite model of the brain, which consists of the autonomous mind, the algorithmic mind and the reflective mind (further explanation here). It is true that the autonomous mind works with rough heuristics which operate in a quick and dirty way and which may frequently miss the mark. An override by the deliberate part of the mind (which consists of the algorithmic mind plus the reflective mind) can help to correct the inaccuracy of the autonomous mind and make judgments and decisions more rational. But because deliberate thinking demands so much attention, it would be impossible to let deliberate thinking make all judgments and decisions. So much of everything we do and think has been 'delegated' to the autonomous mind that this would be unthinkable. Some division of labor between the autonomous mind and the deliberate mind is efficient. The question is how to divide it most effectively. How often and when should the deliberate mind override the autonomous mind? How can we recognize situations which ask for such overrides? When must we demand rationality from ourselves and from others?

Another perspective on the question of how much rationality is desirable follows from looking at its advantages and disadvantages. It seems logical that increasing one's rationality is usually beneficial, both for the individual and for society. After all, increasing instrumental rationality means that one becomes better at goal-directed thinking and acting. And increasing epistemic rationality means that one's maps of the world become more realistic; in other words, one's beliefs about reality correspond more closely to the actual structure of reality. But there may be some disadvantages, too. I am not talking about the stereotype of Mr. Spock, the assumption that there is a trade-off between rationality and social or emotional competence. I would predict that rationality and social or emotional competence are largely independent (in the same way that rationality and intelligence are largely independent). I am talking about the possibility that increasing your rationality may be aversive to others and might lead to some extra social barriers, like social rejection. History shows many examples of people who are now considered to have been ahead of their time in terms of rationality but who were punished by their contemporaries. People challenging widely held beliefs (never mind whether those beliefs are true or not) can be seen as a threat to power positions or to the stability of institutions, or can be viewed as disloyal, crazy or arrogant. There are many examples of people who have been ridiculed, isolated, banned, imprisoned, sentenced to death or murdered because of ideas which later turned out to be true. The paradox seems to be: it requires rationality to appreciate rationality.

This leads me to the question of how to fight contaminated mindware. Contaminated mindware refers to a belief system which is not true and is potentially harmful to the person who holds it and to others, but which can still spread quickly through a population due to some of its characteristics (which are explained here). The question is whether a head-on attack on popular contaminated mindware will lead to its demise or runs the risk of making it even more popular. A head-on attack might generate further publicity for the contaminated mindware, thus exposing more people to its attractiveness. And it may lead to more attacks on its opponents (because contaminated mindware often contains an instruction to attack opponents and non-believers). Or might a different approach work better? For instance, an approach of teaching people to recognize contaminated mindware more easily and protect themselves better against it?

As you see, these are just some open-ended explorations. Further suggestions are welcome.

8 comments:

  1. Some general thoughts on intelligence and reasoning ...

    It seems to me that speaking of rationality as if it were a property or trait we can quantify is problematic in general. We have various different models for how the mind works, and reason plays different roles in each.

    The concept in practice centers around optimizing the means for attaining goals. It is rational to do what you perceive best accomplishes your aims, and irrational to do otherwise. So for example, decision theory, game theory, and economics (traditional and behavioral) talk a lot about what "rational actors" who act in their own best interest should do.

    But these fields also have different views of what a rational actor is because we also have very deep historical associations of rationality with *explanations* for things rather than optimal means for achieving goals. The rationality-like terms such as "rationalization," "reasonable," and so on demonstrate this association. There is a long historical link between _thinking_ and _explanation_ when we talk about "reasoning," but we find that they are really not the same thing.

    So one of the reasons for confusion here is that analytic intelligence (IQ) has some relationship with being capable of crafting good explanations, which makes people seem "rational" to themselves and others, but this does not necessarily mean they are being "rational" in the sense of choosing the perceived optimal means to achieve their stated ends.

    I suspect that objective reasoning isn't a natural ability, but rather a collection of tools with a historical context that draws on natural abilities. You can identify specific ways of thinking that are more or less associated with objective reasoning and are located in specific times and places in cultural and intellectual history. Different ways of thinking are associated with different kinds of explanations in turn. This notion is, I think, a premise of the very concept of being able to have an intellectual history of something like science or math.

    This distinction between the tools of explanation and reasoning in general is to me one reason why we can have various different rational explanations for the same thing.

    The _best explanation_ in a given situation, in the sense we seek in scientific reasoning, is not a matter of being "more rational," it is a matter of identifying the goals and the tools for finding the best means to achieve them. Explanations can be rational (for a set of aims) without being very useful (for another set).


    So rationality, broadly viewed, is in my view a collection of tools that build on various human abilities and emphasize two different things: (1) finding the best means to achieve identified goals, and (2) finding explanations that are assumed to be valuable.

    kind regards,

    Todd

  2. re: Dr. Spock ... Was the fictional half-Vulcan given an honorary PhD, or were you talking about the baby doctor from the mid 20th century?

  3. Hi Todd, thanks for both your comments! I have just shown how little I know of Star Trek (or maybe how irrational I can be?). I meant Mr. Spock, not the Dr. :)
    I have changed it.

  4. Hi Todd,
    Here are some thoughts triggered by your comment.

    I find the distinction between instrumental and epistemic rationality useful (more here http://bit.ly/3oTKxX).

    At the risk of simplifying too much, instrumental rationality seems to be about doing what works, and epistemic rationality is concerned with truth and refers to seeing reality for what it is.

    It seems to be a pitfall to overlook either of these two rationalities.

    Focusing only on what is true but forgetting to do what works may lead you to neglect things that help you survive and remain connected to other people.

    Focusing only on doing what works but neglecting the question of what is true may lead to you moving efficiently through a web of falsity that distances you more and more from reality.

    Btw, I share your reservation about the measurability of rationality. Providing people with reasoning tools through education seems to me to be a more fruitful way forward than trying to create a new metric like RQ (rationality quotient).

    Thanks for your comment
    Kind regards, Coert

  5. Hi Coert. Yes, we agree. Some more thoughts...

    There are lots of nuances to this, but I think in practical terms it just comes down to the practical and the true.

    Instrumental rationality as you've defined it seems to me to be about what is useful. Epistemic rationality is, I think, about knowing that what we know is true, or at least broadly consistent (e.g. a "reflective equilibrium").

    We obviously need to know what is true because clear thinking depends on having good information to operate on. That's the undeniable value of epistemic rationality. I think *most* of the time what is true is also what is useful, and the distinction is minor. We've learned to value truth for its own sake, to the degree that I think it even makes sense to speak of epistemic virtue.

    But then at the _extremes_ the two ideas do diverge, and that's when we make a big deal out of them.

    Useful fictions have undeniable value in thinking (especially bootstrapping new thinking patterns). When we speak of imagination, we are really speaking of a species of useful fiction. When we plan something that does not yet exist, we have a useful fiction. When we try to change things for the better, and help others envision new possibilities, we are again invoking useful fictions. The problem is of course that we are so good at fiction that it easily goes beyond the useful fiction into blind faith and attachment to ethereal abstracts that lead us into mazes of instrumental rationality that is divorced from reality.

  6. Hi Coert,

    here are some links that might be relevant to the discussion about mindware:

    Antivirus for the mind
    and, of course, the list of:
    Virus Definitions

    My belief is that awareness of such malware might improve people's chances of getting rid of it. It is like with a computer: if the computer becomes very slow and the user is unaware of the existence of viruses, he or she might not know what to do; but if they are aware, they can install an antivirus and get rid of the infection.

  7. Hi Peter, thanks. I think the comparison between contaminated mindware and a computer virus is valid. I will take a look at your links.

