August 20, 2015

Review of Mindware: Tools for Smart Thinking by Richard Nisbett

Psychologist Richard Nisbett has written a new book called Mindware: Tools for Smart Thinking. I think it is essential reading for students of psychology. Here is my review.

As a psychology student in the 1980s, I first learned about the work of Richard Nisbett. Together with Lee Ross (who coined the term fundamental attribution error, which I will come back to later), he wrote the classic book Human Inference (1980) about how people use rules of thumb in social judgment and decision making, and about how we often make systematic mistakes in the way we judge events and people. Nisbett and Ross' work built on, and was closely related to, the work of Amos Tversky and Daniel Kahneman.

Nisbett was also known for his work with Timothy Wilson on how many mental processes are inaccessible to our conscious awareness. I came across Nisbett's work again when his book Intelligence and How to Get It (2009) was published. He agreed to an email interview with me about this fascinating book, but the interview was never published because it was broken off when we were halfway through the questions. So instead I wrote this post about the book.

His new book Mindware: Tools for Smart Thinking (2015) revisits many of the topics he has written about in the past. The main idea behind the book is that our personal and professional lives can be improved by learning about effective judgment and reasoning. At the beginning of the book he explains that we do not perceive the world as it is. Instead, we rely on schemas (cognitive frameworks, templates, or rule systems) to make sense of what we encounter. Two problems with these schemas (stereotypes, for example) are that they are often mistaken and that we are often unaware of them.

A prime example of schemas are heuristics: rules of thumb (often unconsciously applied) for solving problems. Nisbett discusses examples such as the effort heuristic, the price heuristic, the scarcity heuristic, the familiarity heuristic, the representativeness heuristic, and the availability heuristic. These heuristics are often helpful, but they are also rather crude strategies which in many cases lead to inaccurate judgments. I'll mention two of them specifically: (1) the representativeness heuristic: the more an event resembles the prototype of its category, the more likely it is judged to be; (2) the availability heuristic: the more easily examples of an event come to mind, the more frequent or plausible it seems.

One of the most important ways, if not the most important way, in which we systematically misjudge ourselves and others is that we underestimate the influence of situations and overestimate the influence of personal characteristics. This is called the fundamental attribution error. We underestimate both subtle factors and factors which are plain to see (such as social roles). By the way, we are somewhat less inclined to make the fundamental attribution error when judging our own behavior than when judging others' behavior. Also, Asians seem to be less vulnerable to this error of judgment than Westerners.

While much of our thinking is unconscious and inaccessible to us, we are very active in coming up with explanations for our behaviors and judgments. These explanations, as many experiments have demonstrated, are often wrong. We can completely miss factors that actually influence us, and we can be completely confident that something influenced us which actually didn't. We often do not know why we do and think what we do and think. That our unconscious mind uses heuristics which are often not very reliable does not mean that the unconscious mind is inferior to the conscious mind. In several ways it is superior, such as in detecting complex patterns and in processing information which cannot easily be described verbally.

Nisbett then devotes a few chapters to behavioral economics, discussing concepts like cost-benefit analysis, the sunk cost rule, opportunity costs, loss aversion, the endowment effect, and choice architecture. To explain a few: (1) the sunk cost rule is a rational but counterintuitive principle which says you should only take into account the future costs and benefits of your choices and not 'cry over spilt milk'; (2) the endowment effect is our tendency to overvalue things we possess; (3) choice architecture: the way choice options are presented strongly influences how people choose.

In the next part of the book he discusses the important topic of statistics. Two main functions of statistics are to describe phenomena accurately and to determine relations between phenomena accurately. We tend to make several types of errors in our statistical reasoning. One is that we tend to base our conclusions on too few observations. Nisbett points to the importance of the law of large numbers, which says that the more observations you make, the closer you get to the true value. Another error is that we do not take into account regression to the mean: the phenomenon that if a variable is extreme on its first measurement, it will tend to be closer to the average on its second measurement.
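Both points are easy to demonstrate with a small simulation. This is my own sketch, not an example from the book; the die-rolling setup and the test-score numbers are invented purely for illustration:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Law of large numbers: the mean of fair die rolls approaches
# the true mean (3.5) as the number of observations grows.
def sample_mean(n):
    return sum(random.randint(1, 6) for _ in range(n)) / n

small = sample_mean(10)       # may land far from 3.5
large = sample_mean(100_000)  # lands very close to 3.5

# Regression to the mean: each test score = stable "ability" + independent luck.
abilities = [random.gauss(100, 10) for _ in range(10_000)]
test1 = [a + random.gauss(0, 10) for a in abilities]
test2 = [a + random.gauss(0, 10) for a in abilities]

# Select the top 10% on test 1; their average on test 2 falls back
# toward the population mean of 100, with no intervention at all.
top = sorted(range(10_000), key=lambda i: test1[i], reverse=True)[:1_000]
mean_t1 = sum(test1[i] for i in top) / len(top)
mean_t2 = sum(test2[i] for i in top) / len(top)
```

The top scorers' average drops on the second test purely because part of their first score was luck, which is why apparent 'improvement' or 'decline' after an extreme measurement needs no causal explanation.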

Another problem is confirmation bias: we tend to look only for evidence which supports our hypothesis. Also, we notice and remember events better when they confirm our hypothesis. Confirmation bias contributes to the problem that we see correlations which are not there. Yet another problem is that we fail to see correlations which are actually there, in particular when we do not expect them. We tend to notice unexpected correlations only when they are rather strong and when the two events are close to each other in time. The previously mentioned representativeness heuristic often underlies which correlations we expect.

Nisbett then proceeds to compare the relative value of correlational techniques and experiments. Experiments are vastly superior. In an experiment only the variable of interest is varied, which makes it possible to draw conclusions about causality: any differences between experimental and control groups must be due to the variable which differed between those conditions. Correlational studies are a different matter. As many people know, correlation does not imply causality. The fact that variables A and B are associated does not mean that A causes B, because there may be other explanations for the correlation. For example, B may cause A. Or a third variable may cause both A and B.
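A quick simulation makes the third-variable case concrete. Again, this is my own sketch rather than an example from the book; the variable names and effect sizes are invented:

```python
import random

random.seed(1)
n = 20_000

# A hypothetical third variable C drives both A and B;
# A has no causal effect on B, and B has none on A.
C = [random.gauss(0, 1) for _ in range(n)]
A = [c + random.gauss(0, 1) for c in C]
B = [c + random.gauss(0, 1) for c in C]

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    sx = (sum((a - mx) ** 2 for a in x) / len(x)) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / len(y)) ** 0.5
    return cov / (sx * sy)

r = corr(A, B)  # close to 0.5, despite no causal link between A and B
```

An observer who only sees A and B would find a substantial correlation and might conclude that one causes the other, when in fact neither does.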

A specific correlational technique which is very popular in economics, psychology, and epidemiology is multiple regression analysis (MRA). In MRA you try to predict a variable of interest (the criterion) from a set of other variables (the predictors). The idea behind MRA is to control for all variables which may influence the criterion by successively pulling their correlations out of the mix, so as to get at the true causal relation between a predictor and the criterion. In practice, however, it is unlikely that we can identify every variable which may be influential, let alone measure each of them validly. As a result, MRA findings often differ from experimental findings: MRA often both detects effects which do not exist and misses effects which do. The book contains a few compelling examples of this.
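The omitted-variable problem behind this can be sketched in a few lines. This is my own illustration (using NumPy's least-squares routine; the variable names and effect sizes are invented), not an example from the book:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# An unmeasured variable U raises both the predictor X and the criterion Y;
# X itself has zero causal effect on Y.
U = rng.normal(size=n)
X = U + rng.normal(size=n)
Y = 2.0 * U + rng.normal(size=n)

# Regressing Y on X alone yields a sizable slope (about 1.0),
# wrongly suggesting that X matters.
design_naive = np.column_stack([np.ones(n), X])
slope_naive = np.linalg.lstsq(design_naive, Y, rcond=None)[0][1]

# Adding U as a predictor pulls the X slope back to about zero;
# but in real studies U may be unknown or hard to measure validly,
# which is exactly the weakness of MRA described above.
design_full = np.column_stack([np.ones(n), X, U])
slope_controlled = np.linalg.lstsq(design_full, Y, rcond=None)[0][1]
```

The regression only 'controls for' what the researcher thought to measure; every confounder left out of the design matrix silently distorts the remaining coefficients.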

As I mentioned, self-reports about mental processes are often highly unreliable. But survey results, too, must be approached with great caution. The problems with survey results go well beyond social desirability (the tendency to give answers that make you look good). As it turns out, what people report depends heavily on the way questions are phrased. One example of this is the reference group effect: when people are not asked to compare themselves to a specific reference group, they will compare themselves to whichever reference group is salient to them.

This effect, in combination with the so-called self-enhancement bias (also known as the Lake Wobegon effect: people in most cultures believe they are superior to most others in their group), can lead to some strange findings. For example, Italians describe themselves as more conscientious than the Japanese. However, research using behavioral measures (instead of self-report measures) paints a different picture. As Nisbett says: "the less conscientious a nation is as measured by behavioral indices, the more conscientious its citizens are as measured by self-report."

In the rest of the book Nisbett discusses topics like (1) logic and dialectical reasoning: he says that there are some systematic differences in the ways Easterners and Westerners approach situations and problems (and the former may be superior in many social situations), (2) reductionism and the principle of parsimony, and (3) the rise and fall of postmodernist views in science.

Review: The book gives a good and interesting overview of the topic of human judgment. For psychology students it is an excellent introduction to these important topics; for psychologists it is a good way to maintain and refresh their knowledge (for example, I found the reminder of the weaknesses of MRA useful). Disseminating this type of knowledge beyond psychologists is very important. It is because the general public is largely unaware of much of what is described in this book that many people are vulnerable to assertions which are not true. Getting more people to know about these concepts and findings can help them protect themselves from errors of judgment and from charlatans.
