Critical analysis of Sisk et al. (2018): two meta-analyses on mindsets


Some time ago, a paper was published describing two meta-analyses of the effects of mindsets and mindset interventions (Sisk et al., 2018). The authors conclude that the results in both analyses were weak and that education would do better to spend its time and money on things other than mindset interventions. They do add the nuance that their research suggests mindset interventions seem worthwhile for specific at-risk groups. However, there is quite a lot to criticize in the article: not only the findings themselves, but, more importantly, the authors' interpretation of those findings. Read below why even the problematic article by Sisk et al. actually shows that mindset interventions are indeed interesting and important.

Sisk et al. (2018)

The article by Sisk et al. looks compelling at first glance and may appear troubling if you are convinced of the importance of mindset and mindset interventions in young children and adolescents.

The authors performed two meta-analyzes. In the first meta-analysis (k = 273, N = 365,915) they examined the relationship between mindset and school performance. In the second meta-analysis (k = 43, N = 57,155) they examined the effectiveness of mindset interventions on school performance. In both studies they also considered possible moderating factors.

The authors conclude that the effects they found in both studies were weak, but that mindset interventions appeared to be useful for students with a low socioeconomic status and for students with a high risk of failure ("high risk students"). Are these conclusions correct?

Meta-analyses can appear impressive because they aggregate the results of many studies into a huge sample using statistical methods. The correlations and effects estimated from such mega-samples are then presumed to be much more reliable than the results of the individual, often small, original studies.
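To make concrete how such pooling works, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis in Python. All effect sizes and standard errors below are invented for illustration; they are not data from Sisk et al. (2018).

```python
# Minimal fixed-effect meta-analysis via inverse-variance pooling.
# The effect sizes (d) and standard errors are invented for illustration;
# they are NOT data from Sisk et al. (2018).
import math

studies = [(0.25, 0.10), (0.05, 0.04), (0.40, 0.20), (-0.02, 0.06)]

weights = [1 / se**2 for _, se in studies]              # precision weights
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(f"pooled d = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
```

Note how the precision weights let large studies dominate the pooled estimate, which is exactly why the choice of which studies to include matters so much.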


But it is important to take a critical look at how a meta-analysis was performed. In a poorly performed meta-analysis, the reported results can be highly misleading. The following questions are important when assessing a meta-analysis:
  1. Have the right studies been included in the meta-analysis? 
  2. Have the correct analyses been carried out? 
  3. Have the findings been interpreted correctly?

Problem 1: how reliable are the findings?

An important factor that determines how the results of a meta-analysis turn out is which studies are included. To decide which studies to include, clear inclusion criteria should be formulated. Nowadays it is common to register (pre-register) these inclusion criteria in advance. The point of pre-registering inclusion criteria is that you, as a researcher, cannot be suspected of having included or omitted studies in order to obtain the kind of result you want. Sisk et al. did not pre-register their inclusion criteria.

Moreover, Sisk et al. appear to have made several surprising decisions about which studies they did and did not include in their analyses. Equally surprising is that Sisk et al. do not mention two earlier meta-analyses at all: Burnette et al. (2013) and Lazowski & Hulleman (2016).

A second problematic aspect concerns how Sisk et al. assessed the effectiveness of mindset interventions. Mindset interventions do not have the same effect on school grades or test scores for all students. Many students already have a growth mindset, and many students already get high grades. For students who already have a growth mindset, an intervention has little effect on their mindset, but such an effect is neither expected, necessary, nor desired. Likewise, for students who already get high grades, a mindset intervention is not expected to have a strong effect on their grades: those grades are unlikely to get much higher, and they do not need to.


Lumping together the effects for the entire group of students is therefore uninformative and misleading. The focus should instead be on interaction effects. For example, what is the effect of mindset interventions on students with fixed mindsets and low grades?
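To illustrate, here is a minimal simulation sketch (all parameters invented) in which an intervention only helps students who start out with a fixed mindset. Averaging over everyone dilutes the effect, while an interaction term recovers it:

```python
# Sketch (invented parameters): the intervention only helps students who
# start out with a fixed mindset, so averaging over everyone is misleading.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
fixed = rng.integers(0, 2, n)     # 1 = fixed mindset at baseline
treated = rng.integers(0, 2, n)   # 1 = received the mindset intervention
# Assumed effect: +0.4 SD on grades, but only for fixed-mindset students.
grades = 0.4 * treated * fixed + rng.normal(0, 1, n)
df = pd.DataFrame({"grades": grades, "treated": treated, "fixed": fixed})

overall = smf.ols("grades ~ treated", data=df).fit().params["treated"]
interact = smf.ols("grades ~ treated * fixed", data=df).fit().params["treated:fixed"]
print(f"overall effect: {overall:.2f}")      # ~0.2: diluted, unimpressive
print(f"interaction effect: {interact:.2f}") # ~0.4: for the students who need it
```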

Problem 2: how correct are the interpretations of Sisk et al.?

Given that there are already problems surrounding the findings of Sisk et al. (Problem 1 described above), the authors' interpretations should be viewed with skepticism from the outset. But even disregarding this for a moment, there are significant additional problems with their interpretations.

The first problem concerns the first meta-analysis, which is based on correlational studies. Correlational studies do not allow conclusions about causality: an association between two variables does not prove that one causes the other. Moreover, a genuine causal relationship between two variables can be masked in correlational research.

In this article I describe an example from Richard Nisbett, who explains that many correlational studies find no correlation between class size and student performance, while experiments show that students in smaller classes do perform better. Experimental research is superior here because it makes it possible to isolate exactly the variables you are interested in. It is therefore quite possible that mindset interventions have a causal effect in specific circumstances without any evidence of this emerging from correlational studies.
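Nisbett's class-size point can be made concrete with a small simulation (all numbers invented): suppose schools tend to place weaker students in smaller classes, while smaller classes causally improve performance. The confounding can then completely mask the causal effect in the raw correlation, while randomization reveals it:

```python
# Sketch of a masked causal effect (all numbers invented for illustration).
# Smaller classes causally help, but weaker students are placed in smaller
# classes, so the raw correlation between class size and performance ~ 0.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
ability = rng.normal(0, 1, n)

# Observational setting: weaker students get assigned to smaller classes.
class_size = 25 + 3 * ability + rng.normal(0, 2, n)
# True causal effect: each extra pupil lowers performance by 0.23 points.
performance = ability - 0.23 * class_size + rng.normal(0, 1, n)
print(np.corrcoef(class_size, performance)[0, 1])   # ~0: effect masked

# Experimental setting: class size assigned at random.
class_size_rand = rng.uniform(15, 35, n)
performance_rand = ability - 0.23 * class_size_rand + rng.normal(0, 1, n)
print(np.corrcoef(class_size_rand, performance_rand)[0, 1])  # clearly negative
```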

A second problem concerns the interpretation of the second meta-analysis, in particular of the effect sizes found. Piling up a large number of studies and then looking at the mean effect size, as Sisk et al. do, is of little use and can be misleading. It is important to consider the nature of the studies involved.


First of all, it is important to distinguish between laboratory experiments and field experiments. Effects found in laboratory experiments are generally much larger than those found in field experiments. Burnette et al. (2013) showed in their meta-analysis that mindset interventions in laboratory experiments do indeed produce substantial effects.

When assessing effect sizes found in field experiments, it is also important to relate them to the scale and costs of the intervention. A radical, costly, and time-consuming intervention that produces a small effect size is not worth the effort. But a brief, inexpensive, low-effort intervention that produces an effect of reasonable magnitude can be very rewarding. The latter is the case with mindset interventions.

Many of the mindset interventions in the second meta-analysis by Sisk et al. were so-called light-touch interventions: very short, inexpensive, and not very intensive. Seen in that light, the significant effects reported by Sisk et al. (d = 0.08 overall and d = 0.19 for at-risk students) are meaningful, because they mostly arose from light-touch interventions and because their impact is similar to, or even compares favorably with, that of many other educational interventions that are often far more expensive and intensive.
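To put these effect sizes in perspective: assuming roughly normally distributed outcomes, a standardized mean difference d can be translated into percentile terms, as in this quick sketch:

```python
# Translating a standardized mean difference d into percentile terms,
# assuming roughly normally distributed outcomes.
from statistics import NormalDist

for label, d in [("overall", 0.08), ("at-risk students", 0.19)]:
    # The average treated student ends up at this percentile of the
    # untreated distribution.
    pct = NormalDist().cdf(d) * 100
    print(f"{label}: d = {d:.2f} -> ~{pct:.0f}th percentile (vs. 50th)")
```

For interventions this brief and cheap, moving the average at-risk student from the 50th to roughly the 58th percentile is not trivial.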


Discussion

Mindset interventions are, for the reasons I mentioned above, much more interesting and meaningful than Sisk et al. suggest. To learn more about mindsets and mindset interventions, we need to look much more precisely at when and how they can best be used. Which students in which contexts can most benefit from mindset interventions and which interventions work especially well?
