Two new studies on the effects of growth mindset interventions with different conclusions: how is that possible?

There are two new publications on the effects of growth mindset interventions. The two articles analyze the same research literature via meta-analyses but draw completely different conclusions. Macnamara & Burgoyne (2022) argue that growth mindset interventions hardly work; Burnette et al. (2022) found positive effects on academic outcomes, mental health, and social functioning. How can this be? There is a simple and important explanation described in an article by Tipton et al. (2022).

1. Introduction: what is mindset theory? 

Mindset theory was originally formulated by Carol Dweck. It states that what people believe about the malleability of intelligence predicts the types of goals they set, which in turn predicts how they explain failure (their attribution style after failure), which shapes their emotional response, and ultimately predicts how they behave and what learning results they achieve.

A growth mindset is the belief that by making effective efforts, you can get better at the things you would like to get better at. A fixed mindset is the belief that your abilities and traits are largely unchangeable. The mindset-meaning system reflects the interrelated effects of different mindsets on the types of goals, beliefs about effort, attributions of adversity, and degree of persistence.

Research has shown that growth mindset interventions are effective: 1) when used for the target audience that needs them, 2) when delivered appropriately, and 3) in a context that is supportive of a growth mindset. What is also appealing about growth mindset interventions is their light-touch character: they are simple and cheap to administer.

2. The Article That Says Mindset Interventions Don't Work 

Macnamara & Burgoyne (2022) state in their article that growth mindset interventions are popular in education but that their effectiveness has "never been systematically evaluated". They then assess whether the studies on mindset interventions meet best practices, in order to determine whether causal conclusions about mindset interventions are justified. They also performed three meta-analyses (over 63 studies, N = 97,672). They report that they identified major shortcomings in the design, analysis, and reporting of the studies, and they see indications of researcher and publication bias. For example, authors with a financial incentive to draw positive conclusions about mindset interventions reported significantly larger effects than authors without such an incentive.

In their meta-analysis, they found a small overall effect of mindset interventions (d¯ = 0.05, 95% CI = [0.02, 0.09]). Moreover, they state that this effect was no longer significant after adjusting for possible publication bias, and they found no theoretically meaningful moderators. When examining only studies showing that the intervention affected students' mindsets as intended (13 studies, N = 18,355), the effect was not significant: d¯ = 0.04, 95% CI = [-0.01, 0.10]. In the highest-quality studies (6 studies, N = 13,571) they likewise found a non-significant effect: d¯ = 0.02, 95% CI = [-0.06, 0.10].

Macnamara & Burgoyne argue that the “apparent effects of growth mindset interventions on academic performance are likely due to inadequate study design, reporting errors, and bias.” They further conclude that “despite the popularity of growth mindset interventions in schools, positive outcomes are rare and potentially spurious due to inadequately designed interventions, reporting errors and bias.” 

3. The Article That Says Growth Mindset Interventions Do Work 

Jeni Burnette and a number of colleagues write that as growth mindset interventions are increasingly widely adopted, it is important to know whether they are effective (Burnette et al., 2022). They argue that to answer this question properly, it is important to understand meaningful heterogeneity in effects. The authors conducted a systematic review and meta-analysis, considering two main moderators: 1) subsamples expected to benefit most and 2) the appropriateness of the intervention's implementation. Furthermore, the authors introduce a process model that can support further theory development.

In their research on mindset interventions, they included randomized studies published between 2002 and the end of 2020, finding 53 independent samples in which different interventions were tested. They reported cumulative effect sizes for multiple outcomes (mindset, motivation, behavior, and results), with a focus on three primary outcomes: improved academic performance, mental health, and social functioning.

Multilevel meta-regression analyses (with targeted subsampling and correct implementation) revealed significant effects for academic achievement (d = 0.14, 95% CI = [0.06, 0.22]) and for mental health (d = 0.32, 95% CI = [0.10, 0.54]). They also report large variation in effects. They conclude their article with a discussion of heterogeneity and the limitations of meta-analyses.
The authors conclude: “Despite the wide variation in effectiveness of growth mindset interventions, we found positive effects on academic outcomes, mental health, and social functioning, especially when interventions are applied to people who are expected to benefit the most.”

4. The article explaining the difference between the two articles 

Beth Tipton is a statistician who is an expert in meta-analysis. She co-wrote a commentary with a number of other researchers (including noted mindset researcher David Yeager) (Tipton et al., 2022). Tipton et al. write that researchers who perform meta-analyses often incorrectly ask yes-or-no questions: Is there an intervention effect or not? They write that this traditional all-or-nothing thinking contrasts with current best practice in meta-analysis, which calls for a heterogeneity-oriented approach.

The heterogeneity-oriented approach means that you have to look at the extent to which effects vary between procedures, target groups, or contexts. This will help you understand where and why effects are weaker or stronger. The authors compare the two meta-analyses mentioned above, which, based on the same literature, arrive at completely different conclusions due to the different methods they used.

Macnamara & Burgoyne pooled the effect sizes for each study and then examined moderators one by one by dividing the data into small subgroups. Burnette et al. modeled the variation in effects across studies, subgroups, and outcomes, applying modern multilevel meta-regression methods. The first study concluded that growth mindset effects are biased; the second yielded nuanced conclusions consistent with theoretical predictions. Tipton et al. explain why the approach of the Burnette study is more in line with best practices for analyzing a large and heterogeneous literature.
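To make the methodological contrast concrete, here is a minimal sketch (with invented effect sizes, not data from either meta-analysis) of why pooling everything into one average can mask an effect that a moderator-aware analysis reveals:

```python
# Hypothetical effect sizes (Cohen's d) from six imagined studies.
# Three studies sampled at-risk students; three sampled general populations.
studies = [
    {"subgroup": "at-risk", "d": 0.20},
    {"subgroup": "at-risk", "d": 0.16},
    {"subgroup": "at-risk", "d": 0.24},
    {"subgroup": "general", "d": 0.02},
    {"subgroup": "general", "d": -0.01},
    {"subgroup": "general", "d": 0.05},
]

def mean(xs):
    return sum(xs) / len(xs)

# Naive pooling: one grand average over all studies.
pooled = mean([s["d"] for s in studies])

# Moderator-aware view: average within each subgroup.
by_subgroup = {}
for s in studies:
    by_subgroup.setdefault(s["subgroup"], []).append(s["d"])
subgroup_means = {g: mean(ds) for g, ds in by_subgroup.items()}

print(f"pooled mean d: {pooled:.2f}")        # small, easy to dismiss
for g, m in subgroup_means.items():
    print(f"{g}: mean d = {m:.2f}")          # the effect concentrates in one group
```

An actual multilevel meta-regression (e.g., weighting by study precision and modeling between-study variance) is far more involved than this subgroup averaging, but the core point is the same: the grand mean understates the effect for the group the intervention targets.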
The authors criticize further aspects of the Macnamara & Burgoyne article in detail, such as their decision to construct their own list of study-quality criteria when clear, generally accepted criteria already exist. They also criticize the way Macnamara & Burgoyne operationalized manipulation checks and their curious definition of financial conflicts of interest.

Furthermore, Tipton et al. re-analyzed the dataset of Macnamara & Burgoyne using the modern, heterogeneity-oriented methods of Burnette et al. This re-analysis confirmed the conclusions of Burnette et al.: there is a meaningful, significant effect of growth mindset in focal (at-risk) groups. The authors argue that heterogeneity-oriented meta-analysis is important both for advancing theory and for avoiding the boom-or-bust cycle ("does the intervention work or not?") that plagues too much of psychological science.

5. My reflections 

Tipton et al.'s insight is simple and important. We need to get rid of the simplistic question, "Does it work or does it not work?" Compare the situation with research into the effectiveness of paracetamol. If you want to investigate that, you have to look specifically at people who are in pain: has their pain decreased after taking paracetamol? Most people are not in pain, and paracetamol produces no improvement for them. If you take a sample in which a few people are in pain and many are not, and then look at the average effect of paracetamol on that sample, you make an elementary mistake.
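The arithmetic behind this analogy is worth spelling out. In this toy calculation (all numbers invented for illustration), paracetamol reduces pain by 2 points on a 10-point scale, but only for the 20% of the sample who are in pain:

```python
# A sample of 100 people: 20 in pain, 80 pain-free.
sample = [{"in_pain": True}] * 20 + [{"in_pain": False}] * 80

def pain_reduction(person):
    # Assumed effect: 2.0 points of relief for those in pain, 0 otherwise.
    return 2.0 if person["in_pain"] else 0.0

effects = [pain_reduction(p) for p in sample]

# Averaging over everyone dilutes the effect fivefold.
average_over_everyone = sum(effects) / len(effects)

# Averaging over the people the drug is for answers the real question.
average_among_in_pain = sum(e for e in effects if e > 0) / 20

print(average_over_everyone)   # 0.4 — looks negligible
print(average_among_in_pain)   # 2.0 — the effect for the target group
```

The drug's effect has not changed; only the denominator has. The same dilution occurs when a meta-analysis averages growth mindset effects over students who were never the intervention's target group.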

We need to let go of this all-or-nothing thinking. Average effects over heterogeneously composed samples mean little or nothing; the effects of interventions differ widely across individuals and contexts. Macnamara & Burgoyne use an outdated approach to meta-analysis (as Sisk et al. did years ago), and their analysis therefore yields misleading findings. Burnette et al. use state-of-the-art meta-analytic techniques. When you apply these techniques to the Macnamara & Burgoyne dataset, you get the same positive findings as Burnette et al. (2022).