David started by saying that scientific rigor is important because it discredits cherished false beliefs and snake-oil solutions. He went on to say that more serious concepts and approaches, such as work engagement, the innovator's dilemma, and the growth mindset, also come in for criticism. He said: "I feel it's only a matter of time before someone writes a harsh critique of the growth mindset." He then pointed out that there is a risk that healthy skepticism might turn into cynicism. He closed his email by raising the question of whether a concept that is perhaps not scientifically rigorous might still be useful and better than nothing.
I think this is a very interesting topic. I agree that nothing is above criticism, and I have no doubt that David is right that the growth mindset will also be criticized. No method or concept is final or perfect. The good thing about a rational and scientific approach to truth claims is that we may find out what is wrong with specific claims so that we can gradually refine and improve them. The growth mindset is a valuable concept, but it is not perfect, and it is inevitable that its conceptualization will be refined as we discover where it falls short.
But I do feel that some people tend to go too far in their criticism and become hostile. One skeptical researcher, for example, often attacks claims which, in his view (and he is probably more often right than wrong, I admit), are based on sloppy science. What I do not like is that he usually also makes it quite personal by attributing bad intentions to people or groups of people, saying they are more interested in their commercial goals than in doing honest science. I find this ironic: he criticizes them for making sloppy claims, but I think he is doing the same thing, namely making claims about people’s intentions which he cannot substantiate. Shouldn’t he be more evidence-based when saying such things? What’s his evidence?
David and I agree that demanding scientific rigor is a good thing. For example, the fact that Losada and Fredrickson’s conclusions about their positivity ratio were based on wrong math (read more here) is important to address. We should not hold on to an idea when it turns out to be based on a flawed study. But that does not mean that Losada and Fredrickson did something dishonest. Mistakes are inevitable in science and in practice; we should gradually improve what we are doing while respecting other people’s intentions, even when they make mistakes, unless we have evidence of fraud.
My view is that a truth claim is not simply either right or wrong. I think it is more realistic to think in terms of a continuum, which makes it possible to distinguish less wrong from more wrong (read more about this here). When we talked about this some more, David added that, apart from a truthfulness continuum, we might also think in terms of a usefulness continuum. Sometimes we may even use methods that we know are not valid in their foundations and in their specific claims (such as horoscopes), because we have noticed that they can be helpful: they mention some obvious generic truths that may be just what we need to generate useful ideas.