I agree with Ed’s points.
My beefs are different. The main ones are:
- RCTs may well measure “if something worked”. But the tacit assumption behind funding them is that they are generating knowledge that can be transferred reliably from one context and time to another. This assumption is not being tested.
- RCTs transfer a methodology developed for extremely unintelligent populations (such as cells responding to drugs), to very intelligent populations, with culture, forethought, learning, and language. This transfer is a massive leap of faith, and the leap has been glossed over in the making.
- RCT proponents (and the formal evaluation community as a whole) fail to eat their own dog food.
To expand on that last point:
RCTs hang their legitimacy on a larger marketing spiel: that investment in development should be “evidence-based”. This is a motherhood statement. Who can be against evidence? It sounds like the alternative would be to pluck investment rationales out of the air, or base them on personal idiosyncrasies. The real question is whether RCTs are a good form of evidence upon which to base investment decisions.
Let’s ask this: what evidence is there that RCTs drive good development? This is not a hard question to study. The whole concept of “development” is based on the fact that there are dozens of countries around the world that have indeed “developed.”
Let’s look at the United States, one of the wealthier countries in the world. Is its wealth being driven by RCT-guided investments? Look at the fabulous technological and social innovations that underpin that wealth. Are these being driven by RCTs? Are there any RCTs being done in Silicon Valley? Do venture capitalists commission RCTs before placing investments?
I don’t have hard data on this, but I suspect that the answer is “no.”
Let’s look at the flip side of this line of questioning: where RCT research has been conducted over a long period of time, do we see resounding success? Since RCTers’ main clients are government, this is the same as asking the question: does RCT-driven government programming work? Does the United States have effective poverty alleviation within its own borders? Does it have the healthiest poor? The best educated? Does it have the cleanest water in the world? (Ironically, the water seems to be polluted by those same RCT-tested drugs, an impact that the RCTs didn’t pick up on.)
The answer, to me, is not immediately obvious. Maybe RCT-guided government programs are the most effective in the world, and the United States (our example) does indeed have the healthiest, best educated, and most upwardly mobile poor people in the world. But before we export the RCT methodology to poor people around the world, I’d want to see some hard evidence that RCTs have worked inside our own countries.
But the clarion call for the RCT craze is “what about what RCTs did for medicine?” Well, that’s the wrong question.
The developed countries have created demonstrably healthier populations. Life expectancy has doubled over the last 150 years or so, from about 40, to about 80. (If I get some comments on this post, I’ll dig up more precise data.) It is precisely this kind of development impact that we want to share with people in other, “lesser developed” countries.
However, in this doubling, medicine doesn’t get much credit. It’s public health that did the work. And improvements in public health were not guided by RCTs, but rather by scientific developments such as the germ theory of disease, technological developments like the microscope, and engineering developments which made centralised water supplies possible.
Furthermore, even within medicine, the major milestones were not the result of RCT guidance. No RCTs in the development (or testing) of sulfa drugs. Or anaesthesia. Or the early leaps forward in antisepsis.
Here’s my hypothesis (testable, though I haven’t tested it yet) on when and why RCTs were introduced in health. It was when all the heavy lifting had been done, all the major improvements made, and further development was “at the edges”: improvement became marginal, we saw declining returns on investment, and it was no longer obvious whether there were any improvements at all. That’s when complex statistical methods become required to see “if something works.”
I am reminded of an old bit of folk knowledge. The Romans expressed it as a question:
Quis custodiet ipsos custodes?
Who guards the guardians?
The RCT fans would like to set themselves up as the guardians of “what works” in development. Before we let that happen, we should ask to see rigorous, hard-nosed, empirical evidence that RCTs themselves work to guide effective development investments.
My thesis is that RCTs are latecomers to that development process, in countries where development has succeeded, and that their role has been marginal at best. I’d say that on the face of it, the thesis looks pretty sound.
So if the RCTers want to sell themselves on the basis of the contrary… I want to see the evidence. We have hundreds of years of successful development history in the developed world. Let’s look through it. Where is the evidence that, historically, where development has taken place, it has been guided by RCTs?
And if it is not RCTs, but something else that has guided us effectively so far: then let’s use that.