This post is inspired by a post by Duncan Green here, in which he invites comments on an ODI working paper by Ben Ramalingam, Miguel Laric and John Primrose, which is posted here: “From Best Practice to Best Fit: Understanding and navigating wicked problems in international development.”
This post may not make sense unless you read that paper.
The term “wicked problem” was coined by Horst Rittel in the 1970s. I have a personal history with these problems because I took Rittel’s class on wicked problems. I also did my master’s degree with another systems theorist, Chris Alexander, whose concept of pattern language (currently used in the design of computer software) was devised specifically to defeat the extensive interconnectivity of wicked problems by “decomposing” the graph of a problem so that the strongest relationships were preserved, and only the weaker links were severed. This process was described in what is now a design classic: Notes on the Synthesis of Form—still in press after 40 years, which is not bad. You can have a look at it here.
On leaving Berkeley, I then spent seven years in the East Sepik of Papua New Guinea implementing pattern theory in a development context. That work won a number of awards, including being shortlisted for the World Habitat Award. I mention these not because I think awards prove anything, but because you have to implement in order to win. We implemented. We encountered many wicked problems.
The term “wicked problems” first came to public attention through another Berkeley prof, C. West Churchman. Churchman was a leading systems theorist and one of the key figures in operations research. Churchman used the term in a guest editorial in a journal, in which he discussed the moral responsibility of operations research to, quote:
…inform the manager in what respect our ‘solutions’ have failed to tame his wicked problems.
I cite this because I think the ODI paper makes much too sanguine a case for resolving wicked problems. It says:
The scholars who developed the wicked problems framework emphasise the importance of operational research methods in understanding such problems and navigating towards improved policy and practice.
No footnote. Churchman’s point was that you have a moral obligation to fess up when your ops research fails to tame wicked problems. And in the landmark paper in which Rittel and his coauthor Melvin Webber describe wicked problems in depth, they say this:
Many now have an image of how an idealized planning system would function. It is being seen as an on-going, cybernetic process of governance, incorporating systematic procedures for continuously searching out goals; identifying problems; forecasting uncontrollable contextual changes; inventing alternative strategies, tactics, and time-sequenced actions; simulating alternative and plausible action sets and their consequences; evaluating alternatively forecasted outcomes; statistically monitoring those conditions of the publics and of systems that are judged to be germane; feeding back information to the simulation and decision channels so that errors can be corrected—all in a simultaneously functioning governing process. That set of steps is familiar to all of us, for it comprises what is by now the modern-classical model of planning. And yet we all know that such a planning system is unattainable, even as we seek more closely to approximate it. *It is even questionable whether such a planning system is desirable.*
The final emphasis is mine.
In short, these founding figures of systems theory were not merely noting the existence of wicked problems in the real world, but questioning the ability—and even the advisability—of using cybernetics to address these problems. Summarising Rittel and Webber: scientific problems, yes. Social problems, no.
Although the ODI paper is a bit light on the specifics of the pilots, it appears that they depend wholly on first order cybernetics, and in particular on these elements from Rittel and Webber’s description:
- inventing alternative strategies, tactics, and time-sequenced actions
- statistically monitoring those conditions of the publics and of systems that are judged to be germane
- feeding back information to the simulation and decision channels so that errors can be corrected
- all in a simultaneously functioning governing process
The paper describes its approach as “learning by doing”. The problems discussed are described in terms of cybernetic network diagrams. And the phasing diagram “Figure 9: Evolving programme approach” does not go beyond first order cybernetics:
- feedback loops within the overall program cycle
- parallel trialling of alternative strategies
- some probabilistic thinking instead of simplistic go/no go
Now: I think that these are all good ideas and the authors do us a service by trialling them in aid projects. And I agree with the ODI paper that (probably) “the majority of development problems may well be of the wicked variety.”
Where I believe the paper goes wrong is in proposing:
- causal network diagrams as getting “inside the black box”
- another layer of “systems, processes and capacities” for aid projects and organisations
- that any of the above is an operations research approach to wicked problems.
Over the next few weeks, I intend to write three further posts inspired by this paper, along the following lines:
- why drawing complex spaghetti diagrams is not part of the solution but actually part of the problem, and what really happens to the Evolving Programme Approach when it encounters true wickedness
- why taking the good ideas in this paper and casting them in terms of complexity, wickedness, and ops research delays their trialling and adoption, and could make projects worse
- why the best approach to complex problems does not lie in simple machine-like diagrams and processes (which, as we know, all donors love to see, and which have their place) but in how other equally complex systems (we call them “humans”) are deployed.
Content and order may be varied on the day.