Fad surfing in the development boardroom

by David Week on 25 October 2011


This is a response to J’s post on Tales from the Hood, entitled: “Fail”.

The title of my post comes from a book I have on my shelf: “Fad Surfing in the Boardroom: Reclaiming the Courage to Manage in the Age of Instant Answers.” Note the subtitle, which proposes that the alternative to fad surfing is avoiding “instant answers” and reclaiming the courage to manage.

That book sits next to another: “Dangerous Company: The Consulting Powerhouses and the Businesses They Save and Ruin.” Together, these two books deliver the following message from the world of business:

  • Fads sweep through business on a regular basis
  • Each of these fads contains a small kernel of useful truth, surrounded by voluminous layers of bullshit
  • Each of these fads is pushed by a coterie of people who stand to advance their careers by pushing something
  • As a manager, it’s okay to examine the fad, and extract and use the kernel of useful truth
  • If you buy the whole enchilada, you are morally reprobate, and will likely drive your company into the graveyard.

Fads in development

In J’s post, he talks about “admitting failure” as the latest wave of development thinking:

“Admitting failure” has been slowly gaining momentum for a few years now, at least in the aid world. It’s one of those ideas whose time… is just around the corner. Much like all things “local”, like “sustainability” before that, and “evidence-based programming” before that, “admitting failure” is the sexy new relief and development language convention of the month, and as MJ further points out, is almost certain to become de rigueur in proposals, monitoring and evaluation reports, and NGO external publications within the foreseeable future.

Well, that’s a good list of recent development fads. What’s missing is any discussion of whether any of these fads have actually made a difference to recipients of aid. That’s an honest question: I’d like to hear the answer from the recipients, not the purveyors. To my knowledge, no-one has asked.

J then goes on to express some concerns about this wave. My summary:

  • It could be no more than PR: not real learning
  • The concept is so vague as to be potentially useless
  • The histrionic and extreme tenor of the idea of “failure”
  • A lack of appetite in the aid community for nuanced analysis

However, he ends on a positive note: that Edison tried 9000 filaments for his light bulb before coming across a material that worked. “…we have to be getting this stuff right or abandoning particular practices long before try number 9000.” He also gives kudos to “Engineers Without Borders for their nascent leadership within the industry to admit failure.”

My view:

  • the whole notion of “admitting failure” is a fad
  • it’s not a good idea
  • it “seems” to make sense, but only in a shallow way which ignores the deep cultural resources we have at hand with which to craft better aid
  • and the best thing to do is to ignore it completely, if we want to get on with the real job of development.

In other words: it’s not just misconceived. It’s a damaging distraction that will lead people to think that they are doing something, when they are not.

How to make things better, really

Okay, here’s the formula…

No! There is no formula! But here’s a clue.

Western industrialised culture has invented the most powerful error-correction systems in the history of the planet. You sit in a building which doesn’t leak and which is immune to earthquakes. You drink quality-assured coffee from a quality-assured cup, made with quality-assured water that comes out of your tap day after day after day with a predictable and consistent level of purity. You use a computer which is complex beyond your understanding, which, the day it came out, could be bought in the hundreds of thousands from 10,000 places in 50 countries, and which will work reliably for years without repair or maintenance. You drive a car with the same characteristics. We fly in planes that don’t crash, and are even safer than the cars. All of this is so taken for granted that people complain bitterly at the slightest “failure” in any of these flawless products and services. Your FedEx shipment is a day late? Your book from Amazon came with an uncut page? God forbid that the LCD you’re staring at has a dead pixel. What? THE POWER WENT OUT??? It’s the end of the world as we know it.

The science of quality

Now, none of this consistency we take for granted happens by accident. (He says, with a hint of irony.) This ability to deliver products and services to an extraordinary degree of quality is the result of an enormous body of knowledge which travels under rubrics like Quality Assurance, Six Sigma, Lean Thinking, and kaizen. Some trace it back to our roots in Greek culture, when Plato invented the idea of “perfection”, something we struggle ceaselessly to achieve.

If you’re not familiar with the contemporary history of quality, I suggest you google the following terms (including the quote marks):

  • kaizen (15.5 million hits)
  • “quality assurance” (69m hits)
  • “six sigma” (17m hits)
  • “lean manufacturing” (5.4m hits)
  • “W. Edwards Deming” (358k hits)

…and scan some of the articles. Including the quotes means that you are only getting hits that are specific to quality, because only those articles will be using these very specific terms. And I’m counting the hits by way of proof of just how pervasive this material is in our society: it is why flying is safe, cars come with three-year guarantees, your lights are on, water comes out of your tap, and you can buy from Amazon and it arrives at your door.

In comparison:

  • “admitting failure” gets 224k hits, and because it’s such a generic thing to say, most of those will have nothing to do with this fad in development
  • half of the top 10 entries are about development
  • the number two entry is J’s very post on “admitting failure”
  • the number one entry is a product of EWB Canada.

All of this is clear evidence that “admitting failure” is not how our culture has achieved these amazing feats of error-free service and product delivery. Instead, a snarky blog post on the Internet is ranked by Google as the second most authoritative document on the matter.

I think I can safely say that the whole fad is the recent invention of one person.

The kernel of truth in “admitting failure” comes from an early (60-year-old) principle of the science of quality, which is that in order to get quality, you have to “measure variance.” But “failure” is not a quality term at all, because — as J points out — it is so mushy as to be useless. Neither is “admit”, because that term assumes that someone is to blame, whereas in practice 96% of the sources of error turn out to be systemic: not anyone’s “fault”.

In 30 years in development, I have yet to work on a project that didn’t “measure variance”, though some do it far better than others.
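For readers unfamiliar with what “measuring variance” looks like in practice, the classic tool from the Shewhart/Deming tradition is the control chart: it separates ordinary, systemic variation from “special cause” events worth investigating, with no notion of blame at all. Here is a minimal sketch in Python (the numbers and the scenario are invented for illustration, not drawn from any real project):

```python
import statistics

def control_limits(samples, sigmas=3):
    """Shewhart-style control limits: mean +/- k standard deviations.

    Points inside the limits reflect ordinary, systemic variation;
    points outside signal a "special cause" worth investigating.
    """
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return mean - sigmas * sd, mean + sigmas * sd

# Hypothetical example: weekly delivery times (in days) for some output.
history = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.0]
lo, hi = control_limits(history)
latest = 7.9
print(latest > hi)  # True: outside the limits, so investigate the process
```

Note the contrast with “admitting failure”: the chart never asks who failed; it asks whether the process itself has shifted.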

An example of how not to do quality

Since J praised EWB, I thought it was due diligence to take a look. You can find the “Failure Reports” here. I examined one. Here are my findings. I don’t have time to read more, but I invite you to do so.

I chose the 2009 Failure Report.

I scanned down to the first English language title (some are in French), which was “Hiring Local Staff”, by Sarah Grant.

Note: do not take the following as a critique of Sarah Grant. 96% of the time, it’s the system.

The problem Sarah describes is that she is tasked at the end of a program to make arrangements to make the program sustainable by transferring wholly to a government partner. In doing so, she hires additional staff to strengthen that partner. That doesn’t work out.

  • Problem: The article is part of a series called “Learning from our mistakes: A collection from overseas volunteer staff.” It’s not a good idea to leave sole responsibility for identifying errors to the frontline young professional, 1 since she can’t see the whole system. Real learning requires an outside view. Self-assessment can be part of a quality system, but alone it is not sufficient.
  • For the same reason, it’s not a good idea to have young professional staff doing the assessment alone. They’re likely to be inexperienced, and thereby may not contextualise the problem, or propose the correct learnings.

Diagnostic 1: A young professional frontline worker is left alone to figure out what went wrong.

  • Sarah writes “The project was designed in a sustainable manner in the first place.” But there are many tell-tales that it was not designed to be sustainable.
  • “Past volunteers had established the project as something where EWB volunteers go to a community, remain there for 6 weeks and leave once a computer livelihood training centre is established. So at the community level the project was sustained.” I read this to say that the program was volunteer-initiated and volunteer-staffed. Not a good ground for sustainability.
  • The project depended on government for ongoing support. But though the project had a government partner in name, in practice that partner did not own the project from the outset. She writes: “At the national level we weren’t set up for success.” In other words: not designed for success.
  • Her assignment was: “…to help EWB phase responsibly out of the Scala project. Or rather have the project continue to be run successfully without our presence. This was a huge task but I was up for it!” So the phase-out was not pre-planned and agreed at the outset, but rather assigned to her as a “huge task” at the end.
  • “So, back to the Social Technology Bureau, I brought up the idea of having the Scala Project a permanent part of someone’s work. “We’re too busy”, was the response.” And rightly so. Government departments are established to implement the will of the Government, not to adopt projects initiated and established by overseas agencies. Again, that she has to propose that this project be “a permanent part of someone’s work” screams “no ownership” and “donor-driven, not partner-driven.”

Diagnostic 2: This project has sustainability problems from the get-go, and Sarah’s “failure” has nothing to do with her. It was a design failure.

Here are her “lessons learned”:

  • “Add on solutions don’t work”. Here Sarah suggests that bringing an outsider in to support the government department doesn’t work, because outsiders don’t work. This is incorrect. Government departments in all kinds of countries, and in many development projects, bring in consultants and contractors to do work where the hands of the civil servants are too full, or the work needs special expertise. In fact, it’s essential to do so, because you can’t load up overloaded civil servants indefinitely.
  • “Money and capacity building don’t work.” Here, she suggests that the government accepted the outside worker only because that worker represented additional resources. Again, incorrect. Departments reject offers of outside contractors and consultants all the time, because supervising them takes effort, or because they don’t agree with what those people are being hired to do. Again: the real problem is that you cannot hand a non-government project to a government department. It’s actually illegal in most cases for them to accept! She was given an impossible task.
  • “I didn’t really trust that my partner would figure out a way to continue the project in my absence so I forced a solution on them.” Probably accurate, but again this looks like a consequence of the way that the project was established. It was not her error; she can’t see outside of her task (because of her situation), and so will draw incorrect conclusions.

In the end, the news was good. The Government did adopt the project, and it continues. However:

Diagnostic 3: Sarah, because she is inside the situation and not outside it, and may as a young professional not have much experience, draws two incorrect conclusions. These are not only incorrect: they are damagingly so. Anyone who holds these beliefs is going to be hindered in their future work.

What’s missing from this system?

If we go up a level, we find that these “Failure Reports” are part of EWB’s “Accountability.” This accountability consists of three things:

  • Testimonials from partners: “It has been a wonderful thing having Heather here…”
  • Testimonials (but not copies of the reports) from independent evaluators
  • Failure Reports written by frontline workers.

On my assessment, this does not constitute “accountability” for an aid organisation. I’m not saying that EWB does not do good work: just that you can’t tell from this.

Imagine an airline in which, when a pilot came down too hard and collapsed the nose wheel, the airline’s response was to have the pilot consider what happened and write up his or her “lessons learned.”

Imagine further that:

  • none of the pilots has more than 10 years’ experience
  • none of their reports were independently vetted or assessed by more senior pilots
  • these reports, together with flyer testimonials and some untabled reports by outside evaluators, constitute the whole of the airline’s safety system.

Would you fly?

9000 filaments

J suggests that the poor are important, that development assistance is important, and that if we in development are doing something wrong, we’d better figure that out and change it before we do it 9000 times.

I couldn’t agree more.

However, my approach would be:

  1. Don’t do anything without doing a literature search. Be like Newton, and see far because you stand on the shoulders of giants (i.e. the work of the many that have come before you)
  2. Don’t reinvent the wheel (all over again) by trying to figure out a solution from scratch
  3. Don’t follow the most recent fad.

How do you spot a fad?

  • It’s branded with a trendy label.
  • It claims to do something really important: but on fifteen minutes’ reflection you can see that if what it claims to do weren’t already being done, all kinds of things which do happen wouldn’t be happening. 2
  • It was invented recently by a couple of people who aren’t themselves standing on the shoulders of giants.

Nostrums aside:

On good (i.e. most) projects people work in teams. In good projects, people know what their objectives are. They are also well aware of whether or not they are achieving them. They discuss these problems. They come up with solutions, and implement them. Some solutions work, and some don’t. Through this process, people learn: outsiders and insiders. In a good project, that learning sticks locally, because the project is owned and operated by local institutions and people; and the knowledge spreads to other projects through the constant churn of development workers.

Could we do better to document new knowledge? Yes: but “admitting failure” is a poor model. Better, tried and tested models are available.

Could we do better to disseminate new knowledge? Again: yes. But again: don’t look for new wheels. Just adapt and improve the excellent wheels that already exist.


  1. Thank you, Erin, for your comment, which highlighted that I should be speaking about young professionals, not volunteers.
  2. For instance, the World Bank moved away from funding dams, and into funding education. Oxfam moved away from accountability to donors, to accountability to beneficiaries. AusAID moved away from inputs driven projects, to outputs driven projects, to programs, to SWAps. AusAID’s White Paper on PNG during the Strategic Review stated explicitly that they’d spent a billion dollars there, with nothing to show for it. Every time you see an agency change tack, or strategy, or even objectives: people are admitting that what they were doing before was flawed, or not good enough.
