Fad surfing in the development boardroom

by David Week on 25 October 2011


This is a response to J’s post on Tales from the Hood, entitled: “Fail”.

The title of my post comes from a book I have on my shelf: “Fad Surfing in the Boardroom: Reclaiming the Courage to Manage in the Age of Instant Answers.” Note the subtitle, which proposes that the alternative to fad surfing is avoiding “instant answers” and reclaiming the courage to manage.

That book sits next to another: “Dangerous Company: The Consulting Powerhouses and the Businesses They Save and Ruin.” Together, these two books deliver the following message from the world of business:

  • Fads sweep through business on a regular basis
  • Each of these fads contains a small kernel of useful truth, surrounded by voluminous layers of bullshit
  • Each of these fads is pushed by a coterie of people who stand to advance their careers by pushing something
  • As a manager, it’s okay to examine the fad, and extract and use the kernel of useful truth
  • If you buy the whole enchilada, you are morally reprobate, and will likely drive your company into the graveyard.

Fads in development

In J’s post, he talks about “admitting failure” as the latest wave of development thinking:

“Admitting failure” has been slowly gaining momentum for a few years, now, at least in the aid world. It’s one of those ideas whose time… is just around the corner. Much like all things “local”, like “sustainability” before that, and “evidence-based programming” before that, “admitting failure” is the sexy new relief and development language convention of the month, and as MJ further points out, is almost certain to become de rigeur in proposals, monitoring and evaluation reports, and NGO external publications within the foreseeable future.”

Well, that’s a good list of recent development fads. What’s missing is any discussion of whether any of these fads have actually made a difference to recipients of aid. That’s an honest question: I’d like to hear the answer from the recipients, not the purveyors. To my knowledge, no-one has asked.

J then goes on to express some concerns about this wave. My summary:

  • It could be no more than PR: not real learning
  • The concept is so vague as to be potentially useless
  • The histrionic and extreme tenor of the idea of “failure”
  • A lack of appetite in the aid community for nuanced analysis

However, he ends on a positive note: that Edison tried 9000 filaments for his light bulb before coming across a material that worked. “…we have to be getting this stuff right or abandoning particular practices long before try number 9000.” He also gives kudos to “Engineers Without Borders for their nascent leadership within the industry to admit failure.”

My view:

  • that the whole notion of “admitting failure” is a fad
  • it’s not a good idea
  • it “seems” to make sense but only in a shallow way, which ignores the deep cultural resources we have at hand with which to craft better aid
  • and the best thing to do is ignore it completely, if we want to get on with the real job of development.

In other words: it’s not just misconceived. It’s a damaging distraction that will lead people to think that they are doing something, when they are not.

How to make things better, really

Okay, here’s the formula…

No! There is no formula! But here’s a clue.

Western industrialised culture has invented the most powerful error-correction systems in the history of the planet. You sit in a building which doesn’t leak, which is immune to earthquakes. You drink quality-assured coffee from a quality-assured cup, made with quality-assured water that comes out of your tap day after day after day with a predictable and consistent level of purity. You use a computer which is complex beyond your understanding, which the day it came out could be bought in the hundreds of thousands from 10,000 places in 50 countries, and which will work reliably for years without repair or maintenance. You drive a car with the same characteristics. You fly in planes that don’t crash, and are even safer than the cars. All of this is so taken for granted that people complain bitterly at the slightest “failure” in any of these flawless products and services. Your Fedex shipment is a day late? Your book from Amazon came with an uncut page? God forbid that the LCD you’re staring at has a dead pixel. What? THE POWER WENT OUT??? It’s the end of the world as we know it.

The science of quality

Now, none of this consistency we take for granted happens by accident. (He says, with a hint of irony.) This ability to deliver products and services to an extraordinary degree of quality is the result of an enormous body of knowledge which travels under rubrics like Quality Assurance, Six Sigma, Lean Thinking, and kaizen. Some trace it back to our roots in Greek culture, when Plato invented the idea of “perfection”, something we struggle ceaselessly to achieve.

If you’re not familiar with the contemporary history of quality, I suggest you google the following terms (including the quote marks):

  • kaizen (15.5 million hits)
  • “quality assurance” (69m hits)
  • “six sigma” (17m hits)
  • “lean manufacturing” (5.4m hits)
  • “W. Edwards Deming” (358k hits)

…and scan some of the articles. Including the quotes means that you are only getting hits that are specific to quality, because only those articles are going to be using these very specific terms. And I’m counting the hits, by way of proof of just how pervasive this material is in our society: which is why flying is safe, cars come with three year guarantees, your lights are on, water comes out of your tap, and you can buy from Amazon and it arrives at your door.

In comparison:

  • “admitting failure” gets 224k hits, and because it’s such a generic thing to say, most of those will have nothing to do with this fad in development
  • half of the top 10 entries are about development
  • the number two entry is J’s very post on “admitting failure”
  • the number one entry is a product of EWB Canada.

All of this is clear evidence that “admitting failure” is not how our culture has achieved these amazing feats of error-free service and product delivery. Instead, a snarky blog post on the Internet is ranked by Google as the second most authoritative document on the matter.

I think I can safely say that the whole fad is the recent invention of one person.

The kernel of truth in “admitting failure” comes from an early (60-year-old) principle of the science of quality, which is that in order to get quality, you have to “measure variance.” But “failure” is not a quality term at all, because — as J points out — it is so mushy as to be useless. Neither is “admit”, because that term assumes that someone is to blame, whereas in practice 96% of the sources of error turn out to be systemic: not anyone’s “fault”.
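To make “measure variance” concrete: the classic tool of the quality tradition is the control chart, which flags measurements that drift outside an expected band around the process mean. Here is a minimal sketch in Python (my own illustration with made-up readings, not an example from the quality literature; real Shewhart charts conventionally use 3-sigma limits computed from an in-control baseline):

```python
# A minimal sketch of "measuring variance": a control chart in miniature.
# Readings far outside the band around the mean signal a process problem
# worth investigating -- no "blame" or "admission" required.
from statistics import mean, stdev

def control_limits(samples, k=3):
    """Return (lower, upper) limits at k standard deviations from the mean."""
    m, s = mean(samples), stdev(samples)
    return m - k * s, m + k * s

def out_of_control(samples, k=3):
    """Indices of samples falling outside the control limits."""
    lo, hi = control_limits(samples, k)
    return [i for i, x in enumerate(samples) if x < lo or x > hi]

# Hypothetical day-to-day purity measurements; one reading is anomalous.
readings = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 14.5, 9.9]
print(out_of_control(readings, k=2))  # prints [6]: the 14.5 reading stands out
```

Note that the outlier is identified purely from the numbers, systemically; which reading is “at fault” is a question the chart never needs to ask.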

In 30 years in development, I have yet to work on a project that didn’t “measure variance”, though some do it far better than others.

An example of how not to do quality

Since J praised EWB, I thought it was due diligence to take a look. You can find the “Failure Reports” here. I examined one. Here are my findings. I don’t have time to read more, but I invite you to do so.

I chose the 2009 Failure Report.

I scanned down to the first English language title (some are in French), which was “Hiring Local Staff”, by Sarah Grant.

Note: do not take the following as a critique of Sarah Grant. 96% of the time, it’s the system.

The problem Sarah describes is that, at the end of a program, she is tasked with making the program sustainable by transferring it wholly to a government partner. In doing so, she hires additional staff to strengthen that partner. That doesn’t work out.

  • Problem: The article is part of a series called “Learning from our mistakes: A collection from overseas volunteer staff.” It’s not a good idea to leave to the frontline young professional 1 the sole responsibility for identifying errors, since she can’t see the whole system. Real learning requires an outside view. Self-assessment can be part of a quality system, but alone it is not sufficient.
  • For the same reason, it’s not a good idea to have young professional staff doing the assessment alone. They’re likely to be inexperienced, and thereby may not contextualise the problem, or propose the correct learnings.

Diagnostic 1: A young professional frontline worker is left alone to figure out what went wrong.

  • Sarah writes “The project was designed in a sustainable manner in the first place.” But there are many tell-tales that it was not designed to be sustainable.
  • “Past volunteers had established the project as something where EWB volunteers go to a community, remain there for 6 weeks and leave once a computer livelihood training centre is established. So at the community level the project was sustained.” I read this to be that the program was volunteer initiated and staffed. Not a good ground for sustainability.
  • The project depended on government for ongoing support. But though the project had a government partner in name, in practice that partner did not own the project from the outset. She writes: “At the national level we weren’t set up for success.” In other words: not designed for success.
  • Her assignment was: “…to help EWB phase responsibly out of the Scala project. Or rather have the project continue to be run successfully without our presence. This was a huge task but I was up for it!” So the phase-out was not pre-planned and agreed at the outset, but rather assigned to her as a “huge task” at the end.
  • “So, back to the Social Technology Bureau, I brought up the idea of having the Scala Project a permanent part of someone’s work. “We’re too busy”, was the response.” And rightly so. Government departments are established to implement the will of the Government, not to adopt projects initiated and established by overseas agencies. Again, that she has to propose that this project be “a permanent part of someone’s work” screams “no ownership” and “donor-driven, not partner-driven.”

Diagnostic 2: This project has sustainability problems from the get-go, and Sarah’s “failure” has nothing to do with her. It was a design failure.

Here are her “lessons learned”:

  • “Add on solutions don’t work”. Here Sarah suggests that bringing an outsider in to support the government department doesn’t work, because outsiders don’t work. This is incorrect. Government departments in all kinds of countries, and in many development projects, bring in consultants and contractors to do work where the hands of the civil servants are too full, or the work needs special expertise. In fact, it’s essential to do so, because you can’t load up overloaded civil servants indefinitely.
  • “Money and capacity building don’t work.” Here, she suggests that the government accepted the outside worker only because that worker represented additional resource. Again, incorrect. Departments reject offers of outside contractors and consultants all the time, because supervising them takes effort, and they don’t agree with what those people are being hired to do. Again: the real problem is that you cannot hand a non-government project to a government department. It’s actually illegal in most cases for them to accept! She was given an impossible task.
  • “I didn’t really trust that my partner would figure out a way to continue the project in my absence so I forced a solution on them.” Probably accurate, but again this looks like a consequence of the way the project was established. It was not her error; she can’t see outside of her task (because of her situation), and so will draw incorrect conclusions.

In the end, the news was good. The Government did adopt the project, and it continues. However:

Diagnostic 3: Sarah, due to the fact that she is in the situation and not outside it, and may as a young professional not have much experience, draws two incorrect conclusions. These are not only incorrect: they are damagingly so. Anyone who holds these beliefs is going to be hindered in their future work.

What’s missing from this system?

If we go up a level, we find that these “Failure Reports” are part of EWB’s “Accountability.” This accountability consists of three things:

  • Testimonials from partners: “It has been a wonderful thing having Heather here…”
  • Testimonials (but not copies of the reports) from independent evaluators
  • Failure Reports written by frontline workers.

In my assessment, this does not constitute “accountability” for an aid organisation. I’m not saying that EWB does not do good work: just that you can’t tell from this.

Imagine an airline in which, when a pilot came down too hard and collapsed the nose wheel, the airline’s response was to have the pilot consider what happened, and write up his or her “lessons learned.”

Imagine further that:

  • none of the pilots has more than 10 years’ experience
  • none of their reports is independently vetted or assessed by more senior pilots
  • these reports, together with flyer testimonials and some untabled reports by outside evaluators, constitute the whole of the airline’s safety system.

Would you fly?

9000 filaments

J suggests that the poor are important, that development assistance is important, and that if we in development are doing something wrong, we’d better figure that out and change it before we do it 9000 times.

I can’t agree more.

However, my approach would be:

  1. Don’t do anything without doing a literature search. Be like Newton, and see far because you stand on the shoulders of giants (i.e. the work of the many that have come before you)
  2. Don’t reinvent the wheel (all over again) by trying to figure out a solution from scratch
  3. Don’t follow the most recent fad.

How do you spot a fad?

  • It’s branded with a trendy label.
  • It claims to do something really important: but on 15 minutes’ reflection you can see that if what it claims to do weren’t already being done, all kinds of things which do happen wouldn’t be happening. 2
  • It was invented recently by a couple of people who aren’t themselves standing on the shoulders of giants.

Nostrums aside:

On good (i.e. most) projects people work in teams. In good projects, people know what their objectives are. They are also well aware if they are, or are not, achieving them. They discuss these problems. They come up with solutions, and implement them. Some solutions work, and some don’t. Through this process, people learn: outsiders and insiders. In a good project, that learning sticks locally, because it is owned and operated by local institutions and people; and the knowledge spreads to other projects through the constant churn of development workers.

Could we do better to document new knowledge? Yes: but “admitting failure” is a poor model. Better, tried and tested models are available.

Could we do better to disseminate new knowledge? Again: yes. But again: don’t look for new wheels. Just adapt and improve the excellent wheels that already exist.


  1. Thank you, Erin, for your comment, which highlighted that I should be speaking about young professionals, not volunteers.
  2. For instance, the World Bank moved away from funding dams, and into funding education. Oxfam moved away from accountability to donors, to accountability to beneficiaries. AusAID moved away from inputs driven projects, to outputs driven projects, to programs, to SWAps. AusAID’s White Paper on PNG during the Strategic Review stated explicitly that they’d spent a billion dollars there, with nothing to show for it. Every time you see an agency change tack, or strategy, or even objectives: people are admitting that what they were doing before was flawed, or not good enough.
  • MJ

    Fascinating! EWB exposed as simple volunteer mongers. Maybe better volunteer mongers than the competition, but as you imply, the biggest failure might be in sending volunteers to do something not suited to volunteerism.

    Two points on Google rankings: (a) blogging engines appear to have v efficient mechanisms to get new posts incorporated rapidly (thereby improving the immediacy of the web, perhaps), and (b) Google adjusts the results according to your location and past surfing history. J’s post came out 8th when I just did a search from a mostly clean browser.

    Overall, I would say I’m more with J on this one, tho. I am a bit cynical, and as many posters pointed out: admitting failure is little more than doing the lesson learning that you ought to be doing anyway, but doing it in public. But, I do really like the humility that is central to admitting failure. The development/aid industry needs that in lorry loads.

    • Erin Antcliffe

      To just pick up on that point about volunteers, it calls into question: what constitutes a volunteer? EWB “volunteers” are in fact called “African Programs Staff” by the organization, and treated as such. They are paid stipends for their living costs and all overhead costs are paid for (transportation, insurance, management support, etc.). It’s true that many (though not all) of these staff are young and lack experience, but what’s the difference between a “volunteer staff member” and a young grad working for a development agency? Don’t they have the same skillsets? Job descriptions? Where can we really draw the line between volunteer and professional? Just curious…
      (Full disclosure: I’m an EWB staffer in Ghana)

      • Hi Erin. Full disclosure: I was a volunteer for seven years after leaving university. I would not be who I am now (good or bad!) but for that experience. But you raise a very good point, which has led me to edit my article to change “volunteers” to “younger professionals”.

        I would say that volunteering is a notch above paid grads, in that volunteers experience front line work with communities, whereas most of the paid grads I know are stuck doing program management in head offices. I don’t think anyone should be building a career in development without knowing what the coalface is like, and I think volunteering is one of the most reliable and accessible ways to get that experience.

        In terms of professionalism: I agree, no difference.

        So: kudos to EWB for enabling that volunteer experience for engineering professionals. But low marks for (a) getting the frontline workers to “admit failures” for problems that should be sheeted home to the senior managers, and (b) touting “admitting failure” as the key to quality improvement, when their whole profession is at the forefront of quality improvement, and has developed excellent tools which, had they been brought to bear, would have caught these “failures” before they ever happened.

      • Hi David,

        Thanks for engaging in the discussion about “volunteer” vs. “young professional”. I agree with your generalization about the roles they get slotted into, and the value of having field experience.

        I feel as though maybe I’m diving too deep into the “way EWB works” here, but thought I’d point out one thing: there really are no senior managers in a home office making decisions about EWB’s projects and programming. It’s quite a young, un-bureaucratic organization, so there is really no one “above” these young professionals to blame. Program decisions are taken by Team Leaders in the field (a role that actually didn’t exist when Sarah’s project was undertaken over 5 years ago – we’ve since created it). Teams range from 5 to 15 people, all volunteers/young professionals, that are in charge of their own programs. Back in Canada at our “home office”, we have two Co-Directors of African Programs (one of whom happens to be Sarah – she’s still with EWB) and a CEO above that. But while they’re responsible for managing our African teams, they don’t really make decisions on strategy – that is left up to the teams on the ground. So yes, we’re inexperienced; yes, we’re giving an immense amount of responsibility and autonomy to young people; but we feel the way development normally works is quite broken and we’re using these cost-effective resources (volunteers) to search for new models for development impact. It’s not a perfect system, I’m definitely the first to admit that. But the fact is that the people on the ground DO have the best perspective to understand when things are going wrong (or right) and to identify and change these things. That’s what our Failure Report is all about.

        Of course I’m not going to argue that Admitting Failure is enough – far from it. We need to not only admit mistakes, but learn from them, then incorporate those lessons into future iterations of our work – only then can we actually say we’ve taken a step forward from failure. But I believe that this “publicity stunt” is the first step toward this process, which is much more nuanced than can be articulated in a sound byte.

        Alright I’ve written too much… but thanks for engaging in the debate. I’m definitely on the fence about a lot of this, and writing about it helps explore the different facets of the idea. I appreciate it!


        p.s. might need to clarify that EWB doesn’t do any engineering work… confusing, yes, but important to know.
        p.p.s. yes our website needs A LOT of work!! You’re not the first to notice…
        p.p.p.s. you misquoted Sarah as saying “Money and capacity building don’t work” when the report really says “Money and capacity building don’t MIX” – quite different, might want to fix that!
        p.p.p.p.s. Sorry again for the monster comment!

      • MJ

        Hi Erin.

        Thx for the correction. I agree whether or not EWB staff are paid is not terribly relevant. What is I think relevant is the length of time they stay. How well they really get to understand the country where they’re working, and the problems they are supposed to be solving.

        1-2 even 3 year rotations, imho, often do not work well and are not a sustainable solution to deep-seated development challenges. I have my doubts about any development organisation that constructs itself around that kind of a business model.

        In particular:
        – “so yes, we’re inexperienced;” agree!
        – “yes, we’re giving an immense amount of responsibility and autonomy to young people;” agree!
        – “but we feel the way development normally works is quite broken” agree!
        – “and we’re using these cost-effective resources (volunteers) to search for new models for development impact” eh? What makes you think these volunteers are cost-effective (cheap yes, effective?) and an appropriate mechanism to search for new models for development impact?

      • Super-interesting, given that it’s very rare for someone in a dev’t organization to stay in a job placement in a developing country for longer than that! I agree that it’s not ideal… but what alternative model do you propose? We’ve been struggling with this turnover problem since EWB started and have racked our brains for years trying to figure out what gets people to stay. If you have some ideas, please share!

        On cost-effectiveness, I actually do believe our staff are quite effective at this “searching” role. It’s what we’ve come to specialize in, in a way – being able to push the system to change (gov’t, NGOs, projects, private sector). I think this partly comes from having a very rigorous hiring process and hiring on attitudes rather than knowledge or experience. So you have people asking the right questions and looking down new paths, rather than people who are all trained to think or design a project in a certain way. Again, it’s not perfect, and we certainly find a lot of challenges, but I do think that this IS a good role for our EWB staff to play. It’s somehow a bridge between “figuring out what works” and “implementing what works” – at least, that’s how I’m seeing it now. And I think this is a role that is sorely lacking in the development industry. We still need to get better at it, understand the role more completely, and get in with the right networks… but I think this can happen.

        Would love to hear your thoughts!

      • Thanks to everyone here for the good debate. I think there is a key point in what Erin is saying here that needs to be compared to David’s overall point. My understanding of EWB is roughly as Erin and MJ describe it: a relatively cheap (compared to other expat consultants or researchers) way of searching for and promoting slightly different ideas in the development sector. But EWB itself has little decision-making power in terms of implementing anything. If admitting failure is going to be really relevant, it would seem to be mostly at higher levels (govt, donors or big NGOs). This is where we can compare David’s view (those bigger actors already have ways of avoiding or learning from failure) and Erin’s (that they don’t, and small organisations like EWB need to start initiatives like this that go up the chain).

        Disclosure: I used to work for EWB-UK, but the organisations are not formally linked.

      • MJ

        “But EWB itself has little decision-making power in terms of implementing anything.” I would describe this as a major constraint. Yes there is a role for provision of low-cost junior technical expertise, but you will always be limited to what you can achieve with it. For that one-in-a-million brilliant host organisation that is able to leverage the skills of the EWB ‘volunteer’ to achieve great things, they would probably have found an alternative source of support. For the rest, however good the EWB ‘volunteer’ is, they will struggle to transform mediocrity. They will struggle even more if they are just part of a conveyor belt of volunteers.

      • Phil Schleihauf

        I don’t get it — is this comment a speculation based on how EWB is described in this comment thread, or an observation based on the organization’s actual work?

        I’m especially interested if it is the latter, and would love to hear about it. Obviously I may be getting a biased view of EWB’s operations, since I volunteer for them at the university chapter level.

      • MJ

        I have no 1st hand experience of EWB’s work. But it is a general observation that if you are putting temporary staff into another organisation, what they can achieve must inevitably be limited by that organisation and its practices. From my experience those limits are likely to be more significant in precisely those organisations which are most likely to want or appear suitable for an EWB volunteer, while the best run organisations will mostly have alternative solutions. Unfortunately, without completely changing their business model, I don’t see much that EWB can do about this.

      • Hi Erin. I’ve edited the post, replacing my use of the word “volunteer” with the more accurate “young professional.” (They might even better be described as professionals of any age, new to development… but that’s too wordy.) I also cited you in Note 2 for pointing this out.

  • I think condemning something as a pointless fad because it is a relatively new idea is quite weak. Obviously, this isn’t your only point but it does seem like a loose crutch of your argument.

    Otherwise, is it fair to say that your basic assumption is that aid agencies works pretty well? Again, pretty weak.

    • Hi Rowan. Thanks for your comment.

      I’m not condemning it as a fad because it’s relatively new. I’m condemning it because (a) about 100 million person-years have been put into ways of eliminating errors from products and services, which it ignores, and (b) in the knowledge base developed by those 100 million person-years of proven success, “admitting failure” is not only absent: it has been specifically rejected as misguided.

      I make no assumption that aid agencies work pretty well. All I am saying is that they already admit failure in very public ways, at all levels of scale (from project to senior management), and the fact that they don’t take out full-page ads in the NYT to proclaim it is neither here nor there. No-one else does.

      The essence of my critique is this: “Admitting failure” is a publicity stunt that (a) doesn’t acknowledge the reality on the ground, (b) does not apply proven professional knowledge to the problem, and (c) has its frontline workers “admitting failure” for problems that are properly the responsibility of senior management, who are in turn silent on their own failures to properly design and manage their projects, their quality systems, and the people who depend upon them.

      • “On good (i.e. most) projects people work in teams. In good projects,
        people know what their objectives are. They are also well aware if they
        are, or are not achieving them. They discuss these problems. They come
        up with solutions, and implement them.”

        This seems to me like a pretty positive reading of aid –> assumption isn’t stated as such but it seems to me to be fairly obviously implicit. Not necessarily a bad thing, of course, but I think you’re placing too much of a positive spin on the failure admitting mechanism in place.

        I agree ‘admitting failure’ could well be a publicity stunt, catch phrase of the moment. Certainly, not applying it to management is absurd!

        However, giving a voice to/asking for opinions from low level workers (full disclosure: such as myself) could be very useful — people who are too entrenched in the global development world can easily get sucked into certain ways of working and thinking. Using ‘admitting failure’ as a framework for refreshing philosophy or organisational culture, I think, could be extremely useful.

      • Hi Rowan. What I’m saying is that most projects have in place formal and informal quality and learning systems. I guess it’s a “positive” view in the sense that I think you have a lot of smart people trying very hard in difficult circumstances, and trying to get things done. They know what a bad result is, try to avoid it, and try to do better in future.

        I think that the “failure admitting” process that’s in place can be improved. But it IS IN PLACE, and the idea of “admitting failure” is not new, and the idea of annual “Failure Reports” is not an advance on what we have already. What we have is, in the bilateral and multilateral world:

        • fairly good project planning that ensure that there are goals, that key stakeholders agree on them, that people are on the same page, that adequate resources are assigned, and so forth. My big complaint here is that there seems to be little lateral transfer from other domains in which project management has been in place for longer.

        • a good idea of what “quality at entry” looks like

        • monitoring and evaluation systems: my main complaint here would be that they are donor-focussed, rather than management and beneficiary focussed.

        • project close-out evaluation and reporting

        • a culture in which people are committed to good outcomes. There are a few bad eggs, incompetents, and mercenaries: as there are in any industry. But the start to quality is good people, and I think we’re pretty fortunate in that regard.

        Re young professionals: of course people of all ages have good ideas, and everybody should have an equal opportunity to contribute. A team or organisation dominated by the ideas of old fogeys would be awful.

        What I’m talking about is a particular technical thing called “experience”, which leads to “expertise”, which can only come from participating in many, many projects. The reason is this: to adequately assess a “failure”, you need to know whether what you’re seeing is commonplace, or should never have happened at all. That’s how you weed out the interesting “failures” from those which are just errors due to inexperience. And for that, you need access to a mental stock of project experience… which you accumulate over time.


  • Pingback: When admitting failure isn’t enough « Bottom Up Thinking

  • These VERY INTERESTING comments are pointing to issues of volunteer work, young professionals, and turnover at development organizations… But I would like to return to the core of the debate. Erin maintains (as I did in my post) that admitting failure is not enough, and that it is just a kind of first step in learning. And, if I don’t get David wrong, he is stating that it is not necessary at all… Please correct me if there are more nuanced ideas in between, but I think we have here the central issue under discussion: is it worth admitting failure in aid/development organizations?

    • MJ

      I think there are 3 steps:
      1. Admit Failure
      2. Diagnose Cause of Failure (requires insight!)
      3. Correct Cause of Failure
      Part of David’s argument appears to be that having a great fad around (1) does not, on its own, get you very far!

      • Good to have the steps clear… What about thinking of Admit Failure more as a trend than a fad? Any chance, any worth? 

        Wikipedia says the following: “Though the term trend may be used interchangeably with fad, a fad is generally considered a fleeting behavior, whereas a trend is considered to be a behavior that evolves into a relatively permanent change.”

      • I don’t think anyone would argue with that!

  • Pingback: Rome wasn’t built in three years « Bottom Up Thinking

  • Mina Shahid

    I would argue that the core purpose of admitting failure is to encourage development interventions to move away from the implementation of log-frames to more continuous learning and adaptation of approaches, to actually achieve positive impact for beneficiaries. To state that “admitting failure” in the development industry is a fad and a bad idea assumes that the development industry already acknowledges and learns from failure, which is actually far from the “reality on the ground.” 

    The reason that website was created was because of countless frustrating experiences by EWB staff embedded within development projects that refused to learn from their mistakes, let alone admit them. Furthermore, it’s important when talking about failure to understand what success in the development industry actually means. For 99% of implementers and projects it’s usually associated with spending the allotted budget on the planned activities. It’s important to admit failure when neither of these actions actually reduces poverty. This is what EWB cares about the most. 

    “They already admit failure in very public ways, at all levels of scale (from project to senior management) ” is an interesting statement because I don’t think it’s actually true due to the power dynamics that most donors have over implementers. The majority of implementers I’ve observed would never admit their failures publicly because they are afraid of going out of business once their donor finds out. 

    Publicizing your organization’s intent to learn and change the way it works in development by admitting its failures is a new way to be held accountable by others in the industry, and ultimately by the people you are trying to help, moving the industry towards more substantial implementation of development projects. Furthermore, it moves the industry away from rigid project designs to more flexible conditions for implementation that build in learning.

    I think the actual reality on the ground is that GOOD development organizations and projects that actually achieve results on poverty reduction are continuously admitting their failures, and redesigning their strategies. And they’re doing this not because somebody told them to, but because they’ve built in a culture of continuous monitoring and improvement. 

    (I am also a staff member with EWB in Ghana, although I don’t think it matters much)

    • Hi Mina. “For 99% of implementers and projects it’s usually associated with spending the allotted budget on the planned activities.” This is based on EWB volunteer experience. But EWB volunteer experience is limited to the kinds of organisations that take EWB volunteers. There’s a bias in your perception. If an organisation is staffed by professionals, it’s unlikely to be asking for volunteers.

      I agree that logframes are simplistic tools that we need to move beyond. I think the most direct way to move beyond them is to develop better tools. Some people are doing that.

      “The majority of implementers I’ve observed would never admit their failures publicly because they are afraid of going out of business once their donor finds out.” Most of the work I do is for donors. We keep tight tabs on what’s happening with their money, and believe me: we know all about the failures.

      Here’s the kind of stuff that I’m talking about:

      I happen to know about this report, because I know some of the people that wrote it. It’s written in a neutral, diplomatic tone which is appropriate because the failures here include the failures of sovereign governments.

      Let me interpret this:

      “Access to primary education has improved” means “we’ve built a lot of schools”

      “Learning has lagged” means “but they still don’t have good teachers and learning materials, so kids still aren’t learning”

      “Countries expanding too quickly can put learning at risk” means “we have been focussed too much on building lots of schools, and not enough on the quality of education going on inside those new schools”.

      “Education sector management matters” means “the whole expansion has been inadequately managed.”

      That’s 1, 2, 3, 4 failures, there in black and white, in public. 

      Again: failures are known, and acted on. You can verify that by reading the evaluations and the completion reports, the policy notes. They are full of criticism, and “lessons learned” (usually called “recommendations”.) Future designs (in the WB, anyway) then have to refer specifically to those lessons learned, and explain how they are addressed in the design.

  • I’m Head of Failure at Engineers Without Borders Canada (EWB) and happen to be the person behind AdmittingFailure.com. David, you make some great points with many interesting things to consider and comment on. (On that note, hats off to all the people contributing to this great dialogue.) We can learn from you and others who are critical of our work just as we can learn from our own mistakes.

    Yes, we should all be conscious of fads and superficiality. In fact, Admitting Failure itself is a work in progress, evolving to improve in quality, much like you suggest good quality processes should.

    Failure isn’t a fad. It’s an experiment. We identified a need for humility, honesty, clarity and communication in a sector where mistakes are repeated, and things we know don’t work are being replicated. There is an obvious need for more knowledge sharing about what works and what doesn’t. AdmittingFailure.com is EWB’s attempt to address that need. It might work, it might not work; but it will definitely adapt and evolve as we experiment and learn.

    I really appreciate your comments about getting information from project ‘recipients’ as to whether Admitting Failure will help them. We launched AdmittingFailure.com with Charity Ngoma, someone we worked with in Zambia. She gave an inspirational talk at the Admitting Failure launch about how the freedom to admit failure helped her and her colleagues improve the quality of their agricultural value chains project. Nevertheless, we can always do better and I would love to hear ideas on how to gather this information from recipients.

    From my small circle of influence, the opportunity I recently saw to move on this was to encourage EWB’s Africa Program staff to discuss their failures with their partners in Africa, in order to open up this discussion at the field level, and even write up their failures in coordination with partners where appropriate. The results will be apparent in our 2011 Failure Report, coming out January 2012 – stay tuned!

    And as a small point of clarification: you said we don’t provide copies of the independent evaluation reports; EWB has five of these reports (downloadable as PDFs) on our transparency and accountability page.

    What follows is a discussion of the points where our perspectives differ.

    I’d like to address your assessment of Sarah Grant’s failure. I’m lucky enough to sit across from Sarah, who just returned last year from working for five years in the field to take on the role of EWB’s Co-Director of African Programs. You mention the Scala project was a failure at the design level. However, Sarah states in her article that the project was well designed to be sustained at a local level; scaling to a national level was not considered at the outset.

    Sarah’s failure is one I see again and again – an archetypal failure at the “scale-up / transfer-to-local-ownership” stage in development projects. She hurried to push the project to scale and failed to address the underlying systemic issue: that national government has difficulty adopting projects after the pilot phase. This kept the program from expanding nationally and sustainably. 

    Failures such as these have helped EWB learn and become the organisation it is now, with a strong emphasis on understanding field realities and bringing this understanding to policy and donor levels to address systemic issues.

    With regard to more fundamental issues:

    1. The notion that someone is unqualified because they are young or a volunteer is simply not true. Nor is it true that someone cannot effectively solve development problems unless they are a “development professional”. There are many historical examples of outsiders coming up with ideas and solutions that are transformative. To name a few, micro-finance and vaccine distribution using market mechanisms came from relative outsiders. The inexperienced come in dreaming. Not only can they offer big ideas from a new perspective, but they are unburdened by what has always been done. An interesting post was recently written on the Develop Economies blog. As proof of value, many of EWB’s local partners in Africa pay the partial or full cost of having an EWBer contribute to their work.
    2. I would also like to point out the importance of stories like those in the Failure Report as culture-setters. You make the argument that the system has failures. By creating Failure Reports we are seeking to address system-level problems by encouraging humility, transparency and learning as fundamental components of EWB’s culture and values.

    3. The development industry doesn’t learn quickly enough. Need proof? The sheer quantity of inadequate projects and implementation systems that persist! The sector is horrible at learning and creating systems for learning, especially at a program level.

    Evaluations and reports may suggest good approaches, but practices don’t always translate to the field, nor do they change much for those on the ground. People on the ground often know things aren’t working, but this information isn’t used for decisions, or at least not quickly enough.

    I’ll use the tens of thousands of non-functioning water points/wells as an example. At the field level we know that maintenance is needed, not necessarily more wells. But instead of investing in a locally-sustainable water service delivery model, NGOs continue to simply build more wells.

    4. We are making a bet that explicitly identifying failure is one way to “shock the system” and give learning approaches a little sexiness and visibility, particularly for donors. I would love for any donors out there to provide their thoughts on this. Admitting Failure and Failure Reports aren’t the whole solution, but we think they could be part of the solution.

    5. Your comparisons to flying, electricity, etc. aren’t relevant as analogies for learning systems in development. Flying is a well-known technical process that has an established algorithm for success. It’s complicated, but there is virtually no complexity. All the mystery has been taken out of flying through much iteration over an extended period of time. This makes any new learning linear and well adapted to a set of rules.

    In stark contrast, development is very complex – “wicked” according to ODI. This is important because treating development as a system with an algorithm for success is precisely one of the problems we’re trying to solve. Admitting Failure encourages iterating on new ideas to improve quality much more rapidly, and making this a cultural part of the system (rather than scale-up models that seek to replicate over and over again). This requires a high degree of comfort with failure, which we simply don’t see.

    Finally, I agree that Kaizen and micro-iteration are great, but poor feedback loops and accountability to donors – and not customers – mean we simply haven’t seen these quality processes take hold within development organizations or projects at the ground level. Instead, presumably as a result of this accountability paradox, we see a lot of counter-examples where quality problems (failures) are ignored or hidden. Erin Antcliffe’s A Tale of Two Projects post on Oct 20th provides a perfect example of this.

    The underlying hypothesis of Admitting Failure is that development work will be more effective if actors admit failure into the sector. This is much like the culture of start-ups that tolerates inherent risk in the name of innovation and creating something new and beneficial that can go to scale.

    As I commented on another post in this forum, we currently do not have the perfect workable solution for alleviating poverty; therefore, there is a need for trying new approaches and constantly learning from what does not work and improving on what does.

    There will always be a place for celebrating and, more importantly, scaling successes. Admitting Failure is simply about allowing space for the innovators and entrepreneurs working in the sector to try new things, humbly and quickly recognize failures, and always learn and improve. Stay humble, because we can do better.


    Or perhaps I should have simply used your own words (from your post on “Smart Aid”) to defend Admitting Failure:

    “You know that statement of Newton: ‘If I have seen a little further it is by standing on the shoulders of giants.’ I think that we need to thank even the people who have made huge mistakes in the past, because it’s only because of them that we aren’t making the same ones.”

    • Hi Ashley. Thanks for this long and considered response.

      First, I’m kind of concerned that a specific response to J’s suggestion of the Newest Thing sweeping the development landscape has somewhat morphed into a critique of EWB. I don’t think that’s appropriate. I’ve got a pretty good idea of what EWB does and how it goes about its business, and thought I might point out a few places where I think you could make improvements. Overall, I think that operations at the EWB scale (a) provide a very useful service within the overall aid ecology, and (b) can’t afford, nor should be expected to provide, complicated or robust error-catching systems. 

      BUT: I still have serious concerns about Admitting Failure (AF from here on in), and I think it could actually do some harm. More on that later. But for EWB’s core business: full support.

      This is where I think you’re way wrong:

      “The development community is failing… to learn from failure. Instead of recognizing these experiences as learning opportunities, we hide them away out of fear and embarrassment.”

      You say that this is an experiment, but you seem to have gone from experiment to PR campaign without all the necessary bits in between. I’ll describe what I think those are, in the next post, in which I hope to focus on positive models and solutions for development QA, rather than critique. But, here’s an outline:

      • Fact-checking: “The development community is failing… to learn from failure.” I can disprove that in about two hours of googling. I can show you reports of public statements of massive problems.

      • Knowing the history of a problem: Even your statement about the water wells… the “village well” problem was the paradigmatic example of failure back in the 1970s when I was at Berkeley. There’s a very good blog in which a field worker explores many of the problems with the water supply installation programs in the country in which he’s operating. The point is: everybody knows the problems with water supply. People have tried all kinds of hypotheses and solutions: more appropriate technology, training, ownership, village consultation, better designs, better field engineering, maintenance programs, better donor and program coordination… and yet the problems persist. The problem is NOT that the failures aren’t acknowledged. The problem is NOT that people haven’t studied these failures and tried to overcome them. The problem is that it’s proved to be very difficult. It is, actually, a problem that EWB would be well placed to have its own stab at :-).

      • Experimental design: if this is an experiment, did you do a baseline? Did you do a design? Above all: have you set clear criteria under which you will declare “admitting failure” a failure… or is it more like a campaign? Because it looks to me more like a campaign than an experiment.

      • Humility: I like your cultural goals, and expect you to live by them. Unfortunately, this statement: “The development community is failing… to learn from failure” condemns tens if not hundreds of thousands of workers and organisations whom you have not checked out, studied, consulted or know anything about… and sets your gang up as offering a solution to this purported problem. This seems an eentsy bit non-humble, to me.

      • “The sheer quantity of inadequate projects and implementation systems that persist! The sector is horrible at learning and creating systems for learning, especially at a program level.” An opinion. Your data, please? Where did you get this picture? Or is it just an opinion? I can come up with data to the contrary, but before I do that, I want to see the basis for these statements. If you were ranting on your blog, that would be one thing: but you’re investing time and money intended for the poor in this venture, so the standard of proof that this is based on fact and workable is higher, and the onus is on you.

      In a way, I agree that airplanes are a poor metaphor… mainly because “failure” there is so obvious. But there are still lessons to be learned from those silly programs. Perhaps a topic for a separate post.

      Thanks again.

  • Hi All

    First, thanks for having this terrific conversation on my blog. This is how I envision the Internet at its best: facilitating this kind of public dialogue between people who don’t live next door to each other, and in most cases have never met. And anyone in the entire world can read, and post. Amazing. 

    Each of these posts raises some points and questions for me. Just now, though, I have to head to the airport, but I will be back online tomorrow.

    Beyond the specific points, I’m reminded of several great books on failure, worth knowing about. They are: 

    Henry Petroski’s “Success through Failure: The Paradox of Design” and “To Engineer Is Human: The Role of Failure in Successful Design”. Searching on Amazon.com reveals a wealth of other works, including some on the psychology of failure.

    Edward Tenner’s “Why Things Bite Back: Technology and the Revenge of Unintended Consequences”.

    I’ll be back soon.

  • Pingback: The #failure of development debate « KM on a dollar a day

  • Pingback: Development Digest – 28/10/11 « What am I doing here?

  • Don

    Great post with some interesting insights.
    I’m curious if you can take your points to their natural conclusion and provide a more convincing argument. If QA, Six Sigma, etc. are the solution you make them out to be, can you provide examples of them working in the context of development work? By stating that an idea could work without providing evidence for it, aren’t you just providing ‘another potential solution’ instead of ‘the solution’ (i.e. wouldn’t Admitting Failure just be added to the suite of potential options, including QA, Six Sigma, etc.)?

    I’m stuck at this point in your argument because although I understand the benefits of quality assurance practices in the private sector, context is everything in development work, and great ideas have failed and failed again specifically when the context of a situation wasn’t taken into account.

    Although I understand the use of Google page hits as proxies in your argument, I’d caution against it. Popularity (or pervasiveness) does not equate to correctness – especially contextual correctness. Go ahead and Google “Tea Party” to see how dangerous this line of thinking can actually be.

    Thanks again for the thoughts and discussion. Definitely appreciate the critique, as groupthink is something all development workers need to push back against. But the critiquing is the easy part. After all, tearing things down is fun. I would hope we’re all here for the other half of the equation, being interested in building things up. So how do we do that?

  • J.

    Hey David,

    Really good post. Although I didn’t say so directly in my post, I agree that admitting failure is essentially an industry fad. If it’s even that. I suspect, but obviously can’t prove, that it will remain largely within the domain of aid marketing, comms, and perhaps the subject of breakout sessions at the odd high-level think-tankish HRI-style conferences in places like Busan or Rome. All to say that I think we probably agree more than we disagree when you get right down to it.

    I would like to clarify that despite the (quite inadvertent, but totally badass) Google rank for my post entitled “Fail”, I do not see myself and do not want to be known as the person who invented the fad of “admitting failure.” This said on the outside chance you may have been referring to me with the statement: “I think I can safely say that the whole fad is the recent invention of one person.”

    If you Google “Good Aid”, you’ll see that I’m straight up the number one entry…, yet, dammit, from where I sit “good aid” has yet to achieve fad status. But then, according to your logic, maybe this is a good thing!

    Cheers mate!

    • Hi J. Why are you top of the list for “good aid”? Well, if I google “good architecture”, the number one hit is a book from the “For Dummies” series. I think:

      • your blog gets a lot of positive google votes from novices who are trying to get a handle on aid… not all your readers, but you have 2000 twitter followers, so: of those, quite a few, because

      • in professional aid, or architecture, or health, or law dialogue, nobody talks about “good aid” or “good architecture”. These are too broad to be useful analytic categories for improving aid or architecture. A technical term for people working on improvement is “aid accountability.” There, the top hit is not a blogger, and not a popular publisher, but the European Network on Debt and Development. 

      Cheers too.

  • Pingback: EWB Newsletter [Halloween Edition] » EWB McGill

  • Majd Al-Shihabi

    David, a few thoughts: 
    You put too much emphasis on experience. Experience is important, but it also leads to rigidity and thus a lack of resilience.

    The way “experience” is being talked about here is not only hegemonic, but also offensive. It devalues the ideas and experiences of young people and assumes that in complex situations (international development in this case), experience is all that matters. 

    Let’s see where experienced people have been able to solve complex problems in the past: Our broken education system? Our financial systems that keep collapsing every 11 years? Medical research in understanding how our brain works? The Palestine question?

    I’m not claiming that I have the answers, but what I am claiming is that anyone’s 30 years of experience in one field are definitely not enough. 

    I’m sorry for using such strong language, but this is an issue that I experience too often, and as a 23 year old with many new ideas, I’m outraged. 

    (Full disclosure: I am involved in EWB on the student chapter level, but my response is not written from that point of view)

    • Hi Majd. Don’t worry, I get outraged all the time. And I find that the more outraged I am when I write a blog, the more response I get. Go figure.

      In the first place, I agree with your list of problems, and that experienced people have not been able to solve them (except in the case of how the brain works, where they seem to be making lots of progress.) But to make your case, you would have to show that inexperienced people have made lots of progress on these, but they haven’t either. 

      Again: I am not saying that people of any age should not be listened to, or can’t contribute new or fundamentally important ideas. Personally, I’d be happy to see the world run by people under 30, and think on the whole they’d do a better job.

      I am saying that for the particular job of assessing the “lessons learned” from a particular failure, you shouldn’t leave an inexperienced person to do it by themselves, because they don’t have the system-wide and historical understanding necessary to properly assess the significance of the error. That’s all. 

  • Erin Antcliffe

    Hi David,
    Thanks for taking the time to respond to the comments. I think it’s great blogging etiquette and I’m happy you’ve made the effort. I’ve noticed the more you write, the more I find myself disagreeing with you. However, I think maybe blog comments are not the best way to continue this discussion, so I won’t write too much. I just wanted to make 2 quick points:

    1) the “very good blog http://waterwellness.ca/” that you pointed out is in fact written by an EWB “volunteer” – and I agree, his blog is excellent!

    2) It strikes me that you don’t actually know anything about EWB Canada’s work. Perhaps you are confusing us with another version of the organization, like EWB USA or EWB Australia? These organizations are in no way affiliated and in fact quite distinct from each other, both in management and in strategic approach. It would do you well to learn more about EWB Canada’s work before making proclamations such as “EWB volunteer experience is limited to the kinds of organisations that take EWB volunteers…. If an organisation is staffed by professionals, it’s unlikely to be asking for volunteers.” Take a look at some of our partners to find out the types of organizations that find value in our “volunteers”, and in fact value them enough to pay us for their services.
    I’ll end there, but I’ve enjoyed the discussion and look forward to many more.


  • Hi David, Just enjoyed your post.  You’re dead-on about development fads.  I’m not so negative about the failure one, for the following reason: there have been different flavors of “learning” fads rippling through development that often had little impact, since they lacked the ingredients of measurement and admitting failure, which both seem to be part (not all!) of getting toward greater accountability (to beneficiaries, taxpayers and donors) and the cycles of correction and learning that you point out above.  Best wishes, Don

    • Hi Don. Thanks. The post was really against faddism, and not against EWB or “admitting failure”. They just became the case study, because of the way the issue happened to be raised. David
