Reinventing the wheel (all over again)

by David Week on 11 September 2011


Because I come to development from a professional background (architecture and, through architecture, project management), I’m familiar with a pre-existing knowledge base that lies outside that industry we call “development”.

As a result, I often come across areas of development assistance which appear to me to be reinventions of the wheel—in complete apparent ignorance that the wheel has already been developed to very high standards, and needs only to be adapted to development: not built from scratch.

Here are three examples:

1. The LOGFRAME

The LOGFRAME was invented in 1969 by some consultants, in response to a request from USAID for a system for managing projects. They came up with the LOGFRAME, a.k.a. Objectives Oriented Project Planning (OOPP). In the LOGFRAME, an overall goal is broken down into subordinate objectives. From these, we break down further into activities required to reach those objectives. And to these activities we then apply resources, and verifiable indicators that measure the objectives, the activities, and the resources.

The GANTT chart, on the other hand, invented in 1915, is used all over the world to run vast projects: like winning World War I, winning World War II, and putting a human being on the moon, plus thousands of construction projects involving trillions of dollars. It does in one single graphical diagram the same job that the LOGFRAME does in 20 pages of opaque text.

Plus, it is embodied in very simple, cheap software solutions, which you can buy off the shelf.

What were they thinking?

2. Monitoring

An M&E colleague of mine (for whom I have great respect) once said to me:

a project’s monitoring systems should just be its management information systems.

At this point a light went on, and I made a connection. Did the people who invented “monitoring” ever realise that monitoring = management, and that the monitoring of complex projects has been going on ever since the beginnings of the industrial revolution?

  • If you go into the Ford plant in Geelong, they are able to tell you exactly where in the world every part in the supply chain is located, for a car that is going to roll off the line on such-and-such a date.
  • I once talked to Bluescope salespeople, who opened a laptop and showed me how, when an order is placed, they are able to tell at any time where that order is: whether it is in a warehouse, or being rolled, or on a truck—and if on a truck, where that truck is.

Sure, development involves social issues, but a company like Woolworths or Coles tracks buyer behaviour and motivation, through a combination of numeric and non-numeric tools, to an incredible degree of precision.

I once heard a talk by the general manager of Stew Leonard’s, a grocery store with the highest turnover per square metre of any retail store in the US, a store studied by HBS, a store with a turnover in the hundreds of millions per year. He described how, through a combination of walking the floor and talking to customers (qualitative measures) and computerised sales accounting (quant), they are able to tell within one week the effect of any change they make in their way of doing business. In fact, Stew Leonard’s has a corporate motto: “Try it for a week.” When someone comes up with an idea, they spend no time on guessing the impact. They simply try it, no matter how crazy it sounds: and they see the impact, in real time, in real operations.

What’s called “monitoring” in development is just crude, in comparison. When I hear the M&E academics debate “qual vs quant”, and finally arrive at the radical idea of “mixed methods”, I think: what century are they living in? Monitoring is going on continuously around them, to amazing degrees of sophistication, and rather than simply borrow and adapt, they invent some new thing called “monitoring.”

3. Evaluation

One of the worst myths in development is that there is a special field of study called “evaluation”, and that projects can be evaluated by a specialist called an “evaluator” who knows nothing about the subject matter. But the fact is that “evaluation” is what generals, scientists, professionals and managers (people accountable for getting things done) do each and every day of their lives. The idea, for instance, that an outside “evaluator” could “evaluate” Virgin Airlines’ service offering, or Apple’s product designs, or FedEx’s logistics programming, or China’s political leadership, without knowing anything in depth about airlines, product design, logistics, or Chinese politics, economy, culture and history, is somewhat strange.

How did this strange notion develop? It developed because “evaluation”, per se, is not a product of science, industry, professional practice, or even the military. It’s a product of government bureaucracy, which is still today the main, if not the only, buyer of “evaluation.” Why the government bureaucracy’s methods are so different from those of these other domains (which are arguably critical to a nation’s development) is a longer discussion.

Here endeth my rant.

  • On points 2 and 3 I quite concur. But on point 1, you’ve lost me.

    Obviously gantt charts are great tools for tracking implementation. Good aid project managers use these too. But are they a design tool? As an architect, do you sit down to design a house one day with a blank sheet of paper, and, struck with a great idea, immediately set to work drawing a gantt chart? If not, why are you comparing a logframe, a document that captures the design of an aid project, with the implementation/delivery schedule?

