
“Big data” seems to get all of the news and enthusiasm these days, but there is a quiet revolution in small data that is sweeping the world, sector by sector, organization by organization, department by department. Fueling this revolution everywhere is a new-found value in humility, iteration, and learning.

Smart Policy Design

Asim Khwaja, a colleague and former dissertation advisor of mine at the Harvard Kennedy School, recently gave an inspiring talk on the subject. In that talk, he made the strongest case I’d heard yet for humility in development policy and programming, and for building iteration and learning into the process – because you can’t realistically expect Version 1 of policies and programs to perfectly achieve their aims. I work with Asim on a project to promote the use of evidence in policymaking in India and Pakistan, and I’d designed a curriculum that focused on, for example, the use of evidence as part of a systematic decision-making process. But Asim was talking about something much more, something much broader: a culture of learning and an approach to policy and programming that builds trial-and-error into the process.

Asim, with Rohini Pande and others at Harvard’s Evidence for Policy Design (EPoD), promotes iteration and learning as part of EPoD’s “smart policy design” process. Economic tools and theory play a role, but more fundamental to smart policy design are iteration and learning. In his talk, Asim motivated this approach with an analogy to modern cars that collect data helpful to mechanics and children who learn by repeatedly trying things and seeing what happens. The basic idea is that the world is complicated enough (people are complicated enough!) that we can’t really expect to design the perfect policies or programs in one go. Rather, we need to make our best effort, see what happens, and then refine. And we need to intentionally build in opportunities to learn as we go, so that the “see what happens” part happens and the “refine” part becomes feasible.

This is obviously a key instinct behind the modern push for monitoring and evaluation (M&E). But, unfortunately, many of us involved in international development believe that this instinct is not deeply enough embedded in the culture of policy and programming. All too often, M&E is an afterthought or a donor requirement to be endured. All too often, we just go through the motions, without any true, higher-level commitment to humility, iteration, and learning. But slowly things are changing. The Asims of the world are gaining traction. (To see a similar talk Asim gave on this subject a year ago, see the YouTube recording.)

You can find these same motivations for humility, iteration, and learning in the increasing push for human-centered design in development programming. But it goes far beyond the international development sector: human-centered design has been catching on pretty much everywhere that solutions or processes involve humans.

The same motivations can also be found behind the “agile” software development practices that have swept the software world in recent years. Rather than assume that you can design a product and develop it in one go, agile methodology builds in the humility to iterate and learn throughout the process.

And just like human-centered design, agile methodology has been spreading rapidly beyond software companies and IT departments. Marketing departments, for example, are becoming agile. A great friend, long-time collaborator, and Dobility supporter, Scott Brinker has blogged on this subject for years, and now he’s literally written the book on it. Humility, iteration, and learning are catching on in marketing departments around the world.

Increasingly, people want to make evidence-based decisions – they want to be data-driven. And one of the most powerful forces driving that trend is this deeper culture of humility, iteration, and learning. In my view, this is all a very good thing.

However, it puts a lot of pressure on the evidence – on the data – that people use to inform their decisions. If that evidence or data is biased or just plain wrong, what then? Have we ended up worse off rather than better? This fear is behind a lot of the resistance to these trends, behind many people’s continued reliance on gut feeling, personal experience, and blind faith.

In other words, having the right process is great, but without the right inputs that process will fail. Without good data and evidence – without effective learning – the entire approach falls apart.

This is where we, the SurveyCTO team here at Dobility, have chosen to focus our energies: on making sure that the data people collect is the kind of informative, high-quality data that will foster true learning and support this culture of humility, iteration, and learning. We help people put this culture into practice, in part by tuning out the Big Data hoopla and focusing on the small data issues that matter most. And, like the Asims of the world, we’re gaining traction.