Evaluation plays a huge role in public policy. Or does it? It certainly consumes a great deal of time and other resources, such as money and people. Evaluations examine interventions and ask what effects they will have, what effects they are having or, most commonly, whether they have done what they were meant to do, and whether the negative effects were smaller than, equal to or greater than envisaged. From big investments to programmes to grass-roots projects, everything that involves public money is being evaluated all the time. This means business for consultants, hotels to house the review groups, editors to rework the reports, travel for the delegates assessing the outputs, and so on. Within administrations there are then further groups and committees to derive “learnings” and identify action areas, new guidelines and methodologies, and so on.

But what are the substantive consequences of all this activity? Are there changes in approach as a result of the evaluation? Even if there are, are they perhaps swamped by changes driven by other considerations? Do the people preparing new interventions have time to read the evaluation reports in their field, let alone related fields, and are they even aware of them?
I’ve written a note here on what needs to change in evaluation itself if it is to be of use in the future. In brief, evaluations need to take a longer view, they need to take a wider view, and they need to be quicker. (And, as I’ve suggested above, we also need to change the policy and administrative environment in which evaluations take place if they are to have any impact.)