September 15, 2022

Does performance monitoring lead to better performance?

The world of performance monitoring in the aid program is a Pandora’s box. For some it signals data, insights and cutting-edge management. For others, it’s an overwhelming world of jargon, process and despair. But one thing is certain – we have a forensic new Foreign Minister, a new development strategy and performance framework en route, and the question of what works isn’t disappearing.

We ask the experts whether our current performance approach is a vice or virtue.

Gordon Peake
Affiliate, Georgetown University

Let me answer with the sort of dappled yarn unlikely to be found in an official performance report. When working as a jobbing evaluation consultant, I once produced what I thought was a sterling draft report for Government. It was leavened with contextual nuance, speckled with descriptions of the mixed motivations of some counterparts and the authentic enthusiasm of others. I had lauded the dogged work of program staff producing outcomes which were real and important but had little to do with the unfathomable performance reporting framework they were yoked to.

Upon submission, I received a stern email with directions to rewrite according to the script. I didn’t feel much of the ‘kind regards’ the email was signed off with.

Official performance monitoring bespeaks the world as we’d like it to be – sober, deliberative, logical, solution-oriented – rather than the messy, complex, and contingent one that it is. It is silent on questions of politics and geopolitical competition.

The net effect is insipid and hard to read. It’s hard to be convinced that performance reports are determinative when it comes to making programmatic decisions. It can all feel a bit for show.

So, can performance monitoring lead to better performance? For sure. But only if funders overcome the urge to be precious and allow the poor souls who work in monitoring and evaluation to narrate the world as the complicated, stop-start, unpredictable place it is.

There is one person whose incisive, empathetic but forensic questioning about aid performance over the years is a model to emulate. Her name is Penny Wong. She didn’t abide one-dimensional stories when in opposition. As Foreign Minister, she is in a position to ask for more lifelike reporting about performance.

Gordon’s got the gift of narrating the world he sees with a distinct blend of warm-heartedness, cynical objectivity, and a dedication to honesty. His wicked turn of phrase, humour and openness to collaboration make him a favourite coffee buddy amongst the communities he’s worked with in Timor-Leste and PNG, as well as for Lab staff. Watch this space for more from Gordon at the Lab…

Jocelyn Condon
Director of Development Effectiveness, Australian Council for International Development

Performance monitoring can lead to better performance. But it’s erroneous to suggest the pathway is guaranteed. Improved performance in Australia’s development program depends on two things: the quality of the performance monitoring and what is done with the results. For most of the program there is no shortage of data collected. But to translate this into performance improvements, a critical bridge must be built between how success is defined, measured and navigated on the one hand, and how the development program is accountable for its performance on the other.

How the performance system is set up matters. Australia’s development program is at a crossroads following the closure of the Office of Development Effectiveness. A new Government, with a new development strategy imminent, provides the opportunity to rebuild and resource this core business.

But this isn’t just a matter of a neat one-page framework. It’s a wholesale modernisation and reintroduction of a performance culture. Sure, the framework should align to international commitments such as the Sustainable Development Goals and the Grand Bargain. But the vital mechanics of whole-of-program evaluation, pathways for the use of evidence to inform investment decision-making, and performance-based country planning shouldn’t be underestimated. It would be timely for Government to consider a dedicated functional unit with the expertise and the mandate to execute such a performance approach.

And on the accountability front? Accountability is inseparable from performance improvement. Australia is committed to the International Aid Transparency Initiative, yet there is a notable lack of publicly available information about the evaluation of Australia’s development program. Better programming on the ground will come from a more open conversation about what could be done better, and how.

Jocelyn is the leader behind the Australian Council for International Development’s development standard and work fostering a high performing NGO sector. She has a unique blend of development, financial and governance skills that she brings to bear in the not-for-profit and development sectors. At the Lab, we love Jocelyn for her combination of smarts, quick wit and practical know-how and enjoy the way she thinks about issues of overall development and performance.

Chris Roche & Allan Mua Illingworth
Professor of Development Practice, La Trobe University & Research Fellow, La Trobe University

Often not.

Why? Performance and evaluation systems tend to be short-termist. They focus on the visible, countable and predictable. But we challenge you to think of a single development pathway that can be well understood in those terms.

Performance systems often ignore the inherently political nature of development in favour of ‘best practice’ world views, ways of knowing and being. They are usually more about accountability (what happened, when?) than learning and adaptation (why has something worked or not, for whom, and what must change?).  

But none of this is to say that performance monitoring is pointless.  

So, what can be done?

First, we need to reimagine the purpose and practice of evaluation itself. A good starting point is learning from indigenous, locally-led perspectives, and approaches such as the Rebbilib initiative in the Pacific. Drawing on the practice of Micronesian master navigators, this sees the mapping of voyages and destinations as defined by what Pacific people value and where they want to go, not set by a foreign brains trust.

Second, we need to tip the scales of investment away from ‘what happened’ towards more learning, feedback and adaptation. Pooling local knowledge allows the ideas, beliefs and relationships which underpin social change to be understood and shared.

Finally, a solid look at governance arrangements for performance systems under a new development strategy is key. We already recognise the importance of gathering multiple perspectives, but we can do better to bring these to bear when it comes to the use, or indeed non-use or abuse, of findings.

Chris is a longtime friend of the Lab. His work with La Trobe University and the Developmental Leadership Program has inspired much of our analysis and approach. We love Chris for his encyclopaedic ability to find the exact paper we need, his knack for putting words to our thought bubbles and his commitment to positive deviance (indeed, he once ran a conference with this title).

Allan is a monitoring and evaluation specialist of Pacific Island heritage who has had a long career in international development including with the Pacific Leadership Program, the Solomon Islands Governance and Justice Programs, and The Pacific Community (SPC). Allan is renowned for the passion he brings to building a new generation of Pacific Island evaluators and contributing to the debate on ‘Pasefika’ ways of knowing and being.
