Diamonds for AWARD: Evidence in the quest for Women Scientists in African Agriculture

“Just look at our diamond!” It had pride of place within the team. Not some sparkling ring on a finger, but a quite complicated theory of change, feedback loops and all, redrawn in the shape of a diamond. It proudly displayed the essence of the programme they were managing: equipping top women agricultural scientists across sub-Saharan Africa to further their careers as leaders in the sector, and to develop innovations that could advance the prosperity and well-being of African smallholder farmers – most of whom are women.


By Zenda Ofir; Independent Evaluator, Vice-President of the International Development Evaluation Society (IDEAS) and former President, African Evaluation Association (AfrEA)

The diamonds (there were actually two) were developed in 2009-2010, before ‘theories of change’ became all the rage. They were the basis for all the evidence generated and used to manage the African Women in Agricultural Research and Development (AWARD) programme, launched in 2009 with funding from the Bill & Melinda Gates Foundation, USAID and the Agropolis Foundation.

At the time it engaged hundreds of top women scientists across 11 African countries in a comprehensive two-year career development programme with eight complementary components in the domains of mentoring, science and leadership.

Over the next five years – its first phase – AWARD grew from strength to strength under the leadership of its wonderfully committed and dynamic director, Vicki Wilde, and the great AWARD team. And it continues to this day under the equally committed and energetic guidance of its current director, Wanjiru Kamau-Rutenberg.

Importantly, this first phase of AWARD was exceptionally evidence-rich.

Adaptive management before it came into vogue

I worked with AWARD throughout this period, from 2009 to 2015. I became involved because in 2008 I had done a comparative evaluation for the CGIAR and for USAID/USDA of two major international fellowship programmes aimed at women in science in Africa. The next year, the experience of these two programmes led to AWARD, and the newly established Steering Committee asked me to become their M&E Advisor. But I soon started to play the role of an internal evaluator, working closely with the programme team in the style of developmental evaluation. I guided the development of their M&E system, helped with analyses, and stimulated reflection on unfolding issues.

We applied systems thinking and adaptive management in AWARD long before it became fashionable. And the AWARD diamonds were always with the team as reference.

Right at the start, on my advice, the Steering Committee decided that rather than using an experimental design, as some expected, they would support a theory-based, learning-oriented approach to evaluating performance and impact. They agreed that an experimental design would have been extractive, too disempowering for a fledgling programme, and unable to reflect the role that evaluation processes and evaluation evidence should play in Africa:

  • It should support critical, evaluative thinking.
  • It should inspire confidence and course corrections as lessons are learned while interventions are designed, implemented, adjusted and evolved into new phases.
  • It should cultivate an internalised sense of accountability towards all stakeholders, not only towards “donors” – and importantly, towards those the programme is intended to serve.
  • It should contribute to credible insights about how such programmes can be successfully delivered.

To emphasise this, we developed a set of principles that directed the development of the M&E system:

  • Empower AWARD Fellows to use and benefit from M&E
  • Make M&E useful to multiple stakeholders
  • Balance accountability
  • Endorse appropriate and ethical methods
  • Be effective, yet ‘light’ and cost-effective
  • Innovate
  • Focus on positive, enduring change
  • Treat M&E as a management priority.

Combining inputs to develop the AWARD diamonds


We started by developing the programme logic, a consolidated expression of the programme team’s and stakeholders’ theory of how change might happen. I facilitated meetings with the team and with the first cohorts of Fellows to work out how they thought change would happen, given what AWARD aimed to do. We followed an adapted outcome mapping approach, thinking about the systems within which the programme was being implemented, and using tailored adaptations of a generic roadmap to personal leadership. We did not draw a systems map – at the time we did not emphasise its importance as we do today – but we identified a range of assumptions and connections underlying the theory of change that we set out to test. We also drew from prior knowledge about such programmes in the research literature, and from evaluations.

Combining all of these inputs, we brought the diamonds into being. One of the team members, Marco Noordeloos (later responsible for AWARD’s learning and outreach), turned my ugly-duckling theory of change diagrams into diamond artistry. Then we designed the M&E system to test the theory of change – or at least, to generate sufficient evidence to help the Steering Committee (which included the programme financiers), the management team and the Fellows understand how to keep improving towards success.

Remember, the Fellows and programme team, not ‘outsiders’, had defined what success would look like, broadly guided by the programme objectives. This helped them to ‘own’ the expected programme results, and the theory of change.

Most importantly, the programme leadership and team regularly reflected on what they were learning, staying true to the M&E principles. They also shared the data and trends at least once per year with both the Steering Committee and the Fellows, encouraging reflection on what was taking place.

We were lucky that the then 13-person Steering Committee, made up of the programme financiers (‘donors’) and African as well as international specialists in relevant areas of work, were wholeheartedly supportive of this approach. They were deeply interested in the evidence that the programme produced, and met once a year for several days to discuss the evidence and what was learned for the programme going forward. The financiers were also visionary, allocating almost 10 percent of the budget to the knowledge generation, evaluative learning and accountability components.

Evidence at the core of programme design and implementation  

At different stages of implementation, we combined data and information from 15 different types of evidence-collection methods, using Dedoose software to capture everything and to help with the analyses. We tracked factual and perceptual, quantitative and qualitative data and information, and used eight different types of sources for effective triangulation.

Our methods included, among others, analysis of Fellows’ CVs to identify trends, their outputs such as publications and technical advancements, the perspectives of supervisors and observers in their environment, spontaneous emails about experiences and observations from Fellows and others around them, their own assessment of their experiences during their Fellowship and, very importantly, their impact stories. For the latter, we used rubrics to assess the strength of the evidence as well as the types of changes that were occurring.
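To make the rubric idea concrete, here is a minimal sketch in Python of how an impact story might be scored for strength of evidence. To be clear, this is not AWARD’s actual instrument: the rubric levels, the field names and the simple count-the-source-types scoring rule are all illustrative assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical rubric levels for strength of evidence (illustrative only).
EVIDENCE_STRENGTH = {
    1: "single source, anecdotal",
    2: "two independent source types agree",
    3: "three independent source types agree",
    4: "four or more independent source types, with documented outputs",
}

@dataclass
class ImpactStory:
    fellow_id: str
    narrative: str
    # Source types behind the story, e.g. self-report, supervisor interview, CV.
    sources: list = field(default_factory=list)
    # Types of change the story describes, e.g. confidence, leadership role.
    change_types: list = field(default_factory=list)

def score_evidence_strength(story: ImpactStory) -> int:
    """Map the number of independent source types to a rubric level (1-4)."""
    n = len(set(story.sources))
    return min(max(n, 1), 4)

# Example: a story triangulated across three independent source types.
story = ImpactStory(
    fellow_id="F-042",
    narrative="Promoted to lead her lab after the mentoring year; two new publications.",
    sources=["self-report", "supervisor interview", "CV analysis"],
    change_types=["leadership role", "scientific output"],
)

level = score_evidence_strength(story)
print(f"Evidence strength: level {level} ({EVIDENCE_STRENGTH[level]})")
```

In practice, of course, the rubric dimensions and level descriptors would be set by the evaluation team and applied through careful, judgement-heavy readings of each story; the sketch only shows how such judgements, once made, could be captured consistently alongside the qualitative narratives.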

We tracked whether implementation was of high quality, why things were going well or not, and what course corrections were necessary. We tracked Fellows’ achievements and the extent to which AWARD was likely to have contributed to these. We focused especially on identifying emerging outcomes – expected and unexpected, positive and negative – drawing together data from all sources.

All of this helped us to understand which components of the programme were essential for success; the value of synergistic partnerships; whether the sum of the activities’ results was greater and different from what could be expected from each component separately; which activities did not act synergistically; how everyone could do better to get to desirable outcomes; and where the theory of change broke down.

We tracked Fellows’ scientific as well as leadership achievements, and the ripple effects on those around them and on their institutions. We had a strong focus on trying to identify negative consequences of the programme – and there were a few. We even assessed whether there was anything transformative in the programme that would ensure that the changes the Fellows – and their institutions – experienced would be sustained.

We gained two more benefits from this extensive learning-oriented, improvement-focused M&E system. First, we developed AWSEM (pronounced ‘awesome’!) – AWARD’s African Women in Science Empowerment Model, which applied the well-known power framework (power within, power to do, power over, power with) in AWARD’s context, drawing on both quantitative trend data and rich qualitative information from different sources, with extensive triangulation. Second, we came to understand much more about which elements made for success, including the synergistic, reinforcing effects of several elements of the programme – its inspiring leadership course, the Fellows’ self-discovery during mentoring events, and the exposure opportunities that built up their confidence and profile.

Bumps in the road contribute to continuous learning

Photo: Some members of the AWARD team, Steering Committee, financiers and I, 2011

Of course, everything did not go as smoothly as this description would imply! The M&E team’s hiccups and failures were well documented and reflected upon. They faced methodological challenges, capacity challenges and challenges with the implementation of the M&E tools. But much went well too, and much was learned.

In its next phase, implemented over the past six years under Wanjiru’s leadership, AWARD has continued to grow from strength to strength. It has reached into more countries in Africa, increased its focus on institutional development, and is doing more to ensure that research in the agricultural sector is gender-responsive. And it has further expanded its M&E experience to include experimental evaluation designs.

We cannot (yet) say that the evidence has helped to save lives or pluck people out of poverty. Perhaps it will, eventually and indirectly, when the confidence, leadership and scientific skills of the AWARD Fellows have rippled out sufficiently to improve the wellbeing of the millions of smallholder farmers in Africa.

But we can say this: AWARD cultivated among the management team, among the Fellows and among the Steering Committee members from different parts of the world the strong sense that evaluative evidence is valuable; that it can help improve actions and accelerate progress towards success; that monitoring and evaluation can be empowering and useful, helping each do better in her own way; and that it can help us understand much more about how ‘development’ in this domain can best be done.

I have no doubt that the sound foundation laid by the evidence-rich first phase of AWARD helped it to expand with confidence, and with insight.

It is just a pity that the diamonds had to be set aside for another, hopefully equally or more sparkling M&E effort. But change is, after all, the only constant.
