In April 2019, the Jordan Development Evaluation Association (EvalJordan) participated in EvalPartners Flagship 1 Program: “Towards formalizing the National Evaluation Policy and Practices in Jordan”. Our Q&A with the flagship program coordinator and Vice President of EvalJordan, Hayat Askar, gives more detail about the roll-out of the Flagship in Jordan. She also shares her views on some of the challenges with closing the M&E gap, and her thoughts about the EvalAgenda as the pathway for a more evidence-based culture.
How did EvalJordan participate in the EvalPartners Flagship 1 Building National Evaluation Capacities program?
EvalJordan implemented the project through a consultative approach with relevant partners and stakeholders, with financial and technical support from UNICEF.
The main activity was the Mapping of Monitoring and Evaluation (M&E) Capacities and Practices in Jordan that EvalJordan conducted to better understand the existing evaluation systems, capacities and efforts that govern the evaluation practices in Jordan.
The mapping involved 53 organizations across three different stakeholder types.
The study was essential to inform EvalJordan’s strategy as well as the National Evaluation policy/framework efforts that were delayed due to the Covid-19 crisis.
You can find out more about the study by clicking here.
What have been the key lessons from this process?
One of the lessons I learned was that the quality of the work should be the first priority, however limited the time allocated for such projects. One of the challenges we faced was that the exercise's timeline and duration were very short.
This put more pressure on us, the Board of EvalJordan, who had to invest extra effort and time to ensure we were satisfied with the deliverables. We were also transparent with EvalPartners, who supported us and extended the project to enable us to get the best out of this exercise.
Another lesson learned is that perceptions are not always enough. The mapping findings represented respondents' perceptions. On the one hand, this allowed us to see the situation through the lens of individual Monitoring and Evaluation practitioners, and individuals are actually the main beneficiaries of EvalJordan.
However, the accuracy of the results relied primarily on the honesty and perceptions of the respondents in answering the interview questions. The results would have been more reliable if they had been supported by real evidence and examples from the interviewees' work. This is something EvalJordan cannot achieve alone; it would ideally be accomplished with an official government entity leading such an exercise, at least in the case of the government organizations that were EvalJordan's main priority.
Do you think evidence is sufficiently used at national level?
A photo from EvalJordan’s second evaluation days. It was organized for government organizations taking part in the study. The topics were based on the results of the mapping.
The interviewed organizations believed that their organizations reflect the M&E results and findings in relevant plans and strategies, and that there is a systematic way of sharing evaluation findings with all staff involved in the work.
This, however, varied based on the organization's sector (local vs. international). But this raises another question: are these M&E results enough to fill the evidence gap and to inform plans and policies?
Gaps in the use of qualitative data collection methods, reliance on hand-written data collection forms, and heavy dependence on donor funding for conducting evaluations are all examples still observed in some organizations, and they limit the sufficient use of data at the national level.
What can evaluators do to become Evidence Champions and promote the use of evidence at national level?
I have always believed in the importance of a culture of evaluation and evaluative thinking. Evaluators have a great role to play in ensuring that everyone starts to see the value behind evaluation. But we cannot assume that this is the sole role of evaluators.
More is expected from organizations and leaders. One of the challenges shared by EvalJordan's LinkedIn followers during gLOCAL Evaluation Week was the need to actually use the lessons learnt from evaluations, so that an evaluation is not just another report on the shelf or a checkbox ticked for donor requirements.
What do you think is the meaning of the EvalAgenda, and how should evaluators approach the EvalAgenda in this Decade of Action?
I see the EvalAgenda as the pathway towards a more evidence-based culture, one that is built on partnership and accountability. It is everyone's responsibility to make the changes needed to meet their own organization's goals, the national goals, and thus the Global 2030 Agenda. Evaluators need to believe that they are agents of change, and that however small the results they share may seem, they are contributing towards a bigger change.