As part of the Evidence Stories initiative, #EvidenceMatters aims to give a voice to young and emerging evidence champions who have just entered the field of evaluation, and to share their thoughts. In this Q&A we sit down with Genevieve Quinn, a research assistant at UN Women working on an evaluation to support National Action Plans on Women, Peace and Security. Reflecting on how to make evaluation more accessible, she says it is important that evaluation findings are presented to non-evaluator audiences and that civil society members are involved in, and contribute their thoughts to, these processes.
Tell us a little about yourself and your current research focus at UN Women.
I am relatively new to the field of evaluation, coming into the work directly from finishing my PhD in Politics. I have an interdisciplinary academic background in public and social policy, law, and politics, and was keen to start applying the analytical and research skills gained during my years of academic study in a concrete and purposeful way.
For any student of politics, working for the UN is a dream job, and when I saw an internship posting from the Independent Evaluation Office at UN Women seeking candidates with strong research and analytical skills, I decided to apply. I have since transitioned from intern to research assistant, and have been part of the corporate thematic evaluation of UN Women’s support to National Action Plans (NAPs) on Women, Peace and Security.
What does the term ‘gender-responsive evaluation’ mean to you?
To me, gender-responsive evaluation means ensuring that no one is left behind, and that the needs and voices of women which might otherwise go unheard are raised. By incorporating Gender Equality and Women’s Empowerment (GEWE) dimensions into evaluation methods and processes, gender-responsive evaluation not only promotes learning and accountability for gender equality and women’s empowerment through its findings and recommendations; the inclusiveness of the process itself also empowers rights holders and can help to mitigate the discrimination against and exclusion of women.
One anecdote which to me typifies why gender-responsive evaluation is so important involves bathrooms in refugee camps. Reading project reports during the inception phase of our WPS evaluation, I learned that UN Women’s programming includes measures to ensure that women’s bathrooms in refugee camps have lighting at night, as without lighting, women will avoid using the bathrooms for fear of being assaulted.
Something as simple as lighting can critically affect the safety and security of women. This reflects the necessity for Peace and Security/Humanitarian programming, and in turn the evaluation processes that assess this programming, to take into account the lived realities of women specifically.
Why do you think the use of good evidence is important in making evaluation more gender responsive?
Coming from an academic background, I see the strength of evidence, particularly triangulation of multiple streams of data to support a finding or conclusion, as key for promoting accountability and, in turn, fostering change.
Using evidence to promote understanding of the structural, cultural, and other causes that inhibit gender equality and women’s empowerment, and incorporating those learnings into programming, is key to advancing gender equality and women’s empowerment agendas. Gender-responsive evaluation, through both the process itself and its results, helps to achieve this.
As an emerging evaluator, what are your initial impressions about the evaluation profession? Where do you think the field can improve to make evaluation more accessible?
My initial impression of evaluation is that it is very fast paced! During the process, you are often juggling many different tasks: you might be conducting stakeholder interviews while simultaneously doing desk reviews or creating surveys. The dynamic nature of the process necessitates adaptability. Sometimes new questions emerge from the interview process that you didn’t think of before, or old questions need to be reformulated or abandoned based on the evidence you are collecting. Then there are other considerations you need to balance, such as political sensitivities, and making sure interviewees feel comfortable sharing their insights. Every day has brought new learnings, and that is one of the things I’ve enjoyed most!
In terms of making evaluation more accessible, I think it would be useful to focus on advocacy that translates ‘high level’, more abstract thinking into concrete examples of how evaluation works and the results it achieves (which I understand is a goal of this publication!).
I recently attended a Webinar on an evaluation that did this really well: the process and findings were presented for a non-evaluator audience, with civil society members involved in the process also getting to share their insights. It was very engaging and left me wanting to know more.
Have you heard about the term VOPE, and are you part of one?
The acronym was familiar, and I looked it up. I’m not currently part of one but would definitely be interested in joining. I think bringing together different types of practitioners (academics, government, NGOs) is a great idea. The more diverse backgrounds and skill sets that can be brought into evaluation, the better!
Why did you enter this field? Was there any specific personal story that made you decide to work as an evaluator?
I entered the field of evaluation because I was keen to put my academic skills to use in a concrete and purposeful way, and it seemed an ideal profession to do so. While I don’t have an impact story that brought me into the field, during my time in evaluation I have come to really appreciate the personal interactions that come with the job. There are not many professions where you get to engage with such a wide range of stakeholders, from government officials to civil society representatives to the rights holders being directly impacted by the projects the evaluation is assessing. This engagement has been really rewarding.