Shifting from extraction to nourishment
A 4-step guide to start decolonising evaluation in your work
On Monday 24 April 2023, inFocus and Data Collective held a short exploratory workshop on ‘Decolonisation in Evaluation.’ The purpose was to start a conversation on what decolonising impact and evaluation is, and what it could look like in the context of working from the UK.
Decolonising evaluation is about addressing the inherent power dynamics that exist in the evaluation process. Decolonisation is an ongoing process; a verb rather than a noun.
Drawing on our discussions, we've created this short step-by-step guide to start decolonising evaluation in your work. This guide is not comprehensive, but we hope that it will help you start asking the right questions. To help you shape your evaluation process, each section of this guide starts off with some questions you can apply to your work.
Step 1: Co-creating your evaluation framework
Questions to explore:
How are participants recruited into the evaluation programme? Are you reaching marginalised communities, or just working with those who are easy to engage?
Does the evaluation have any value for the participants? If not, why should the participants be involved?
How are you defining what to evaluate and measure? Who is shaping the evaluation framework? Are the participants involved in the process?
Is the evaluation method suitable for the participants? Is it accessible, engaging and genuinely going to make a difference?
How are you acknowledging and addressing power imbalances in the evaluation methodology? Are you allowing the space for interrogation and reflection of who makes the decisions?
How are you building evaluation and learning into your programme? Have you resourced enough time, capacity and funding to ensure that the learnings are useful to everyone involved?
So often, evaluation frameworks are extractive.
They are frequently added on to programmes and projects to meet the needs of a funder or investor, producing little value for the people who have engaged in the process and doing little to help the programme implement the learnings gathered. The knowledge extracted from participants ends up being useful to neither the programme nor the participants.
Co-creation of evaluation methods can change that. By shaping the evaluation framework with the participants, you are far more likely to produce reflections which are useful for the people involved, and show the funder and project delivery team what the communities actually need and want from the evaluation process.
While this does take more time and capacity, co-creation means that the evaluation method is accessible to the participants, and gives them the opportunity to own their own stories.
Step 2: Collecting data joyfully
Questions to explore:
How are you making the evaluation process nourishing for the participants? Are you compensating them for their time? What are the participants gaining from this evaluation?
Have you considered how culture may shape the data which is shared? How have you designed collection methods to suit the local context?
Is the format of the data collection suitable for the individual participants? For example, are you offering alternative methods for people with differing needs?
How are you allowing participants to challenge and interrogate the assumptions behind the evaluation data collection?
How are you making the data collection process joyful and energising for the participants?
Processes of collecting evaluation data can be reductive, and purposefully so. For example, some surveys predetermine and narrow down the possible learnings from an evaluation in order to produce answers that reinforce the status quo or match a specific narrative the programme or project managers want to tell, and to make the data easier to evaluate.
This means that organisations do not collect data which actually allows for learning in the way the participants may want. As one of the workshop participants said, “Learning is a living, embodied thing” and the process of collecting evaluation data can be a learning experience in itself.
To find evaluation data that is actually useful, you need to create space for multiple ways of engagement and knowledge sharing, working with the participants’ needs and desires. One way of doing this is using holistic design methodology during the collection of the data, following the knowledge and learnings rather than focussing on the outcomes.
Step 3: Analysing data with context
Questions to explore:
Who is processing the data?
What is lost in the process of translation and analysis? How are you grounding the analysis in the context of the communities and people participating?
How are you analysing the data? Are there voices which are more prominent than others? How are you allowing for nuance and diversity?
How can the participants be part of the analysis process?
Who are you analysing the data for? How can you ensure that the learnings which aren’t required for reporting are also observed?
Who is sense-making the analysis? How is this work accountable to the communities and participants involved?
Data analysis is the politics of translation. When we analyse evaluation data, we bring our own assumptions and understandings to the act of translating it into a report. We are often accountable to those in power rather than to the participants involved. As a result, much information and knowledge is lost or forgotten, and the value of the research is diminished.
Co-analysis can allow for a more holistic understanding of the evaluation process. It can help ensure that what the communities actually want to learn is centred. It also allows participants to own their own data, and tell their own stories.
Step 4: Sharing the learnings
Questions to explore:
How can you involve participants in the reporting process?
What ways are you using to share your learnings? Are they accessible to the participants and wider public?
People engage with information in different ways. What creative methods can you use to share the learnings?
Who tells the story? How are the stories of the people impacted framed?
Are you overstating the programme’s role and contribution? How are you ensuring that the agency of the participants and communities is part of the narrative?
When we share our learnings, we are really communicating a story. So often the story demonstrates a 'white saviour complex': an organisation came in and, through the project it ran, 'saved' the participants involved. While this framing may feel useful for securing funding and donations, it tends to erase the work done by the people and local communities involved.
When producing the reporting, it's important to allow the participants to own and tell their own stories. As one of the participants stated, we need to 'put participants in the driving seat'. This can be done by inviting participants to present the learnings and take part in creating the narrative itself. At a minimum, giving participants the chance to review the reporting before it is published is a respectful practice.
In Summary
Decolonising evaluation is about addressing the inherent power dynamics of the whole evaluation process – from shaping the process to collecting data, analysing the data and sharing the results. There is no simple or quick solution to these power dynamics; it is an ongoing process which needs to be considered and re-evaluated iteratively.
To start decolonising evaluation, we need to design evaluation processes with intention: involving participants at every stage, ensuring the learning process is not extractive and benefits the participants, and communicating learnings that accurately reflect what the participants want to share rather than what the organisation wants to communicate.
This workshop was supported by Catalyst, a network helping the UK third sector grow its digital skills and processes. Data Collective and inFocus are members of their collaborative circle, which helps tech and data for good organisations connect and collaborate together.
This post was initially published on the Data Collective website, and was written by Nish Doshi, Zainab Ekrayem and Tom Keyte.