New LinkedIn Group on Feedback

Hello,

Exciting news: following a series of learning events in London on DFID’s Beneficiary Feedback Mechanisms pilot programmes, a LinkedIn group has been set up to allow us all to share resources, ideas, experiences….

Come and join: https://www.linkedin.com/groups/7053295/profile

You can find case studies for the pilots here,

and a really excellent film that is great for advocacy/ training purposes here.

New feedback/ participation in evaluation resources

Irene Guijt and I are writing a blog series on Better Evaluation for June and July. You can find our first blog here:

http://betterevaluation.org/blog/four_reflections_on_participation_in_evaluation

  

Keystone recently guest edited the Alliance magazine special feature on feedback – Beyond Accountability: Feedback as Transformation – read the lead article here.

Do please share any other good and relevant reads with me and I can add them, continuing on from the bibliography that you can find here.

DFID Working Paper: Here is the final version

Thank you to everyone for comments, inputs and support. The report is now out, although still to be launched. You can find it attached here.

Please do feel free to print out the checklist and analytical frameworks that I posted below. Do use them, revise them and share your improved versions…. DfID Beneficiary Feedback, Feb 2015 – Final

Also attached is the PowerPoint presentation that I shared at the UK Evaluation Society Conference last week: Beneficiary Feedback in Evaluation_UKES Methods Workshop

Beneficiary Feedback in Evaluation: Downloadable Checklist for Commissioners and Evaluators

Attached is a set of checklists that I developed for:

1. Evaluation Commissioners

2. Evaluation Practitioners

to help integrate a beneficiary feedback approach into evaluation.

Please feel free to pilot and use them, and to feed back on what was and wasn’t useful. This is a live tool that I will happily keep reviewing based on feedback. Let me know how it goes….

Checklist for evaluation commissioners

Checklist for evaluators

Capitalising on Momentum

So it has been a busy month finalising the report on Beneficiary Feedback for DFID, and we already have people up for piloting/ testing the approach: two forthcoming evaluations in the Africa Region, one joint monitoring mission between DFID and the World Bank, and hopefully another evaluation of a DFID-funded NGO programme. The draft report has been presented to DFID and to the UK Learning Group on Beneficiary Feedback, and will be presented to Interaction next week. Positive responses so far. The World Bank is also keen to review it, particularly in view of its commitment to 100% beneficiary feedback. Hopefully this paper, along with this new paper from Interaction, will help with the “how to”: doing this practically, meaningfully and ethically in an evaluation context.

But I would love to get some discussion going here.

Are you trying to engage a beneficiary feedback approach to evaluation? What are your successes and challenges?

Have you read Interaction’s new paper and what do you think?

As for my paper, it is currently being formatted and will be published as a DFID working paper in the next few weeks. I will share then.

Analytical framework for engaging a beneficiary feedback approach to your evaluation

I have designed what I hope is a really simple tool to help map out what types of beneficiary feedback might be most appropriate at different evaluation stages. Would anyone be interested in testing out this tool in any evaluations that they are commissioning/ tendering for/ designing?

As you can see below, it is a straightforward matrix that requires you to consider and map out the type of feedback that you think would be most appropriate at each stage of your evaluation. You can then use this to consider which methods might be most useful as well as what resources you might require. Sometimes there may be no extra cost, sometimes there may be.

You can copy the matrix from below, or I have attached it so that you can (hopefully) print it out and use it: Beneficiary Feedback in Evaluation Matrix

Please note that different types of feedback may be more appropriate at different stages.

Columns – types of feedback:
  • One-way feedback to beneficiaries
  • One-way feedback from beneficiaries
  • Two-way feedback – interactive conversation between beneficiaries and evaluators
  • Two-way feedback through participatory evaluation

Rows – evaluation stages:
  • Evaluation Design
  • Data Collection
  • Validation and Analysis
  • Dissemination and Communication

Do let us know how it goes! What were your challenges? Which approaches/ methods worked best? This website has received over 1100 views so far from all over the world so I hope we can share our learning together.

Breaking down “Feedback”: A typology

With the literature review and interviews done, I have established in my mind that feedback can mean everything and pretty much nothing. And it definitely does not need to fall under “participation” (a topic for the next blog post). Specificity is everything in this case. I have therefore developed the following simple typology to help me distinguish and analyse different types of feedback:

  • One-way feedback to beneficiaries
  • One-way feedback from beneficiaries
  • Two-way feedback, with interactive conversation between beneficiaries and evaluators but with the evaluation team retaining independence and power
  • Two-way feedback through participatory evaluation, with beneficiaries as part of the evaluation team

None of these types is “good” or “bad”. Each has value in and of itself; choices will be defined by your context.

To illustrate, I have tried to map examples of each of these types of feedback to the evaluation stages that I discussed in my previous post. This is very much early thinking, so feel free to add your thoughts…

Matrix 1: Illustrative examples of beneficiary feedback at different stages of the evaluation process

Evaluation Design
  • One-way feedback to beneficiaries: dissemination of the evaluation protocol to enhance informed consent and meaningful engagement with the content and process of the evaluation.
  • One-way feedback from beneficiaries: views of beneficiaries sought on the evaluation questions/ protocol.
  • Two-way feedback (interactive conversation between beneficiaries and evaluators): joint discussion on the evaluation protocol/ questions; this could happen at different stages of design, with ultimate design decisions resting with the evaluation team.
  • Two-way feedback through participatory evaluation: beneficiaries input into the evaluation design as evaluation team members.

Data Collection
  • One-way feedback to beneficiaries: beneficiaries informed of data collection processes, including those with other stakeholders.
  • One-way feedback from beneficiaries: views of beneficiaries sought alongside other data collection methods, e.g. through surveys, interviews or focus group discussions in which evaluators extract data with no return of, or joint discussion on, that data.
  • Two-way feedback (interactive conversation between beneficiaries and evaluators): beneficiaries question each other and the evaluators as data is collected, e.g. through interactive focus group discussions or discussion of survey findings, in order to challenge their own and the evaluators’ assumptions and interpretations.
  • Two-way feedback through participatory evaluation: beneficiaries engaged in data collection as members of the team.

Validation and Analysis
  • One-way feedback to beneficiaries: sharing of early findings and/ or analysis with beneficiaries.
  • One-way feedback from beneficiaries: beneficiaries share their views of early findings with the evaluation team, questioning or validating them.
  • Two-way feedback (interactive conversation between beneficiaries and evaluators): beneficiaries and the evaluation team discuss and refine findings together, jointly discussing recommendations; ultimate decisions rest with the evaluation team.
  • Two-way feedback through participatory evaluation: beneficiaries engaged as part of the evaluation team in the validation and analysis of evaluation findings.

Dissemination and Communication
  • One-way feedback to beneficiaries: dissemination of relevant evaluation findings in an appropriate format (poster/ document/ radio/ video/ face to face).
  • One-way feedback from beneficiaries: beneficiaries share their views on the materials shared; this can be on content or methodology.
  • Two-way feedback (interactive conversation between beneficiaries and evaluators): beneficiaries and the evaluation team or programme team (depending on the dissemination and communication plan) discuss findings together, learning jointly from the process.
  • Two-way feedback through participatory evaluation: beneficiaries, as part of the evaluation team, design and support dissemination and communication activities.