Breaking down “Feedback”: A typology

With the literature review and interviews done, I have established in my mind that feedback can mean everything and pretty much nothing. And it definitely does not need to fall under “participation” (the topic for the next blog post). Specificity is everything here. I have therefore developed the following simple typology to help me distinguish and analyse different types of feedback:

  • One-way feedback to beneficiaries
  • One-way feedback from beneficiaries
  • Two-way feedback, with interactive conversation between beneficiaries and evaluators, but with the evaluation team retaining independence and power
  • Two-way feedback through participatory evaluation, with beneficiaries as part of the evaluation team

None of these is “good” or “bad”. Each has value in and of itself; your choices will be defined by your context.

To illustrate, I have tried to map examples of each of these types of feedback to the evaluation stages that I discussed in my previous post. This is very much early thinking, so feel free to add your thoughts…

Matrix 1: Illustrative examples of beneficiary feedback at different stages of the evaluation process

Evaluation Design
  • One-way feedback to beneficiaries: Dissemination of the evaluation protocol to enhance informed consent and meaningful engagement with the content and process of the evaluation
  • One-way feedback from beneficiaries: Views of beneficiaries sought on the evaluation questions/protocol
  • Two-way feedback (interactive conversation between beneficiaries and evaluators): Joint discussion on the evaluation protocol/questions. This could be at different stages of design. Ultimate design decisions rest with the evaluation team
  • Two-way feedback (participatory evaluation): Beneficiaries input into the evaluation design as evaluation team members

Data Collection
  • One-way feedback to beneficiaries: Beneficiaries informed of data collection processes, including those with other stakeholders
  • One-way feedback from beneficiaries: Views of beneficiaries sought alongside other data collection methods. This could be through survey, interview or focus group discussion where evaluators extract data with no return of, or joint discussion on, the data
  • Two-way feedback (interactive conversation between beneficiaries and evaluators): Beneficiaries question each other and the evaluators as data is collected, e.g. through interactive focus group discussions or discussion of survey findings, in order to challenge their own and the evaluators’ assumptions and interpretations
  • Two-way feedback (participatory evaluation): Beneficiaries engaged in data collection as members of the team

Validation and Analysis
  • One-way feedback to beneficiaries: Sharing of early findings and/or analysis with beneficiaries
  • One-way feedback from beneficiaries: Beneficiaries share their views of early findings with the evaluation team, questioning or validating these
  • Two-way feedback (interactive conversation between beneficiaries and evaluators): Beneficiaries and the evaluation team discuss and refine findings together, jointly discussing recommendations. Ultimate decisions rest with the evaluation team
  • Two-way feedback (participatory evaluation): Beneficiaries engaged as part of the evaluation team in validation and analysis of evaluation findings

Dissemination and Communication
  • One-way feedback to beneficiaries: Dissemination of relevant evaluation findings in an appropriate format (poster/document/radio/video/face to face)
  • One-way feedback from beneficiaries: Beneficiaries share their views on the materials shared; this can be on content or methodology
  • Two-way feedback (interactive conversation between beneficiaries and evaluators): Beneficiaries and the evaluation team or programme team (depending on the dissemination and communication plan) discuss findings together, learning jointly from the process
  • Two-way feedback (participatory evaluation): Beneficiaries, as part of the evaluation team, design and support dissemination and communication activities