4 Steps to Beneficiary Feedback in Evaluation

Feedback, for me, is about a conversation. This is what distinguishes it from simple data collection. In evaluation, this conversation may occur during the data collection phase. But what about the other stages of the evaluation process? What about a conversation with beneficiaries during design? This could be part of participatory design, or it could be about respecting basic evaluation ethics to ensure that consent to participate in the evaluation is informed – rather than evaluators turning up only to find that the participants of a focus group discussion, for example, are quite unclear as to why we/they are there. What about a conversation to ensure that our provisional evaluation findings, including judgements, are on track? That they haven’t been subject to our world view to the point that we may have missed crucial cues or been unable to break through entrenched power relations? What about ensuring that findings, including lessons learned, are shared so that beneficiary groups involved in a global programme can learn from and adapt successful practice in other parts of the world? This isn’t just about manners, ethics and respect. It is also about ensuring we have robust evaluation findings.

Could we expect evaluators to consider a four-part process for engaging beneficiary feedback? The inability to conduct any of the steps would need to be argued in the inception phase – it may be a lack of resources, including time, for example. But the expectation would be that due consideration be given to these steps, in the same way as it would be to developing evaluation questions.

Here are the four steps:

1. Feedback as part of evaluation design: could be sharing of, consultation on, or participatory design of the evaluation
2. Feedback as part of data collection: could be extractive, interactive or participatory collection of information
3. Feedback as part of joint validation and/or analysis of information: could be extractive or participatory
4. Feedback on end product, response and/or follow-up: could be simple dissemination or participatory engagement for future actions.

Would this be reasonable?

NB: Appropriate methods would need to be selected in response to the evaluation questions. They may be participatory or they may be extractive; each evaluation design will need to select what is most appropriate to answer the given question.

4 thoughts on “4 Steps to Beneficiary Feedback in Evaluation”

  1. Hi Leslie,
This is good, but I would add one more part, and that is participation in making the recommendations based on the four steps you listed above. For a detailed description of an action research approach (and what I consider to be the approach we should all take to development), see ‘Flawless Consulting’ by Peter Block. His approach works for evaluations because a good evaluation should be a process for learning to do something better.

And there is one other thing. An organization’s ability to effectively use feedback depends on its level of cooperative capacity. If an organization cannot cooperate internally or externally, then it won’t be able to listen and respond to feedback from the field. Feel free to check out my website for (much) more detail on cooperative capacity.

    Thanks and cheers

    Frank Page


    • Many thanks, Frank, for your comment. I fully agree with you, and thanks for the nudge to be more explicit about participation in the making of recommendations. I have now added this to my Step 3, “Feedback as part of joint validation and/or analysis”. A two-way feedback process here would involve joint analysis of early findings and development of recommendations. Very helpful and much appreciated.

      Your point on cooperative capacity is also well noted. I think this ties in with current discussions NGOs are having here in the UK around “adaptive programming”. Requiring feedback is all very well, but if logframes and performance management systems are not flexible, then feedback is at best tokenistic during the programme cycle and at worst unethical and a waste of everyone’s time. Leading to “uncooperative capacity”, maybe? This is where we need to urge honesty and transparency around what “feedback” actually means and what its implications really are for those who give of their valuable time to provide it.


  2. Hey, thanks for this interesting article.
    I like (and share) the idea of looking at ‘beneficiary feedback’ as a conversation.

    From this perspective, I view one important aspect of this conversation as the shared capacity/power for both interlocutors (evaluator and beneficiary) to initiate it. Ideally this conversation should happen throughout the project cycle (incl. planning and project implementation) and – as pointed out by Frank – the information management tools should be flexible enough not only to integrate this feedback, but to be re-shaped and potentially modelled by it.

    This is one of the (many) challenges I try to tackle with a new application (e-smile.org).

    Cheers,
    Christophe


    • Hi Christophe, you are so right about the information management tools being flexible enough to be re-shaped by feedback, let alone modelled by it. Your app looks really interesting. Please share any examples of how it works in practice when it is up and running. It would be great to follow the process.

