“From Feedback to Action”: Why so much talk and so little action?

May 30, 2017 | Sarah Cechvala


Technological advances have compelled the humanitarian sector to become increasingly digitally versed. The use of mobile and digitally based tools progressively provides humanitarian agencies with the means to capture more feedback data from beneficiaries. And yet, more feedback data has not necessarily led to increased response or adaptation. This blog explores new research undertaken by CDA and the IRC into why we have not seen increased feedback utilization in our decision-making processes.


Globally, technology has accelerated our ability to listen and communicate with one another. Increasingly, those living in the most remote places find themselves far less isolated as access to mobile networks and online platforms grows exponentially. In 2016, for example, there were 731 million SIM cards in use across Sub-Saharan Africa; that number is expected to reach one billion by 2020, with mobile broadband connections reaching nearly half a billion, double the 2016 figure.

Technological advances have compelled the humanitarian sector to become increasingly digitally versed. A steady rise in the use of mobile data collection platforms (such as KoBo Toolbox) and data visualization software (to name a few) has sought to enhance the efficacy of agencies’ response efforts. For one, increased reliance on online platforms and remote data collection mechanisms can increase connectivity and improve access between humanitarians and those at the receiving end of aid, particularly in insecure and remote contexts. Progressively, the use of mobile and digitally based tools provides humanitarian agencies with the means for wider capture of feedback data from beneficiaries. And yet, increased feedback data has not necessarily led to increased response or adaptation. A recent ICRC report explains that even as the humanitarian community finds itself more ‘digitally present’, accountability to and engagement with affected populations remain areas that have seen limited progress over the years.

So much talk, and so little action!

If as a sector we seek to be more accountable to those we serve, then why do we so often exclude their perspectives when making program and operational decisions? And specifically, with the increased capabilities to reach communities and gather their feedback, why are their voices not used more deliberately and systematically in our decision-making? What can the sector do to more adequately encourage the use of feedback from affected populations?

Over the course of a year-long project, funded by the U.S. State Department’s Bureau of Population, Refugees, and Migration, and in partnership with the International Rescue Committee, we explored these questions. Our research posited that gaps in institutional capacities to understand beneficiary feedback, along with limited incentive structures to support adaptive programming, may be core inhibitors to wider utilization of feedback in programmatic decisions. Therefore, if provided with tools and support that ease the use of feedback in decision-making, humanitarian agency staff will be more likely to use the feedback they have gathered.

We tested this hypothesis by working with agencies responding to the refugee crisis in both urban and rural Uganda. First, we used a light-touch approach that provided agencies with tools and resources to strengthen capacity and awareness of the importance of beneficiary feedback – and tested progress through baseline and endline assessments (led by IRC). Second, we worked directly with six diverse organizations (national and international) in a coaching role to diagnose existing practices and offer options for bolstering the use of feedback across the organization (led by CDA). [We discuss the findings from the coaching part of the research below.]

What is missing when translating listening to action?

It comes from the top. Prioritization of feedback by senior management is a significant factor that enables agencies to effectively utilize feedback. Leadership buy-in can help to better resource a feedback mechanism, and often encourages staff to undertake innovative and flexible approaches to collecting, analyzing, using, and responding to feedback. While champions are important, they are insufficient. As a recent CDA-Bond report explains, “The positive deviants in the system are organizations who set the bar high for themselves whether or not donors require them to demonstrate the establishment of accountability and feedback mechanisms. There is no surprise then that numerous case studies and learning events described how people in management positions can hinder or advance staff commitments and practices related to accountability.”

It starts with your systems! There seemed to be a connection between consistent internal processes that were well understood by all “users” (staff, volunteers, and beneficiaries) and the organization’s ability to effectively use feedback data. Understanding how existing internal referral processes function – e.g. how feedback is shared, with whom, when, and in what format – and how to bolster those structures can fundamentally ease information flows and improve decision-making.

Put your money where your mouth is. Increased pressure from donors, international headquarters, and shifting agency priorities to focus on effective feedback practices has not been accompanied by increased resources or technical support. More often than not, gaps in effective practice relate directly to the lack of resources for feedback mechanisms, including the hiring and/or training of staff so they have adequate capacities and skills to manage feedback data for effective use. If the sector intends to prioritize accountability to affected people, then programs and funding mechanisms must include adequate resourcing to collect, analyze, use, and respond to beneficiary feedback.

Asymmetric focus on the ‘what’ over the ‘how.’ Feedback collection largely focuses on questions about specific programmatic endeavors (e.g. the location of latrines or procedures for food distribution), and much less on how the organization operates: staff behavior, overall mission, mandate, and the content of services/programs. Narrowing feedback to programmatic issues allows for easier immediate, usually field-level, course corrections, but limits the opportunity for feedback to spark or support more profound organizational shifts. Integrating feedback channels (e.g. targeted questions as part of standard reporting processes or strategic focus group discussions) that gather feedback about the organization more broadly (staff, mandate, program goals) may allow beneficiaries to more aptly inform and influence how we pursue our larger institutional objectives.

Reflect, Learn, Share, Repeat. Limited time to reflect and share lessons (positive and negative) across the agency means that good practices, which often depend on the interest and capabilities of the staff leading them, are overlooked. Dedicating specific and consistent time for horizontal learning from successes and failures, and sharing those lessons with peer organizations, can establish a ‘feedback culture’ within the organization and potentially across the sector.

“Accountability starts with us”

A colleague in Pakistan once said to me that in order to encourage more successful accountability practices institutionally, his organization needed to prioritize their internal functions and feedback structures. Fundamentally he explained that, “accountability starts with us.” His sentiment underscores a key theme across these findings, which is that successful feedback utilization practices require cogent and robust institutional structures, processes, and most importantly, cultures of feedback. Excitement about communication technology and digital data collection and visualization must therefore be accompanied by sound internal strategies, capable staff, adequate resourcing, and organizational will and desire. Otherwise, technological solutions offer little in the way of advancing the organization’s commitments and goals of accountability, and far too often only become a “means to their own end.”

You can find the report in its entirety here.

About this article

We value your comments, questions, and insights. Please feel free to post your comments to this blog, or to contact the author at scechvala@cdacollaborative.org with your reactions or suggestions for further research or discussion. Subscribe to our newsletter here to be notified of new blog posts and studies once they are made public.

You might also be interested in reading the case studies this blog post draws from:

About the author(s)

Sarah Cechvala is a Senior Program Manager at CDA Collaborative Learning Projects. Her learning and advisory focus is on conflict sensitivity, accountability and feedback loops, conflict-sensitive business practice, and corporate social impacts. Sarah has facilitated collaborative learning processes and field research in Africa, Asia, and Latin America. Recently, she supported the Kenya Red Cross Society in capturing their experiences in mainstreaming accountability to communities in an operational case study. She holds an MA from Georgetown University and a BA from Boston University.