Here are some lessons drawn from GlobalGiving’s ongoing Feedback Fund, an experiment to improve the ways organizations listen to people. Last time I shared examples of feedback loops. This time the lessons are about how using data about feedback loops can help us make smarter funding decisions.
Feedback Labs (a consortium co-founded by GlobalGiving in 2013) provides a convenient self-diagnostic quiz that organizations can use to understand how well they are listening to the people they try to serve. It breaks the feedback process down into six steps:
Answering a few questions gives you an overall score. In my hypothetical example, I need to do more community dialogue.
We asked all Feedback Fund applicants to take this quiz and analyzed their existing feedback systems. The results tell us what each organization does well and where it struggles.
More effective organizations struggle the most with community buy-in
The chart below shows quiz scores for all applicants. Scoring at least 100 (y-axis) means you can listen effectively. Even when an organization has mastered the other five parts of the feedback loop (design, collect, analyze, dialogue, course correct), buy-in remains the hardest step.
At the opposite end of that chart, the red dots represent organizations that struggle with many stages of the feedback loop. They have no system for absorbing feedback into their programs. Many of these organizations choose to start by “collecting feedback.” Their applications focused heavily on how GlobalGiving could help them collect data, sometimes ignoring the other five steps entirely.
However, we believe the first thing organizations ought to focus on, because it yields the greatest improvements, is better dialogue, paired with some course correction based on feedback. Statistically, better dialogue correlated most strongly with higher overall scores. It is a hard step, but it doesn’t require technology. It requires intentionality within the organization.
When we found ourselves in the unusual role of a grantmaker choosing organizations, we decided to give organizations at both ends of the spectrum funding to experiment with feedback loops. What we cared about most was whether they had really thought about how feedback could help them improve a specific program. Later, I ran this analysis of quiz scores against the organizations we chose to fund (shown in green below) and those we chose not to fund (the red dotted line):
Guess what! The average feedback quiz scores for each part of the loop are pretty much the same between grantees and non-grantees, except in the case of dialogue and course correct. These steps are harder than the others, and differentiate great organizations from the rest, as I shared previously.
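The comparison above boils down to averaging each group’s score on each step of the loop and looking at the gap. Here is a minimal Python sketch of that exercise. The step names follow the quiz, but every score below is invented for illustration; the real analysis used actual applicant data.

```python
# Sketch: compare average quiz scores per feedback-loop step between
# funded and non-funded applicants. All scores are hypothetical.
from statistics import mean

STEPS = ["design", "collect", "analyze", "dialogue", "course_correct", "buy_in"]

# Each applicant: funded flag plus a score for each step of the loop.
applicants = [
    {"funded": True,  "scores": {"design": 18, "collect": 20, "analyze": 17,
                                 "dialogue": 19, "course_correct": 18, "buy_in": 12}},
    {"funded": True,  "scores": {"design": 16, "collect": 19, "analyze": 16,
                                 "dialogue": 17, "course_correct": 16, "buy_in": 10}},
    {"funded": False, "scores": {"design": 17, "collect": 20, "analyze": 16,
                                 "dialogue": 11, "course_correct": 10, "buy_in": 11}},
    {"funded": False, "scores": {"design": 15, "collect": 18, "analyze": 15,
                                 "dialogue": 10, "course_correct": 9,  "buy_in": 9}},
]

def average_by_step(group):
    """Mean score per feedback-loop step for a list of applicants."""
    return {step: mean(a["scores"][step] for a in group) for step in STEPS}

funded = average_by_step([a for a in applicants if a["funded"]])
not_funded = average_by_step([a for a in applicants if not a["funded"]])

for step in STEPS:
    gap = funded[step] - not_funded[step]
    print(f"{step:>14}: funded={funded[step]:.1f}  "
          f"non-funded={not_funded[step]:.1f}  gap={gap:+.1f}")
```

With numbers like these, the gaps cluster on dialogue and course correct while the other steps come out roughly even, which is the pattern described above.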
Even without the quantitative scores, our team could tell from the qualitative data (written applications) which organizations were more interested in using the fund to expand their community dialogue and course correct steps. Yay for qualitative data!
I believe an honest conversation with the people you aim to serve is far more valuable than pages of numbers in a spreadsheet. And it is much easier to quantify conversations with people than you think.