Over the years GlobalGiving has become increasingly focused on helping organizations around the world better listen to the people they serve. We launched the Storytelling project in 2010 as one way to give communities a voice and get organizations to listen. We collected over 60,000 stories from East Africa and provided analysis tools to extract insights from what people talk about.
Now, GlobalGiving’s Feedback Fund is taking a different approach. We’re supporting 19 of our nonprofit partners as they build feedback loops with the people they serve. We held a webinar to introduce the Feedback Quiz (a self-diagnostic tool) and the Feedback Labs Toolkit, a collection of hundreds of reviewed feedback tools.
We thought, if each organization could create a customized feedback system, we might glean insights and lessons from these experiments that would benefit all GlobalGiving nonprofit partners.
This is the first in what will be a series of blog posts about what we’re learning. Here are some of our initial insights about how technology-aided feedback loops work for NGOs in practice:
Simplest: Just ask people if it made a difference
Every organization has to start somewhere. If they’ve never asked their constituents for feedback before, their story will sound like this:
One GlobalGiving nonprofit partner recently sent 6 solar lanterns to a remote village in Liberia. Nobody has a phone there, according to the founder. He chose to start in this village because he knows the people. He’s from there.
“So what did the people think about these lamps?” I asked.
“I don’t know. We were going to hire a vehicle and send two volunteers to go visit and do a survey.”
“That seems like overkill. Why not just call someone, or send a letter by bush taxi?”
The question matters, because the organization has bought 186 more lanterns it plans to send. After some discussion, they agreed to turn this into a learning opportunity. Instead of giving out all the lanterns in one place, they will split them among 5 villages, along with a cell phone that can be charged by the lantern. They will find a local person to act as a liaison and go around asking people this question about the lanterns:
“Describe one difference in your life since you got a lantern.”
The follow-up question will be: “How much of a difference is this (pick a number from 0 to 10)?”
In addition, the surveyor can note the gender and approximate age of each respondent. That makes it a 4-question survey relayed over the phone, and it will probably work because of its simplicity. They are not asking “Did you like the lamp?” directly, because we all know the right answer is always “yes,” and “yes” isn’t something you can learn from.
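Once the liaison relays responses over the phone, tallying them can be very simple. Here is a minimal sketch of what that might look like; the field names, sample answers, and scores are all illustrative, not real data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical responses relayed by the village liaison.
# Each record: the reported difference, a 0-10 score, gender, approximate age.
responses = [
    {"difference": "Children study after dark", "score": 9, "gender": "F", "age": 34},
    {"difference": "No more kerosene costs", "score": 7, "gender": "M", "age": 41},
    {"difference": "Phone stays charged", "score": 8, "gender": "F", "age": 27},
]

# Average reported difference on the 0-10 scale.
overall = mean(r["score"] for r in responses)

# Break the same scores down by gender, since the surveyor records it.
by_gender = defaultdict(list)
for r in responses:
    by_gender[r["gender"]].append(r["score"])
gender_means = {g: mean(scores) for g, scores in by_gender.items()}
```

Even a spreadsheet can do this; the point is that four simple questions yield numbers the organization can compare across the 5 villages before sending the remaining lanterns.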
For another project, this organization has been installing road signs to put villages on the map so that ambulances can find them. They plan to add a note to each sign for villagers, along the lines of: “Think this sign is useful? Need something else? FLASH us at [some number].”
Flashing is calling a number and hanging up before anyone answers. It’s a mobile-phone way to vote for free, and it gives the organization the information it needs to call people back and engage in a deeper conversation about what they need.
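On the organization’s side, a flash campaign boils down to a missed-call log: each unique number is one free vote, and the same list becomes the callback queue. A tiny sketch, with made-up phone numbers:

```python
# Hypothetical missed-call ("flash") log, in the order calls came in.
flashes = [
    "+231-555-0101",
    "+231-555-0102",
    "+231-555-0101",  # same villager flashing twice
    "+231-555-0103",
]

# Deduplicate while preserving order: each unique number is one vote,
# and the list doubles as a callback queue for deeper conversations.
callback_queue = list(dict.fromkeys(flashes))
vote_count = len(callback_queue)
```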
Medium complexity: Defining a new hospice program
Binaytara Foundation wants to start a new project in rural Nepal to care for people at the end of life. The needs of this population are complex, and it would be best if the foundation could gather many different perspectives on what “hospice care” means. They are using the GlobalGiving Storytelling project to ask 200 community members to answer the following with a story:
“Talk about a time when someone was dying. What would have made a difference in his or her experience?”
They will use our story analysis tools to parse the narratives by different groups, tease out which elements of a hospice program align best with what emerges, and ensure their program addresses those issues.
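The core of that kind of story analysis is grouping narratives by who told them and counting how often candidate program elements come up. A crude illustrative sketch, with invented stories and a keyword-matching approach far simpler than the real tools:

```python
from collections import Counter

# Hypothetical collected stories, each tagged with the storyteller's group.
stories = [
    {"group": "family caregivers",
     "text": "We needed pain medicine and someone to sit with her"},
    {"group": "family caregivers",
     "text": "Travel to the clinic was too far at the end"},
    {"group": "health workers",
     "text": "Families asked for pain relief we could not provide"},
]

# Candidate hospice-program elements to listen for in the narratives.
themes = ["pain", "travel", "clinic", "counsel"]

# Count how often each theme appears, per group of storytellers.
by_group = {}
for s in stories:
    counts = by_group.setdefault(s["group"], Counter())
    for t in themes:
        if t in s["text"].lower():
            counts[t] += 1
```

Comparing the counts across groups shows where perspectives diverge, for example if caregivers emphasize travel distance while health workers emphasize medicine supply.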
Complex: Building a peer-benchmarked survey about a program’s performance
Several organizations have come to us with variations along the same lines:
- “We want to start a discussion with young people and get a sense of their aspirations and fears in Maharashtra, India. We offer vocational training and educational support so they can realize those aspirations.” (Karuna Trust)
- “Our goal is to get a sense of where the alumni of our past mentorship programs are now, and of the programs we offer (mentoring, career building, etc.), what support do they still need?” (Ikamva Youth)
- “We work with artisans who have a wide range of personal goals. We want a system that will help them each define success for themselves (e.g. I want to earn enough to put one child through school) and then manage relationships against these goals. We also want to know whether our program changes made a difference.” (Center for Amazon Community Ecology)
- “We want to learn why girls frequently drop out of school in our area. We offer youth training in crafts and tailoring, and help them start businesses after they drop out. We gather feedback on what they need informally, but we need more rigor if we want to turn that into some kind of understanding of our impact.” (Gram Vikas)
- “We support 25 fellows pursuing startup-style ideas in Rwanda. We want to learn what could help them the most.” (These Numbers have Faces)
- “We work with sexual violence survivors in East Africa and want to improve our project around aftercare (healing and education) and prevention (finding the root causes of violence).” (Freely in Hope)
What I’ve been realizing is that these organizations need a group like GlobalGiving to play matchmaker and pair them with similar organizations that agree to collect data in a benchmarkable way. Karuna Trust, These Numbers have Faces, Gram Vikas, and the Center for Amazon Community Ecology all work with people who set personal goals and get relevant support from the organization. Each of these could benchmark against its peers. Freely in Hope and the Binaytara Foundation both work with populations in need of psychosocial-emotional support. They could benchmark their performance against a set of questions aimed at measuring emotional improvements, or they could benchmark their storytelling work against the 60,000 stories on similar topics we’ve already collected. And all organizations can benchmark against each other on a fundamental level, asking (as companies do), “How satisfied are the people we serve?”
Listen for Good is doing exactly that.
Today the possibilities are wide open. The limits are really just time, money, and the ability to find like-minded peer organizations with whom they can form cohorts. The technical ability to collect, transform, merge, and compare these relationship and performance surveys is the core of feedbackcommons.org – a site I’ve been building with Keystone Accountability in 2015. I believe it will finally make data interoperability a realistic goal.
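Once peer organizations ask the same question on the same scale, comparing them is mechanical. A minimal sketch of that merge-and-compare step, with invented organizations and scores (not feedbackcommons.org’s actual implementation):

```python
from statistics import mean

# Hypothetical 0-10 satisfaction scores from peer organizations that agreed
# to ask the same question: "How satisfied are the people we serve?"
cohort = {
    "Org A": [8, 9, 7, 8],
    "Org B": [6, 7, 7, 5],
    "Org C": [9, 8, 9, 10],
}

# Pool every response to get the cohort-wide benchmark.
cohort_mean = mean(s for scores in cohort.values() for s in scores)

# Each organization sees its own mean next to its distance from the benchmark.
benchmark = {
    org: {
        "own_mean": mean(scores),
        "vs_cohort": round(mean(scores) - cohort_mean, 2),
    }
    for org, scores in cohort.items()
}
```

The hard part is not this arithmetic but the interoperability before it: getting cohorts to agree on shared questions and scales so the columns line up at all.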
We will continue to post updates and write case studies from our Feedback Fund organizations as they listen, act, and learn.