Alison Carlman, GlobalGiving
This is the third article in a three-part series about GlobalGiving’s experiments testing the findings of The Narrative Project. Read the first article here and the second article here.
When the results of our first test of The Narrative Project email appeal started to come in, I hoped they were just a fluke. But soon the numbers grew to statistical significance: the Narrative Project language was performing significantly worse than our control language in dollars raised per email opened. I suspected it might just be a matter of the particular cause featured in the appeal, so we ran tests on entirely different topics. When that test copy also underperformed the control, I blamed my own writing. So in our final test we pitted language from another major nonprofit against phrases pulled directly from the Narrative Project User Guide. The Narrative Project language still lost to the control.
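The kind of comparison described above can be sketched as a simple two-sample significance check. Everything below is illustrative: the dollar figures are made up, and a permutation test is just one reasonable way to ask whether a difference in dollars-per-open is bigger than chance would produce; it is not a description of GlobalGiving's actual analysis.

```python
import random
import statistics

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test on the difference in means.

    Returns an approximate p-value: the fraction of random label
    shuffles whose mean difference is at least as extreme as the
    observed control-vs-variant difference.
    """
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = list(a) + list(b)
    n_a = len(a)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_iter

# Hypothetical dollars-raised-per-opened-email samples (invented numbers)
control = [1.20, 0.00, 2.50, 0.00, 1.10, 0.00, 3.00, 0.75]
variant = [0.50, 0.00, 1.00, 0.00, 0.25, 0.00, 0.90, 0.40]

p = permutation_test(control, variant)
```

A small p-value here would mean the gap between the two email versions is unlikely to be noise, which is the sense in which results "grow to statistical significance" as more opens accumulate.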
At the same time that we were running A/B tests, my GlobalGiving colleague was running experiments with the stories in our database: more than 50,000 reports written over the past 8 years by nonprofit leaders detailing their progress for their donors. While these emailed reports don't usually generate a high volume of repeat funding, we could still detect that reports whose language closely matched the Narrative Project themes generally underperformed other reports in a statistically significant way.
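The report analysis can be sketched as flagging each report for theme-related language and then comparing funding outcomes across the two groups. The keyword list, the sample reports, and the dollar amounts below are all invented stand-ins; the actual study presumably used a more careful coding of the themes than simple substring matching.

```python
import statistics

# Hypothetical stand-ins for theme language from the Narrative Project
# User Guide (not the actual coding scheme GlobalGiving used)
THEME_KEYWORDS = {"independence", "shared values", "partnership", "progress"}

def mentions_theme(text):
    """True if a report's text contains any theme keyword."""
    lowered = text.lower()
    return any(kw in lowered for kw in THEME_KEYWORDS)

# Hypothetical (report_text, dollars_raised) pairs
reports = [
    ("Our partnership with local leaders is driving progress.", 40.0),
    ("Thanks to you, 30 students received school meals.", 95.0),
    ("Shared values unite our community of supporters.", 25.0),
    ("The new well now serves 200 families every day.", 110.0),
]

themed = [amt for text, amt in reports if mentions_theme(text)]
unthemed = [amt for text, amt in reports if not mentions_theme(text)]

# Gap in average dollars raised: positive means themed reports underperform
gap = statistics.mean(unthemed) - statistics.mean(themed)
```

With real data, the same group comparison would be paired with a significance test before claiming the themed reports truly underperform.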
After all of our testing, we found no evidence that stories and reports containing the themes of independence, shared values, partnership, and progress drove any more funding via email and online donations than stories or reports that don't. In fact, they performed worse.