Like all marketers, fundraisers often work on pure instinct. How we solicit gifts and cultivate donors is often guided by assumptions, organization-specific mythology, and industry “best practices” rather than an evidence-based approach.
It’s easy to get trapped by our own assumptions. We humans aren’t very good at discerning what’s true from what’s not, and we often cling to assumptions even in the face of contrary evidence.
But there is a solution. We can use data and testing to constantly check our assumptions about what works with donors and make sure that what we think we know is actually true. That’s one of my chief roles at The Heritage Foundation’s Fundraising Innovation Lab.
When tests become lore
Even marketing tests can themselves become the stuff of myth. A decade ago, Heritage ran a two-year test on a portion of our donors who self-identified as social conservatives. We had an assumption about donor behavior and checked it in the marketplace—great!
The firm conclusion repeated around the office was that our existing practices were most effective. Unfortunately, the results of the test were never properly documented, which led to questions about whether this conclusion was real or simply reflected confirmation bias.
I dove into the data to find out what really happened.
Confirming fundraising lore
After the 2004 election, pundits argued that social conservatives had delivered the election to President Bush. Heritage hypothesized that we could drive more giving from this group by tailoring the message and tone of the fundraising messages we sent them.
Our traditional fundraising message emphasized fiscal issues and the role of government. Could fundraising language focused on questions of morality, family, and the like appeal more to social conservatives?
To test this hypothesis, we identified 70,000 self-described social conservatives among our existing donors. Over the next two years, half this group (the control group) received traditional Heritage messaging in the mail and online, and the other half (the treatment group) received a more social-conservative message. The social-conservative messages were crafted by an agency that had successfully raised funds from this audience before.
Donors in the treatment group, receiving social-conservative messaging, made 22% fewer gifts and generated 26% less revenue compared to those in the control group. While a handful of appeals during the two-year test weren't adjusted in tone (a potential validity threat), it's reasonable to conclude that adjusting our message and tone caused our donors to give less money less often.
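For readers who want to run a similar split test, a difference this size can be checked for statistical significance with a standard two-proportion z-test. The sketch below uses only Python's standard library; the gift counts are hypothetical (the article reports only percentage differences, not raw numbers), chosen so the treatment arm shows roughly 22% fewer gifts across 35,000 donors per arm.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test: do arms A and B have different gift rates?"""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled rate under the null hypothesis of no difference.
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 35,000 donors per arm, ~22% fewer gifts in treatment.
control_gifts, treatment_gifts = 10_000, 7_800
z, p = two_proportion_z(control_gifts, 35_000, treatment_gifts, 35_000)
print(f"z = {z:.1f}, p = {p:.3g}")
print(f"relative change in gift rate: {treatment_gifts / control_gifts - 1:+.0%}")
```

With samples this large, even modest differences in response rate are overwhelmingly significant; the hard part of a test like this is the two-year discipline of holding everything else constant, not the arithmetic.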
Lesson: brand matters
In simple terms, the conventional wisdom about the test was confirmed: our traditional language worked best. But what could explain this? Why wouldn’t talking to donors based on their interests boost fundraising?
One possibility is that our social-conservative appeals simply weren't very compelling. On the other hand, these messages were crafted by an agency that had done considerable work with similar audiences in the past.
Another compelling possibility: by adjusting our message and tone, we effectively went “off brand” with our social-conservative appeals. We had set an expectation among our members about the message and tone we would use, and the new approach violated that expectation. Our brand, in other words, exists in the mind of the donor.
Testing trumps guessing, and data trumps intuition
At the end of the day, what works in fundraising isn’t a matter of opinion or conventional wisdom. It’s a matter of fact. And testing in the marketplace is the best way to confirm whether our assumptions about what works are true.
Equally important, however, is recording your experiments to make sure the results are properly understood in the future. Given our predilection for confirmation bias, it's easy for a test result to be remembered as reinforcing the conventional wisdom even when it actually doesn't!