Yes, that’s right. I ran an A/B test this year on my family’s Thanksgiving Day turkey(s). But before I get into the juicy details of how all of this applies directly to your digital fundraising program, let me ask you a few questions (and feel free to reply with your responses):
Roasted turkey, smoked turkey, or fried turkey?
Dry brine, wet brine, or no brine?
Stuffed with stuffing, stuffed with aromatics, or no stuffing at all?
These are questions I wrestle with every year around this time. And the problem is, I can’t just answer these questions based on my personal preferences, because I’m not the only one eating—I’ve got a whole house full of people to feed! And herein lies our first lesson: if you are cooking up your fundraising recipes based on what you like, or your boss likes, or your board likes—you’re doing it wrong. You are not your donor.
My experience has taught me that often when everyone in the boardroom likes the fundraising appeal, the appeal bombs. But when everyone hates it, it usually slays. So if you want to optimize your fundraising campaigns, you need to find a way to get feedback from your most important stakeholders, your donors.
Okay, before I go much further, the first question I should probably answer is the obvious “why?” question. It’s important to address this before you attempt any type of experiment. Why am I running an experiment on turkeys? Well, first, I love to cook, and I’m usually the one doing most of the cooking at home. Funny how I hate to bake, though. Baking is too exact: a teaspoon of this and a tablespoon of that. Cooking, on the other hand, seems to invite experimentation.
Second, I did have a specific research question I was trying to answer: which turkey recipe would maximize my conversion goal (which in this case was consumption)? It is important to point out the way this research question is worded. I’m asking, “which recipe does my family like the best?” My focus is not necessarily to understand the impact of each individual ingredient, but to find which combination of ingredients and/or cooking methods has the highest conversion rate. You will soon see how the focus on maximizing conversion influenced the design of this experiment.
For the turkey experiment, I started by controlling for the turkey itself. I selected three Butterball fresh young turkeys weighing between 16.1 and 16.3 pounds, so the bird would be as close to a constant as possible across treatments. (The dependent variable, remember, is consumption.) But if you recall my opening questions, there are several different “things” I can do to the turkey. Here are all the potential independent variables that I was excited to experiment with:
Now, in an ideal world, I would love to know the exact impact of each variable that contributes to my conversion goal of maximum consumption. But here’s the rub: if I wanted to isolate each variable, I would need to cook 24 turkeys! And my experiment design would look something like this:
Each cell in the tables above is an individual turkey! If I ran this experiment, I would understand which cooking method, combined with each treatment, produces maximum consumption. But here’s the wrinkle (aside from cooking 24 turkeys simultaneously): this experiment would not help me understand how the combination of multiple variables produces a completely different taste sensation that improves consumption. And considering that I am only cooking for 20 people, there is no way my sample size would be big enough to reach statistical validity, because every person would need to taste all 24 turkeys and then go back for seconds on their favorite. Not likely.
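If you want to see where a number like 24 comes from, the combinatorics are easy to sketch. The exact list of variables isn’t the point; as an illustration, assume four hypothetical variable groups (cooking method, brine, stuffing, injection) with the option counts below, and the full-factorial design is just their Cartesian product:

```python
from itertools import product

# Hypothetical variable groups chosen for illustration; any set of options
# multiplies the same way (3 x 2 x 2 x 2 = 24 cells).
cooking_method = ["roasted", "smoked", "fried"]
brine = ["dry brine", "wet brine"]
stuffing = ["stuffing", "none"]
injection = ["injected", "plain"]

# Every cell in the full-factorial design is one turkey to cook.
full_factorial = list(product(cooking_method, brine, stuffing, injection))
print(len(full_factorial))  # 24
```

Add one more three-option variable and you are at 72 turkeys, which is why full-factorial designs get out of hand so quickly for small audiences.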
Here’s my main point: if maximizing conversion is your goal, the best testing strategy is to start with a Radical Redesign.
A radical redesign experiment is less concerned with understanding which specific variable impacted results. Instead, it combines multiple variables into a completely different recipe, based on a specific hypothesis, to create the greatest potential relative difference between the control and the treatments.
I know that’s a lot of science mumbo jumbo, so let me show you what I mean by walking you through the experiment design I chose for my turkey test.
Research Question: Which turkey recipe will my family like the best and eat the most?
Participant Rules: Try one piece of each type of turkey, then go back for seconds and load up on whatever you like
Conversion Goal: Maximize turkey consumption
Validation Metrics: The turkey that was most consumed wins
Experiment Design: A/B/C test with multiple variable clusters
Instead of making 24 turkeys, I only made three. For the Control bird, I tried to replicate what I thought to be the “traditional” turkey recipe: roasted, stuffed with stuffing, no brine, no injection. For Treatment 1, I started by doing a bunch of initial research online in some of the barbecue forums. I read a number of articles on how different BBQ gurus prepare their smoked turkeys and tried to find the common formula that most of the highest-rated articles referenced: wet brined, stuffed with onion slices and herbs, smoked at 250 degrees, no injection. For Treatment 2, I did some research on the fried turkey nation. The consensus was: dry brined, fried at 325 degrees, and injected with Tony Chachere’s Cajun Butter.
The good news is that based on the experiment, we have a clear winner-winner, turkey dinner! Treatment 2, the dry-brined, unstuffed, Cajun-butter-injected fried turkey, had a 72% higher conversion rate than the Control and a 45% higher conversion rate than Treatment 1. Here is the data:
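In case “72% higher” is unfamiliar math, relative lift is just the difference between treatment and control, divided by the control. The consumption weights below are made up purely to illustrate the formula (they are not the actual experiment data, just numbers chosen to land near the reported lifts):

```python
def relative_lift(treatment, control):
    """Relative difference: how much better the treatment did vs. the control."""
    return (treatment - control) / control

# Hypothetical consumption weights in pounds, for illustration only.
control_lbs, t1_lbs, t2_lbs = 5.0, 5.93, 8.6

print(f"T2 vs Control: {relative_lift(t2_lbs, control_lbs):+.0%}")
print(f"T2 vs T1:      {relative_lift(t2_lbs, t1_lbs):+.0%}")
```

The same formula applies whether your “conversion” is pounds of turkey eaten or donations per visitor.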
The Key Takeaways (as it relates to your fundraising program)
1. It is good to test. One of the things that drives me crazy is all the CRO and testing gurus on the internet who habitually post things to try to scare people out of testing:
“You shouldn’t test unless you have at least 1,000 conversions per treatment.”
“Testing is only for the larger organizations with lots of traffic and massive donor lists.”
“You shouldn’t test unless you are working with an expert. Here—here’s my card.”
Hogwash! Just the mental exercise of thinking through a new idea to test puts the fundraiser in a donor-centric state of mind. Instead of thinking about how you are going to get everything done, how you are going to keep all the plates spinning, you are now doing what Peter Drucker said is the essence of marketing (or fundraising), “Viewing the entire organization through the lens of the customer (donor).” Even though you may not have the traffic volume or sample size to run a statistically validated A/B test, you can still optimize by putting a new idea in front of your donors and observing how they respond.
2. When you test, start with a radical redesign. Remember, the Radical Redesign is where you change multiple variables and come up with a brand-new recipe that you believe will improve the performance of your campaign. So, for example, if you change a bunch of things on your donation page based on some new insight or idea about what your donors want, and then test it against the existing donation page, you have the greatest chance of seeing a large relative difference between your control and your treatment.
The greater the relative difference in performance, the greater your chances of getting a result you can trust. This is super important if you don’t have a lot of traffic to your website. If you only change one small thing on your donation page, like the button color, the likelihood of it making a meaningful, noticeable difference is very slim. However, if you change the page design, the copy, the images, the form placement, and the call-to-action button all at the same time, the chances of seeing a double- or even triple-digit relative difference improve, which means you can validate an experiment with far less traffic.
3. But before you test, make sure you do your homework. Every experiment should begin with research. In the turkey experiment, I didn’t just start throwing ingredients together. I started by researching different recipes to find out what has worked for others in the past. Then I used that research to inform which recipe I would use in each of my treatments. Start your own research by looking at your data. If you are testing a donation page, go into Google Analytics and look at the traffic and metrics for that page. Look at the sources of traffic and the relative conversion rates by each source (note: if you don’t know what your conversion rates are by traffic source, then I probably need to write an article about that next).
Then go look at your fundraising campaign assets that are driving traffic from those various sources. What sorts of observations can you make? Is the messaging from each source similar or different? What sorts of observations can you make about the ongoing motivation coming from the best-performing traffic sources? What about the lowest-performing sources? You probably can begin to formulate several hypotheses about what you can change on your website just based on that analysis alone.
But don’t stop there—explore what has worked for others. At NextAfter we have documented over 4,000 digital fundraising experiments, and we have published every experiment we have permission to share openly in our research library. Oftentimes things that worked for others may work for you as well—but not always—that’s why it is important to test!
And finally, don’t underestimate the power of the gravy! At Thanksgiving in my house, it’s gravy on everything. But then the question is: giblet gravy or just the drippings? Cornstarch or flour? Milk or half and half? The good news is that I think I know how to find the right answer…but that’s an experiment for another day.
The Latest Experiments from the Lab
- How Centering an Ask Around a Specific Initiative Affects Donation Rates for Housefile Traffic During a GivingTuesday Campaign
- How Making the Organization the Actor of a GivingTuesday Donation Page Value Proposition Impacts Donor Conversion
- How Using a “Donation Interruptor” Affects Recurring Giving Rates During a GivingTuesday High-Urgency Campaign
- How Adding Specificity of Impact to a High Urgency Campaign Donation Page Impacts Donor Conversion Rate
- How Using a Faux Forward on a High Urgency Campaign Impacts Donor Conversion Rates
Some Conversations with Friends in the Industry…
- 3 Ways to Increase Giving in Tough Times [PODCAST with Bright Spot]
- There Are No Fundraising Experts [PODCAST with Mallory Erickson]
Want to Run an Experiment Like This in Your Online Fundraising?
In this free A/B testing guide, you’ll walk through A/B testing methodology, think critically about how to optimize your online fundraising, and develop your own A/B test that you can run right away.
The 8-step workbook will help you:
- Know what to test
- Set up an A/B test
- Discover A/B testing tools
- Run and document your tests
By the end of the workbook, you’ll be equipped to begin testing for your organization so that you can implement strategies that you know will reach more donors and raise more money.