How removing a video affects email appeal revenue

Experiment ID: #8110

Focus on the Family

Focus on the Family is a global Christian ministry dedicated to helping families thrive. We provide help and resources for couples to build healthy marriages that reflect God's design, and for parents to raise their children according to morals and values grounded in biblical principles.

Experiment Summary

Timeframe: 11/27/2017 - 12/4/2017

The team at Focus on the Family had seen experiments from other NextAfter partners suggesting that video on donation forms could actually be harming conversion. With an appeal featuring video set to go out, they decided to test this effect to see whether their audience would respond the same way. This particular test ran against a segment of existing donors, although a non-donor segment was also tested.

Research Question

Will removing the video from the landing page (and thumbnail from the email) increase donation revenue?

MECLABS Conversion Factors Targeted

C = 4m + 3v + 2(i - f) - 2a ©

Copyright 2015, MECLABS

Design

C: Watch Video
T1: No Video

Results

Treatment Name    Revenue per Visitor    Relative Difference    Confidence    Average Gift
C: Watch Video    $0.10                  -                      -             $111.98
T1: No Video      $0.09                  -7.8%                  65.4%         $98.07

This experiment was validated using third-party testing tools. Based on those calculations, a statistically significant level of confidence was not met, so these experiment results are not valid.

Key Learnings

We saw a number of different factors and potential learnings at play in this test.

First, eliminating the softer CTA (“Watch Video”) in the email reduced the open-to-click rate by 34.9% in the non-video treatment. However, there wasn’t a corresponding reduction in donations, suggesting that the larger audience clicking the control was less motivated to give and more motivated to watch the video.

At first glance, when reviewing overall giving, we saw a 2.1% increase in conversions for the non-video treatment, but a 28.8% decrease in revenue. Neither met our 95% confidence threshold, although the revenue decrease had a confidence of 90.9%. For this reason, we scrutinized the data more closely to ensure that we didn’t miss a learning.
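The writeup doesn’t name the third-party tool used for validation, but conversion confidence figures like these are typically produced with a two-tailed, two-proportion z-test. Here is a minimal sketch of that calculation; the visitor and donation counts below are hypothetical, since the actual counts aren’t published in this report:

```python
from statistics import NormalDist

def two_proportion_confidence(conv_a, n_a, conv_b, n_b):
    """Two-tailed z-test for a difference in conversion rates.

    Returns the confidence level (1 - p-value) that the two
    rates differ. Counts passed in here are illustrative only.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return 1 - p_value

# Hypothetical counts: 50 donors of 5,000 visitors vs. 51 of 5,000.
confidence = two_proportion_confidence(50, 5000, 51, 5000)
print(f"{confidence:.1%}")  # a tiny lift on small counts stays far below 95%
```

With response counts this small, even a real difference in rates produces low confidence, which is exactly the validation problem this experiment ran into.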

After reviewing the individual transactions from both variants, we found that the gift amounts had significant outliers due to $1,000+ donations. Knowing when to filter data is a difficult challenge of testing and optimization—but since we have repeatedly seen that higher-value gifts are less likely to be motivated by an individual element of a broad-base ask, we typically exclude these from revenue validation. After making this adjustment, we see the smaller difference noted in the results table.
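The outlier adjustment described above can be sketched simply: drop gifts at or above a cutoff before computing the average. Only the $1,000 cutoff comes from the writeup; the gift amounts below are hypothetical:

```python
# Illustrative gift amounts in dollars (not the experiment's actual data).
control_gifts = [25, 50, 100, 75, 1500, 40, 2500, 60]
treatment_gifts = [30, 45, 120, 80, 55, 1000, 65]

OUTLIER_FLOOR = 1000  # exclude $1,000+ gifts from revenue validation

def filtered_average(gifts, floor=OUTLIER_FLOOR):
    """Average gift after excluding high-value outliers."""
    kept = [g for g in gifts if g < floor]
    return sum(kept) / len(kept)

print(f"Control:   ${filtered_average(control_gifts):.2f}")    # $58.33
print(f"Treatment: ${filtered_average(treatment_gifts):.2f}")  # $65.83
```

In these made-up numbers, a couple of large gifts in the control would otherwise dominate the comparison; filtering them reveals how the broad base of donors actually responded.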

Here are a few thoughts on this result, and why it requires additional testing to validate:

  1. Response sizes are too small
    Due to a low conversion rate in both target segments, there isn’t enough data to establish a normal distribution of gift amounts given their inherent variability. This is especially true of a non-donor segment that was tested with similar results, where the conversion rate was just 0.007%. Further testing, including during higher-response windows, may improve the actionability of results.
  2. The video wasn’t the only thing tested
    When the video thumbnail was removed from the email, it was replaced with a different image, which was also rotated. The additional call-to-action button was removed rather than modified. Copy was added in that area, but from a different testimonial, changing the value being conveyed. On the landing page, the same additional testimonial copy was used, but it appeared much later in the copy than the video had. When testing video as a medium, the core value proposition of the video should be represented in text—otherwise, you’re also testing between different value propositions, or the complete removal of part of the value proposition.
  3. There were significantly more outliers in the control segment
    While excluding $1,000+ gifts from both segments made a difference, it’s difficult to tell what the normal distribution of gifts is and whether segmentation played a factor in the results. Typically, the only ways to answer these questions are to measure historical distributions so abnormal ones can be identified, or to collect a larger number of results to balance out any small number of outliers.
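Point 1 above can be made concrete with the standard normal-approximation sample-size formula for a two-proportion test. The 0.007% baseline comes from the writeup; the 50% relative lift and the 95% significance / 80% power settings are illustrative assumptions:

```python
from statistics import NormalDist

def visitors_per_arm(base_rate, relative_lift, alpha=0.05, power=0.8):
    """Rough per-variant sample size for detecting a relative lift
    in conversion rate (normal-approximation formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_b = NormalDist().inv_cdf(power)          # statistical power
    delta = base_rate * relative_lift          # absolute difference to detect
    variance = 2 * base_rate * (1 - base_rate)
    return (z_a + z_b) ** 2 * variance / delta ** 2

# At the 0.007% conversion rate seen in the non-donor segment,
# detecting even a 50% relative lift requires an enormous audience:
print(f"{visitors_per_arm(0.00007, 0.5):,.0f} visitors per variant")
```

The result lands near a million visitors per variant under these assumptions, which illustrates why tests against very low-converting segments rarely validate.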

Due to the significant impact we’ve seen when removing video from appeal pages, we recommend further testing to ensure a clear and applicable learning for this audience.


Experiment Documented by...

Justin Beasley

Justin is a Data Analyst at NextAfter. If you have any questions about this experiment or would like additional details not discussed above, please feel free to contact him directly.