Didn’t I Leave Testing Behind In School?

March 25th, 2019

Testing is a crucial process that can make even successful direct marketing programs perform better. Using statistically sound methods to measure potential impact, you can gain valuable insights and the confidence to make changes without sacrificing results.

Why should you test?

Without constant change and updating, any direct response program can grow stale and underperform. Testing creates evidence-based levers you can pull to improve both campaign and overall program results.

Any new or radical idea can be a risk to a program, so it should be tested first. Learning from small but relevant sample groups is a cost-effective way to ensure a change will have a positive impact. And by the way, if you do not have at least one test that completely bombs each year, you are not pushing far enough outside of your comfort zone!

Conducting tests can also provide unexpected insights. Shortening a letter may depress results despite anecdotal evidence suggesting otherwise. Digging deeper into your data, you might find that your donor pool is largely composed of baby boomers who grew up writing letters before the advent of social media and texting.

That generation enjoys reading their mail, including learning about their favorite non-profit organizations. For them, shorter copy takes away from the experience. Information like that becomes valuable as your donor pool ages and its generational composition shifts.

For another example of unexpected lessons learned through testing (and a cautionary tale about reading results), read our recent blog, A/B or A+B testing?

Constructing a statistically valid test

Anything can be tested, including copy platforms, email subject lines, use of personalization, art, postage treatment, gift ask—the list is nearly endless. But a test is only as good as its design.

You want to ensure the results accurately reflect what will happen if the “winning” treatment is rolled out on a larger scale. Consider business goals, audiences, costs, and anything else that may complicate the test or skew the results.

The most straightforward design is an A/B test with only two groups: a control group that gets the standard treatment with well-understood results, and a test group that gets the change. Statistically speaking, the more responses you expect, the smaller your test groups can be. Donor house/renewal mailings, with their higher response rates, can use smaller test panels, while acquisition tests with lower response rates require larger panels. (MKT has an online statistical cell estimate generator available.) If the cells are randomized correctly, sized appropriately, and show a statistically significant difference in response, the increase or decrease can be attributed to the element being tested.
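To make the volume math concrete, here is a minimal sketch of the standard two-proportion sample-size calculation behind that kind of estimator; the function name, response rates, and thresholds are illustrative assumptions, not MKT’s actual calculator.

    # A minimal sketch of the math behind a cell-size estimate: names needed
    # per panel to detect a lift from a baseline response rate. The rates and
    # thresholds below are illustrative assumptions, not MKT's calculator.
    from math import ceil, sqrt
    from scipy.stats import norm

    def cell_size(p_control, p_test, alpha=0.05, power=0.80):
        """Names needed per panel for a two-sided two-proportion test."""
        z_a = norm.ppf(1 - alpha / 2)     # significance threshold
        z_b = norm.ppf(power)             # power threshold
        p_bar = (p_control + p_test) / 2  # pooled response rate
        numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                     + z_b * sqrt(p_control * (1 - p_control)
                                  + p_test * (1 - p_test))) ** 2
        return ceil(numerator / (p_control - p_test) ** 2)

    # House/renewal file: 6% baseline, detecting a lift to 7% -> smaller panels.
    print(cell_size(0.06, 0.07))    # ~9,500 names per panel
    # Acquisition: 0.8% baseline, detecting a lift to 1.0% -> larger panels.
    print(cell_size(0.008, 0.010))  # ~35,000 names per panel

Note how the lower acquisition response rate pushes the required panel from roughly 9,500 to roughly 35,000 names, even though the relative lift being detected is larger.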

When you want to test more than one element, you need a multivariate test design. Say you want to test a red envelope against the standard white while also reducing the envelope’s size. This can be done with four groups instead of two:

  1. White Envelope/Standard Size
  2. Red Envelope/Standard Size
  3. White Envelope/Smaller Size
  4. Red Envelope/Smaller Size

This design allows you to understand the impact of envelope color, the impact of size, and the interaction between the two. Results have to be read carefully to ensure the correct groups are compared, appropriate conclusions are drawn, and the test design yielded statistically significant differences in response.
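To show what that careful read looks like, here is a hedged sketch with invented counts for the four panels above; it separates the color effect, the size effect, and their interaction. A significance check on each comparison would still be needed before declaring a winner.

    # A sketch of reading a 2x2 test with hypothetical counts: 25,000 names
    # per panel, randomized. All response numbers are invented for illustration.
    mailed = 25_000
    responses = {
        ("white", "standard"): 1_250,  # 5.0% - control package
        ("red",   "standard"): 1_400,  # 5.6%
        ("white", "smaller"):  1_150,  # 4.6%
        ("red",   "smaller"):  1_475,  # 5.9%
    }
    rate = {cell: n / mailed for cell, n in responses.items()}

    # Main effect of color: average red rate minus average white rate.
    color = ((rate["red", "standard"] + rate["red", "smaller"]) / 2
             - (rate["white", "standard"] + rate["white", "smaller"]) / 2)
    # Main effect of size: average smaller rate minus average standard rate.
    size = ((rate["white", "smaller"] + rate["red", "smaller"]) / 2
            - (rate["white", "standard"] + rate["red", "standard"]) / 2)
    # Interaction: does the red lift change when the envelope shrinks?
    interaction = ((rate["red", "smaller"] - rate["white", "smaller"])
                   - (rate["red", "standard"] - rate["white", "standard"]))

    print(f"color {color:+.2%}, size {size:+.2%}, interaction {interaction:+.2%}")
    # -> color +0.95%, size -0.05%, interaction +0.70%: red helps, size alone
    #    is a wash, and red helps even more on the smaller envelope.

Comparing only groups 1 and 4 would muddle the two changes together; the four-cell layout is what lets each effect be isolated.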

Other questions call for a long-term, longitudinal test design. An example would be comparing the standard mailing sequence for new supporters against a welcome series that cultivates new donors with no asks, delaying opportunities to make a second gift. In this design, results would be tracked for a number of months to determine whether the cultivation delay produced the same retention rates and total revenue in renewal efforts.
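As a hedged illustration of that kind of read, the sketch below tracks cumulative revenue per acquired donor for the two tracks month by month; every dollar figure is a hypothetical placeholder, not real campaign data.

    # An illustrative longitudinal read: cumulative revenue per acquired donor,
    # by month, for the standard sequence vs. a delayed-ask welcome series.
    # All dollar figures are hypothetical placeholders, not real results.
    cumulative_revenue = {
        "standard": [4.1, 7.9, 10.8, 13.0, 15.1, 16.8,
                     18.2, 19.5, 20.6, 21.6, 22.4, 23.1],
        "welcome":  [0.0, 0.0, 2.9, 6.4, 9.8, 12.7,
                     15.4, 17.8, 20.0, 21.9, 23.6, 25.1],
    }

    for month in range(1, 13):
        std = cumulative_revenue["standard"][month - 1]
        wel = cumulative_revenue["welcome"][month - 1]
        note = "  <- welcome series ahead" if wel > std else ""
        print(f"month {month:2d}: standard ${std:5.2f}, welcome ${wel:5.2f}{note}")
    # The no-ask series trails early (the second gift is delayed) but can
    # overtake by year end if retention improves; a read taken in month 3
    # would have called the wrong winner.

The design choice here is the time horizon: only a multi-month view can reveal whether delayed asks trade short-term revenue for better long-term retention.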

Opportunities await!

Testing should be second nature for direct response professionals, providing an ongoing, systematic way to learn more about your supporters and gain insights. It can keep your program growing, agile, and evolving as the mission changes over time. With appropriate care in test design, you can experiment with your program fearlessly while learning continuously.

Want more details about successful test design, reading results, and acting on results? Be on the lookout for my next blog, A+ Statistical Testing.

Blog written by Ryan Byrd | Senior Data Analyst
