Stay in the Lab: Testing as a Practice
My mother loves to tell a story about going to a fancy dinner with her brother – he was so excited to treat everyone that he ordered a bottle of expensive wine. The waiter poured a bit for him to taste, and my uncle looked at him quizzically: “Go on and fill it up – I paid for the whole thing!” My mother gently explained, “He wants you to taste it to see whether you like it.” We laugh about that now, but decades later, that anecdote still underscores how important testing is.
Think about this: my favorite uncle may not have actually liked the wine. A completely different bottle might have been more pleasing to his palate. Make no mistake: your audience is the same! What leads one group to convert won’t necessarily work for another. While you’re doing the work of digging into your data to create and test custom audiences and personas, your campaigns are perfect for testing as well.
So why aren’t more people testing consistently? Bottom line: many campaigns are created on tight timelines, and testing becomes an afterthought – a sure recipe for stagnation. Testing must be an underlying principle of your program, not a nice-to-have. Everyone on your team should be asking: what are we testing, what are we learning, and how is it changing our program? With that mindset, let’s get into the nuts and bolts of ensuring your testing is beneficial and leads to optimization, incremental program growth, acquisition, conversion, and a myriad of other goals.
Before we get started, let’s get one thing out of the way: a test that is well designed and fully executed does NOT fail. We hear that phrase a lot: “My test failed.” Here’s why that isn’t an accurate statement:
If you have a hypothesis, the only two outcomes are results that validate it or results that nullify it. A well designed and fully executed test may not give you the results you wanted or even expected, but it didn’t fail. Even an inconclusive test provides information (perhaps that particular element wasn’t significant enough to move the needle, or the test and control variants genuinely performed the same). It’s all in how you measure and report the results (more about that later!).
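To make “validate or nullify” concrete, here is a minimal sketch (in Python, with hypothetical send and conversion counts) of comparing a test variant against a control and deciding whether the difference is real or inconclusive. The function and numbers are illustrative, not tied to any particular platform’s reporting.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion rates of a control (A) and a test variant (B).

    Returns the z statistic and a two-sided p-value. A large p-value means
    the test is inconclusive -- it neither validated nor 'failed'; the
    difference you saw could easily be noise.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # two-sided
    return z, p_value

# Hypothetical numbers: control converted 520 of 10,000 sends; test 585 of 10,000.
z, p = two_proportion_z_test(520, 10_000, 585, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")   # small p (< 0.05) = evidence of a real difference
```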
Whether you execute the test with the tools available in your deployment platform or through a third-party vendor, you will still need to design a comprehensive, ongoing testing plan. One-off, ad hoc tests won’t add up to a clear direction for your program.
One of my clients wanted to completely revamp their email template. We worked together to create a comprehensive six-month plan, identifying the specific areas of the template we wanted to change and test head to head. We laid out the plan, prioritized the areas we felt had the most opportunity, tested fully, and analyzed the results. In the end, their new template increased conversions by 15-20%.
1. Identify goals
What do you want to happen? What will impact the overall goals of your program? For example, if the goal is to acquire 10X more subscribers, you may want to test copy or placement of the section devoted to sign-ups. Ensure that this is measurable – do you have access to the data you need for reporting and measurement?
2. Identify audience
I understand that many teams are reluctant to test, because email is a profitable channel, and no one wants to risk that. Here’s what I know: you DON’T have to test on everyone! Honestly, it wouldn’t even make sense: would you put a test campaign where clicks are the KPI of success in front of your most dormant audience? You might think it would be stacking the deck to test with your already active audience, but remember that incremental increases even with that audience indicate that you have a winning element.
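As one illustration of “you don’t have to test on everyone,” here is a small sketch of carving A/B cells out of an engaged segment only, while the rest of the list keeps receiving the business-as-usual campaign. The last_open_days field and the 90-day engagement window are hypothetical stand-ins for whatever engagement definition your data supports.

```python
import random

def carve_test_cells(subscribers, engaged_days=90, seed=42):
    """Carve A/B cells out of the engaged segment only; everyone else keeps
    getting the normal campaign. `last_open_days` is a hypothetical field."""
    engaged = [s for s in subscribers if s["last_open_days"] <= engaged_days]
    untouched = [s for s in subscribers if s["last_open_days"] > engaged_days]
    random.Random(seed).shuffle(engaged)              # reproducible random split
    mid = len(engaged) // 2
    return engaged[:mid], engaged[mid:], untouched    # (cell A, cell B, rest of list)

# Hypothetical usage with a toy list:
subscribers = [{"email": f"user{i}@example.com", "last_open_days": i % 200}
               for i in range(1_000)]
cell_a, cell_b, rest = carve_test_cells(subscribers)
print(len(cell_a), len(cell_b), len(rest))
```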
3. Focus to isolate the causal element
Multivariate testing is possible, but it should be carefully developed with an expert on your internal team or with a vendor partner. If you don’t have the resources to create and deploy multivariate tests, never fear! Testing one element at a time might take longer, but it is worth it to isolate specific elements; that way, you’ll know for sure what made the difference. For example, say you change both the subject line and the CTA in the same test. If one version gets more clicks, you can’t be sure the CTA deserves the credit: the subject line could be the real reason, because when more people open, more people see the content and more people click, regardless of the CTA.
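A quick worked example of why that matters: when both the subject line and the CTA change at once, raw clicks can mislead, but click-to-open rate (clicks divided by opens) looks only at people who actually saw the content. The numbers below are hypothetical.

```python
def click_to_open_rate(opens, clicks):
    """Click-to-open rate judges the content/CTA using only the people
    who actually opened -- it strips out the subject line's influence."""
    return clicks / opens if opens else 0.0

# Hypothetical results from a test that changed BOTH the subject line and the CTA:
variants = {
    "A": {"sends": 50_000, "opens": 9_000,  "clicks": 540},
    "B": {"sends": 50_000, "opens": 11_000, "clicks": 600},
}
for name, v in variants.items():
    print(f"{name}: open rate {v['opens'] / v['sends']:.1%}, "
          f"CTOR {click_to_open_rate(v['opens'], v['clicks']):.1%}")
# B wins on raw clicks (600 vs. 540), but A's CTOR is higher (6.0% vs. 5.5%):
# B's stronger subject line, not its CTA, likely drove the extra clicks.
```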
4. Run the test to the end
There are times when you’ll be tempted to end the test early. Resist the urge: stopping before you reach the sample size the test was designed for undermines its integrity. Imagine running 13 of 26 miles in a marathon but behaving as if you had completed the entire race. Your finishing time wouldn’t be accurate, even if you tried to extrapolate it from the miles you had run. Go the distance!
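One way to know where “the end” is before you start: estimate how many recipients each variant needs in order to detect the smallest lift you care about. The sketch below uses a standard two-proportion power calculation; the baseline rate, minimum lift, 95% confidence, and 80% power are assumptions you would replace with your own.

```python
from statistics import NormalDist

def required_sample_per_arm(p_base, min_lift, alpha=0.05, power=0.80):
    """Rough per-variant sample size needed to detect an absolute lift of
    `min_lift` over a baseline rate `p_base` -- i.e., how far 'the end' is."""
    p_test = p_base + min_lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)     # e.g. 1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)              # e.g. 0.84 for 80% power
    variance = p_base * (1 - p_base) + p_test * (1 - p_test)
    return ((z_alpha + z_beta) ** 2 * variance) / (min_lift ** 2)

# Hypothetical: baseline click rate of 3%, smallest lift worth acting on is 0.5 points.
n = required_sample_per_arm(0.03, 0.005)
print(f"~{n:,.0f} recipients per variant")   # roughly 19-20k per variant here
```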
5. Measure, report, and dig deep
When you have run your test to the end, identify areas where there may have been an anomaly or deviation from the testing plan. If you share reporting and analytics resources across multiple teams, be sure to get in their queue as soon as possible.
Be ready to dig into the numbers as well. I worked with a brand that tested two subject lines – one that specifically mentioned several products in the email, and another that was more vague. After the test, we noticed that the overall click percentages were nearly identical, but we kept looking (mostly because I was convinced we were missing something).
When we looked at the click percentages on each product, we found a BIG difference: in the version that named the products in the subject line, the clicks were concentrated on those explicitly mentioned products rather than spread across the other products in the email.
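Here is a tiny sketch of that kind of deep dive: the overall click totals for the two variants are nearly identical, but the share of clicks by product tells a different story. The product names and click counts are invented for illustration.

```python
from collections import Counter

def click_share(click_events):
    """Share of clicks by product -- overall totals can match while the
    distribution underneath is very different."""
    counts = Counter(click_events)
    total = sum(counts.values())
    return {product: round(n / total, 2) for product, n in counts.items()}

# Invented click logs: overall totals are nearly identical (165 vs. 167 clicks).
clicks_vague   = ["boots"] * 60 + ["jacket"] * 55 + ["scarf"] * 50   # vague subject line
clicks_product = ["boots"] * 80 + ["jacket"] * 65 + ["scarf"] * 22   # boots and jacket named in subject
print(click_share(clicks_vague))    # clicks spread fairly evenly
print(click_share(clicks_product))  # clicks concentrated on the named products
```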
6. Not a one-time thing
Remember that marketing itself is an iterative process – testing, learning, optimizing, testing again. Your audience changes and grows; your capabilities change and grow; there is seasonality at play. Testing is a forever pursuit!
7. Go with the data
I worked with a client who wanted to run a two-month time-of-day test. The same mailings were sent at three different times of day to audiences segmented by age. The results showed higher opens and clicks at one specific time, across all campaigns and audiences, and I was excited to share the findings with the client.
We reviewed the results and made the recommendation, which our client contact shared with the CEO. Imagine our surprise when the marketing manager came back and said “Well, we aren’t going to use the winning time – the CEO gets our competitors’ emails at X time, and wants to be in the inbox at that time instead.”
Make sure your higher-ups and final decision makers buy into the test and understand its implications and possibilities. Of course, this is easier in an organization where testing is a priority, and leadership makes the final call when the buck stops with them, but fight for your hard work: advocate for a disciplined testing environment and a focus on data-driven decisions.