A/B Test Manager - Always Be Testing!
Don't underestimate the power of testing!
Whenever you create a digital advisor, you do so based on what you think will work best for your users. Our best practices can help you start off on the right foot, but you never know what's really working for your audience until you put your advisors to the test.
Humans are unpredictable; we act differently in different environments. That's why a digital advisor that generates awesome results in one environment might underperform in another.
We believe that A/B Testing should actually stand for "Always Be Testing." If you want to create successful advisors that perform well and have a positive impact on your bottom line, frequent testing is not just recommended: it is essential.
A/B testing (also known as split testing or bucket testing) is the process of running a series of randomized experiments that involve different variants of your advisor, then comparing the variants' performance to determine which is most effective.
This allows you to learn how certain elements of your advisors impact user behavior. Since your advisors are meant to aid human decision making, you must take your users and their opinions into account.
The SMARTASSISTANT A/B Test Manager allows you to test and optimize your digital advisors without hassle.
These are the steps to successful testing:
- Identify areas for improvement: Analytics is your friend; it will provide insight into the areas that can be improved. Inspect bounce rates, the positions in your advisor where users drop off, time spent, answers selected, etc.
- Define your goals: What do you want to improve and which metrics determine whether a variant was more successful than the original version?
- Create hypotheses: Hypotheses are assumptions that identify the elements that might impact the metrics you would like to improve.
Some examples include:
- adding images vs. no images
- changing the wording of questions and answers
- changing the decision paths and Q&A flow
- changing the design and layout
- changing button colors and positions
- changing the interaction and effects
- adjusting the number of questions and steps
- Set up variants: Make a copy of your advisor with the appropriate changes to effectively test your hypotheses.
- Kick off the experiment: Run your tests using the SMARTASSISTANT A/B Test Manager. Depending on the number of visitors, you might want to run tests for anywhere from a few days to a couple of weeks. Make sure that the sample size is large enough for your results to be statistically significant. Based on your setup, SMARTASSISTANT will randomly display different versions of your advisor to different visitors. The default distribution is 50:50; however, you're free to adjust this ratio via the A/B Test Manager dashboard.
- Analyze the results: After completing the experiment, go back to Analytics to understand which advisor performed better and whether the difference is statistically significant.
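To make the last step concrete: whether the difference between two conversion rates is statistically significant can be estimated with a standard two-proportion z-test. The sketch below is illustrative only, uses just the Python standard library, and is not part of the SMARTASSISTANT product; the visitor and conversion counts are made-up examples.

```python
from math import sqrt, erf

def z_test_two_proportions(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    # Convert the z-score to a two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Example: variant A converted 120 of 2400 visitors, variant B 156 of 2400
p = z_test_two_proportions(120, 2400, 156, 2400)
print(f"p-value: {p:.4f}")  # a value below 0.05 suggests a significant difference
```

A conventional threshold is p < 0.05, but the exact cutoff is a judgment call you should fix before starting the experiment, not after seeing the results.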
Tips for successful testing:
- Cultivate a culture of experimentation in your company. A/B Testing is also a good method for dealing with HiPPOs (Highest Paid Person's Opinion). Test out your HiPPO's suggestion with a subset of your customers and take an objective look at the results.
- Start soon. Based on multiple A/B Tests, we know that even small changes can have profound impacts on the advisor’s performance.
- Remember that some tests will show negative outcomes. Don't give up when a test fails, and don't fall in love with your hypothesis. A negative outcome is no reason to despair; in fact, scientists embrace negative results because they provide valuable insight for optimization.
- Keep the team small to avoid delays. Involving too many people in your experiment setup leads to long decision cycles and slows down your efforts. Keep your experimentation team small so you can move quickly and reap the benefits of your testing without delay.
- Look beyond your core metrics. Your core metrics are significant, but they may not tell the whole story. Looking at different metrics allows you to understand what is really happening. Analyze your core metrics for different customer segments, geographies, languages, and so on. Make sure that you stay abreast of external factors that may influence your test results, such as launched marketing campaigns, dispatched newsletters, etc.
- Test small changes. It's best to test one change at a time and give each test sufficient time to run.
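The earlier advice to collect a large enough sample can be made concrete with the standard two-proportion sample-size formula. This is a rough, stdlib-only illustration whose defaults assume a 5% significance level and 80% power; the function name and numbers are hypothetical, not a SMARTASSISTANT feature.

```python
from math import ceil, sqrt

def sample_size_per_variant(base_rate: float, lift: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Rough number of visitors per variant needed to detect a relative lift.

    Defaults correspond to a 5% two-sided significance level (z_alpha)
    and 80% statistical power (z_beta).
    """
    p1 = base_rate                  # baseline conversion rate
    p2 = base_rate * (1 + lift)     # conversion rate we hope to detect
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

# Detecting a 20% relative lift on a 5% baseline conversion rate
print(sample_size_per_variant(0.05, 0.20))
```

Note how quickly the required sample grows as the baseline rate or the expected lift shrinks; this is why low-traffic advisors need longer test runs.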
Can I Test More Than One Thing At a Time?
Yes. The SMARTASSISTANT A/B Test Manager supports running multivariate tests that involve multiple variants.
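Conceptually, a weighted multivariate split works like the sketch below: each visitor is deterministically mapped to one of several variants according to traffic weights, so a returning visitor keeps seeing the same variant. The function, IDs, and weights are hypothetical illustrations, not the actual SMARTASSISTANT implementation.

```python
import hashlib

def assign_variant(visitor_id: str, variants: list[str], weights: list[int]) -> str:
    """Deterministically map a visitor to a variant according to traffic weights.

    Hashing the visitor ID (instead of drawing a fresh random number) keeps
    the experience stable: the same visitor always lands in the same bucket.
    """
    assert len(variants) == len(weights)
    # Reduce the hash to a bucket in [0, sum(weights)) and walk the weights
    bucket = int(hashlib.sha256(visitor_id.encode()).hexdigest(), 16) % sum(weights)
    for variant, weight in zip(variants, weights):
        if bucket < weight:
            return variant
        bucket -= weight
    raise AssertionError("unreachable: bucket is always below sum(weights)")

# A 50:25:25 multivariate split across three advisor variants
print(assign_variant("visitor-42", ["original", "images", "short-flow"], [50, 25, 25]))
```

Keep in mind that each extra variant splits your traffic further, so multivariate tests need proportionally more visitors to reach significance.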
PS: Your success manager is happy to help you define a suitable strategy that allows you to get the most out of your advisors. Just reach out if you need support!