Website conversion experiments with Google Optimize
I’ve been working on various A/B tests for the past few years. Project managers always say “let’s do an A/B test!”, and then we start planning the implementation.
However, what does it take to perform a meaningful A/B Test?
As a growth engineer, I don’t only advise on the technical implementation; I also help design the experiment.
Figuring out the underlying WHY
What is our goal in running an experiment?
- Improving click-through rate
- Improving engagement
- Improving retention
- Improving revenue
- Improving user experience
When we start designing A/B tests, we must be able to state the goal of our experiment. For example: “We are trying to improve the conversion rate of the buy button on the product detail page.”
That will guide us to design the right experiment to solve the right problem.
Instead of coming up with ideas like “let’s try a blue button instead of a red one”, we will be able to form hypotheses that are more likely to “improve the conversion rate of the buy button on the product detail page.”
For example (but not limited to):
- Does the call to action message matter?
- Does the product image size matter? (Yes, sometimes other factors are important too.)
- Does the number of images on the product page matter?
and the list goes on.
I suggest the Product Owner pin down the problem and run a creative session; there might be ideas that surprise you! A bonus is that everyone on the team feels they are part of the product as well. We actually ran a few brainstorming sessions, and they turned out great.
Figuring out the priority
You might have tons of ideas generated by yourself (or by running a brainstorming session). But budget counts: how should we prioritize the ideas backlog?
Different teams will have a different approach.
A prioritization framework might be a good way to start.
Put simply: prioritize “Low Effort + High Impact” experiments first.
That works most of the time. However, it can mean we never get around to the “higher effort” bets that might bring longer-term benefits.
Balancing the two requires wise judgment.
Yet another framework might be “20% investment in bets and 80% implementation of low-hanging fruit”. Of course, the ratio depends on your team’s resources and current situation.
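As a toy illustration of “low effort + high impact first” (the ideas and the 1–5 scores below are made up, not from a real backlog), a simple scoring pass over the backlog might look like this:

```typescript
// A minimal sketch of an effort/impact scoring pass over an idea backlog.
// The ideas and the 1-5 scores are illustrative assumptions.
interface Idea {
  name: string;
  impact: number; // expected impact, 1 (low) to 5 (high)
  effort: number; // implementation effort, 1 (low) to 5 (high)
}

const backlog: Idea[] = [
  { name: 'Rewrite buy-button copy', impact: 3, effort: 1 },
  { name: 'Bigger product images', impact: 4, effort: 2 },
  { name: 'Redesign checkout flow', impact: 5, effort: 5 },
];

// Sort by impact per unit of effort, highest first.
const prioritized = [...backlog].sort(
  (a, b) => b.impact / b.effort - a.impact / a.effort
);

console.log(prioritized.map((idea) => idea.name));
// -> ['Rewrite buy-button copy', 'Bigger product images', 'Redesign checkout flow']
```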
Set up a control
At this stage, you should have a draft of the design in hand, whether it’s a graphic or just a piece of text to update.
Here’s an example below:
Variant A (Original)
Subscribe to Tech Newsletter
Variant B
Get the latest tech news in your inbox.
This is extracted from a real experiment, where we tried to update the call-to-action message for the subscription box. We used Google Optimize to update the message at the JS level: when the page loads, it distributes the target audience into test groups and displays either message A or message B.
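For illustration, here is a minimal sketch of what such a JS-level change inside the variant could look like. The selector, markup, and copy are assumptions for this sketch, not the real page:

```typescript
// Variant B: a hypothetical JS-level change that swaps the call-to-action copy.
// The selector is an assumption for this sketch, not the real markup.
const title = document.querySelector<HTMLElement>('.newsletter-box__title');

if (title) {
  title.textContent = 'Get the latest tech news in your inbox.';
}
```

Google Optimize can also apply this kind of text edit through its visual editor; the custom-code route is mainly useful when the change is more involved.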
In the test above, it is important that we keep a “control version”.
When we have several variants to test out, we should compare each of them against the control.
Do:
- Test 1 : Variant A vs Variant B
- Test 2 : Variant A vs Variant C
- Test 3 : Variant A vs Variant D
- Test 4 : Variant A vs Variant E
Don’t:
- Test 1 : Variant A vs Variant B
- Test 2 : Variant B vs Variant C
- Test 3 : Variant C vs Variant D
- Test 4 : Variant D vs Variant E
The main reason: think about other factors, e.g. when the test runs. Let’s say Variant C shows the best conversion in the “Don’t” series; you can never tell whether that result was affected by a traffic spike or a change in traffic source (which is extremely hard to determine).
You may also consider running everything in a single test:
- Test 1 : Variant A vs Variant B vs Variant C vs Variant D vs Variant E
However, splitting an experiment into many variants will require a longer time to reach a “significant” result.
How long should the test run?
Don’t just keep the test running until it shows a significant result.
The sample size of your test should be calculated before you run the experiment.
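As a rough sketch of how that calculation might look (the baseline rate, the minimum detectable lift, the 5% significance level, and the 80% power below are all assumptions, not numbers from a real test), the standard two-proportion formula gives a per-variant sample size:

```typescript
// Rough per-variant sample size for a conversion-rate test, using the
// standard two-proportion formula. All inputs below are assumptions.
const Z_ALPHA = 1.96; // two-sided significance level of 5%
const Z_BETA = 0.84;  // statistical power of 80%

function sampleSizePerVariant(baselineRate: number, relativeLift: number): number {
  const p1 = baselineRate;                      // current conversion rate
  const p2 = baselineRate * (1 + relativeLift); // smallest lift worth detecting
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  const effect = p2 - p1;
  return Math.ceil(((Z_ALPHA + Z_BETA) ** 2 * variance) / (effect * effect));
}

// Example: a 3% baseline conversion rate and a 10% relative lift to detect
// work out to roughly 50,000 visitors per variant.
console.log(sampleSizePerVariant(0.03, 0.1));
```

Run the test until that sample size is reached and then read the result, rather than stopping the moment the dashboard happens to show significance.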
Technical implementation
Different tests require different technical effort to set up. Some can be done purely on the front-end UI:
- Call-to-action message changes
- Simple page layout changes
- Showing/hiding elements
Some will require server-side effort:
- Cutting infinite scroll (to measure user engagement)
- Login page flow
and some might require infrastructure updates:
- Retention EDM sending cycle
Of course, for most experiments we don’t implement the “full feature”. We try to spend the least effort needed to test the hypothesis, such as hiding the element on the UI directly.
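For example, a front-end-only variant might simply hide the element under test instead of properly removing the feature. The selector below is a hypothetical example:

```typescript
// Lightweight variant: hide the element instead of building (or removing)
// the full feature. The selector is a hypothetical example.
const widget = document.querySelector<HTMLElement>('.related-products');

if (widget) {
  widget.style.display = 'none'; // the variant renders the page without it
}
```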
Find out actionable next steps
(To be continued)