

Are you shooting in the dark with your A/B Testing efforts?

In today's data-driven world, businesses are constantly seeking ways to optimize their marketing strategies and improve user experiences. A/B testing, also known as split testing, has emerged as a powerful technique to achieve these goals.

Winning on KPIs that matter via A/B Testing

Almost all startups and data-driven organizations use A/B testing to improve some aspect of their business.

Netflix has published how its extensive use of A/B testing across user interfaces, content recommendations, and artwork led to significant improvements in user satisfaction, content consumption, engagement, and retention. Airbnb, meanwhile, used A/B testing to experiment with different variations of its booking flow. By testing different layouts, messaging, and trust-building elements, it achieved a 30% increase in booking conversion rates, leading to significant revenue growth.

Isn’t that awesome? How can we achieve similar results in our own business with successful A/B testing?

What is A/B Testing?

Measurement science has long used the control-vs-test design method, which employs the ceteris paribus principle: keep all variables other than the test variable constant, so as to isolate the effect of that one variable. A/B testing leverages the same scientific method to compare two or more variations of a marketing or product element and determine which one performs better against a specific goal for a website or an app.

It involves dividing a target audience into different groups (control vs. test groups) and exposing them to different versions of a webpage, email, advertisement, or other marketing asset. By measuring the performance of each variation, businesses can make data-driven decisions, identify the more effective option, and optimize their strategies.
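To make that comparison rigorous, the difference between the two groups should be tested for statistical significance rather than eyeballed. Here is a minimal sketch of a two-proportion z-test for comparing conversion rates; the function name and the visitor/conversion numbers are hypothetical, purely for illustration:

```python
from math import sqrt, erf

def ab_test_significance(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: did variant B convert
    significantly differently from control A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # normal approx.
    return p_a, p_b, z, p_value

# Hypothetical campaign: 10,000 visitors per group
p_a, p_b, z, p = ab_test_significance(conv_a=480, n_a=10_000,
                                      conv_b=560, n_b=10_000)
print(f"control {p_a:.1%}, variant {p_b:.1%}, z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the lift (4.8% to 5.6%) clears the conventional p < 0.05 bar; with smaller samples the same lift often would not, which is why group sizes matter as much as the creative being tested.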

A/B testing can hence be valuable for businesses of all sizes, from startups to large enterprises, that strive to enhance user experiences, increase conversion rates, and improve overall performance.

Pitfalls and challenges of A/B testing

However, we see three big challenges most organizations deal with, which we like to call the D.O.G.:

1. Deterministic and yet not always optimal: One of the biggest challenges of A/B testing is its inability to give insight into the whys. Why did one marketing message or feature have a bigger impact? What is the underlying consumer need? Why did pricing A fare better than pricing B? The very deterministic nature of A/B testing, a boon from an ROI perspective, becomes limiting when trying to understand the subject in totality and draw the inferences that would help narrow down the number of test options.

“A mistake that some companies make is to start moving a bunch of levers around without clear planning upfront for what they're trying to optimize—and what will be impacted by those changes.” Dan Siroker, A/B Testing: The Most Powerful Way to Turn Clicks Into Customers

2. Over-reliance on testing: Having access to a user database and being able to conduct an A/B test without buying data leads organizations to run an insane number of A/B tests. Over-reliance on A/B testing can delay speed-to-market and increase the cost and time of deploying key initiatives. In many cases, it becomes a safety net and a way to avoid critical problem-solving.

3. GIGO (garbage in, garbage out): Small changes in the input can have a big impact on the output! We can be confident we chose the best among the options tested, but how do we know that another option, one we never tested, could not have delivered similar KPIs with higher efficiency? In the absence of clear hypotheses and underlying learnings, A/B testing can be like shooting in the dark.

Prioritize and Power Up A/B Testing

How then can we prioritize and design A/B testing in the organization to be more effective?

How to prioritize: In critical areas such as pricing and marketing communications, it is useful to have a perspective on the underlying perceptions and belief systems among customers. This can help formulate sharper hypotheses to guide and structure A/B testing efforts, in terms of both the inputs to test and the outputs we seek to impact.

A health-tech start-up with a premium offering had tested multiple pricing options, and while it gained some insight into which price tested better, it still wasn’t sure it had hit the optimal pricing that would help it grow the market, and the results felt incremental.

We conducted qualitative research among its customers and potential users to understand their reactions to a few pack-price offers, as well as the overall triggers and barriers to adopting its programs. As a result, we could recommend three types of product bundles to test, each with an approximate pricing range. The start-up could then focus its A/B testing on a small set of clear pricing tests, and a clear pricing strategy emerged. It is now looking to scale up to newer geographies with its validated product-pricing bundles.

Not everyone may have formal research to lean on, and there may be multiple hypotheses competing for attention. The VICE framework (Velocity, Impact, Confidence, Ease) can help prioritize which hypotheses to test first.
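In practice, VICE prioritization can be as simple as scoring each hypothesis on the four dimensions and ranking. A minimal sketch follows; the hypotheses, scores, and the choice of a plain sum as the ranking key are all illustrative assumptions (a weighted sum or product works just as well if one dimension matters more to you):

```python
# Score each hypothesis 1-5 on Velocity, Impact, Confidence, Ease.
# All names and scores below are hypothetical examples.
hypotheses = {
    # name: (velocity, impact, confidence, ease)
    "Simplify checkout to one page": (4, 5, 3, 2),
    "Add trust badges near CTA":     (5, 3, 4, 5),
    "Rewrite pricing-page headline": (5, 2, 3, 5),
}

# Rank by total VICE score, highest first.
ranked = sorted(hypotheses.items(), key=lambda kv: sum(kv[1]), reverse=True)
for name, (v, i, c, e) in ranked:
    print(f"{sum((v, i, c, e)):>2}  V={v} I={i} C={c} E={e}  {name}")
```

The value is less in the arithmetic than in forcing the team to state, hypothesis by hypothesis, how fast it can be tested, how big the upside is, how confident they are, and how easy it is to build.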

Powering up your A/B testing:

Often, we fall into the trap of incremental improvements with A/B testing.

“If you want to increase your success rate, double your failure rate.” Thomas J. Watson, former Chairman and CEO of IBM

Very often, to get disruptive results, one may need to take a step back, explore the subject differently, or take an alternative approach to improve results.

An e-commerce startup we worked with had spent considerable money on new-user acquisition and had tested multiple variations of marketing messages. It realized the need to pause and reboot.

We worked together to understand its consumer personas, the reasons consumers buy into the platform, and the life context of those consumers, so as to understand behavior patterns and current belief systems. Leveraging key behavioral-economics principles, we could recommend fundamental shifts that completely reshaped their communication efforts, versus the incremental improvements they had been getting via A/B testing. A simple three-step approach can structure this work:

1. LIST: What are the specific hypotheses we need to test to improve output KPI X? (Define the key KPI and 1-2 other KPIs to watch, e.g., new users vs. GMV vs. number of transactions vs. AOV.)

2. POWER: If struggling with specific hypotheses, ask yourself: do we understand enough about the subject? Would some pre-work or prior learnings be useful to define clear hypotheses? For a tricky or critical output KPI, should you consider a more disruptive approach, powered by brainstorming or an outside-in lens?

3. TRANSLATE: Translating hypotheses into clear inputs or test options also requires some work. Does it make sense to test multiple single-variable variations, or a few different copies with multiple differences? What is the right representative audience, and how long should the test run? Each design element can influence the test result.
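The "how long should the test run" question in TRANSLATE can be answered up front with a standard sample-size calculation. Here is a sketch using the normal approximation at the conventional 5% significance level and 80% power; the baseline rate, target lift, and daily-traffic figure are hypothetical inputs:

```python
from math import ceil, sqrt

def sample_size_per_variant(base_rate, relative_lift):
    """Approximate visitors needed per variant to detect a given relative
    lift over base_rate (two-sided test, alpha = 0.05, power = 0.80)."""
    z_alpha = 1.96   # critical z for alpha = 0.05, two-sided
    z_beta = 0.84    # z for 80% power
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

# Hypothetical: 5% baseline conversion, want to detect a 10% relative lift
n = sample_size_per_variant(base_rate=0.05, relative_lift=0.10)
days = n * 2 / 20_000  # assuming ~20,000 eligible visitors/day across both arms
print(f"{n} visitors per variant; roughly {days:.1f} days at 20k visitors/day")
```

Running this before launch exposes a common trap: small lifts on small baselines need tens of thousands of visitors per arm, so a test that "looked done" after a weekend was almost certainly underpowered.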

When there is a clear strategic intent, and a deeper appreciation of the consumer and the business levers at play leading up to the hypotheses to test, we can magnify the power of A/B testing. Not only do we save a lot of time and effort versus iterative testing, it also lets us jump-shift impact on KPIs. So, instead of shooting in the dark and just launching into A/B testing, spend time building and prioritizing the hypotheses to test!


Want to elevate your brand with us?

Let's have a conversation and take your business to new heights!
