
The ABC of A/B Testing for OTT Platforms

Updated: Feb 11, 2022

A/B testing has always been around in different formats. In its simplest form, it presents two choices to a set of users or consumers and collects their feedback. Based on that feedback, we can decide what changes the product or service needs to better suit its users.
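The "two choices for a set of users" setup is usually implemented as deterministic bucketing, so each user always sees the same variant. A minimal sketch (the experiment name and the 50/50 split are illustrative assumptions, not anything prescribed above):

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "subscribe-button") -> str:
    """Deterministically bucket a user into variant 'A' or 'B'.

    Hashing the user id together with the experiment name keeps each
    user's variant stable across sessions while splitting traffic
    roughly 50/50 between the two choices.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

Because the assignment depends only on the user id and experiment name, a returning user never flips between variants mid-test, which keeps the collected feedback clean.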

One basic example is the size and format of the ‘Subscribe’ button in the application. The OTT platform can run A/B tests with different shapes and sizes of the ‘Subscribe’ button and then, based on the results over a given period, find out which format is more likely to persuade users to click it. This may sound too convenient: the format of a single button seems too small a factor, and other factors such as the content on the platform and the subscription fee make a difference too. But in reality, combined with the user’s info, content-consumption habits, and journey to that page, the format of the button can lead a user to click Subscribe.
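Deciding which button format "wins" over a given period comes down to comparing the two click-through rates. A common way to do that is a two-proportion z-test; the click counts below are purely illustrative:

```python
import math

def two_proportion_z(clicks_a: int, users_a: int,
                     clicks_b: int, users_b: int) -> float:
    """Z statistic for the difference between two click-through rates."""
    p_a, p_b = clicks_a / users_a, clicks_b / users_b
    # Pooled rate under the null hypothesis that both buttons perform equally.
    pooled = (clicks_a + clicks_b) / (users_a + users_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / users_a + 1 / users_b))
    return (p_b - p_a) / se

# Hypothetical results: button A got 48 clicks from 1,000 users,
# button B got 70 clicks from 1,000 users.
z = two_proportion_z(48, 1000, 70, 1000)
# |z| > 1.96 means the difference is significant at the 5% level.
```

With these illustrative numbers the test comes out significant, so the platform could adopt button B; with smaller samples the same observed rates might not clear the bar, which is why the test period matters.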

General usage of A/B testing

With more and more platforms using the same types of interfaces, it has become a necessity for product owners to define a likability metric based on A/B testing. Marketers and product owners want to answer questions like: What is most likely to make people buy a product? Or click a specific Watch or Subscribe button? Or simply register on the platform? A/B testing is now used to evaluate everything from website design to online offers, headlines, and product definitions. Even language has been a parameter in many A/B tests to determine likability.

Mistakes to avoid while doing A/B testing

A/B testing requires patience and time. With increasing competition and pressure, platform managers want quick results. In most cases, the tests are not given enough time to run, which distorts the results. Quite often platforms use real-time optimization testing and try to find the quickest path to a decision. Because A/B testing is based on randomisation, the results can vary with the time allocated for the test and its optimizations.
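The "patience and time" point can be made concrete with a standard sample-size estimate: the smaller the effect you want to detect, the more users each variant needs before stopping the test is justified. A sketch using the usual normal approximation (the 5%-to-6% subscribe-rate lift is an assumed example, not a figure from this article):

```python
import math

def users_per_variant(p_base: float, p_target: float,
                      z_alpha: float = 1.96, z_power: float = 0.8416) -> int:
    """Approximate users needed per variant to detect a shift from
    p_base to p_target at 5% significance with 80% power."""
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    n = (z_alpha + z_power) ** 2 * variance / (p_base - p_target) ** 2
    return math.ceil(n)

# Detecting a lift in subscribe rate from 5% to 6% needs roughly
# 8,000+ users in each variant, which takes real time to collect.
n = users_per_variant(0.05, 0.06)
```

Running the test for less time than it takes to reach that sample size is exactly the mistake described above: the randomisation has not had room to average out, so the "winner" may just be noise.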

The second most common issue is too much analysis, which brings on decision paralysis. Most software vendors provide hundreds of metrics for testing. In the majority of scenarios, that many metrics are not required; a more customised statistical model linked to the actual platform is more suitable. But in order not to under-utilise the service, many managers end up using a large number of metrics, thereby diluting the desired outcome.
