A/B testing is one of the most important testing methods not only in e-commerce, but also in software development, user experience design and online marketing. In an A/B test, two variants (A and B) are tested against each other. Based on the results, pages and offers can be adjusted so that users interact with them more effectively; in e-commerce, for example, this can increase the click-through rate or generate more sales.
When conducting an A/B test, the original version (A) is compared with a modified version (B). The changes in version B can affect the entire website, parts of it, or individual elements such as links. Visitors are distributed randomly: one part receives version A (the original version), the other part receives version B (the test version).
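The random distribution described above is often implemented by hashing a user identifier, so each visitor lands in the same variant on every visit while traffic splits roughly 50/50. A minimal sketch (the function and experiment names are illustrative, not from any particular tool):

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "checkout-test") -> str:
    """Deterministically assign a user to variant A or B.

    Hashing the user ID together with an experiment name keeps each
    user in the same variant across visits while splitting traffic
    roughly in half. Names here are illustrative assumptions.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # an integer from 0 to 99
    return "A" if bucket < 50 else "B"
```

Because the assignment is deterministic, no per-user state needs to be stored: calling `assign_variant("user-42")` always returns the same variant.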
A/B tests make it possible to analyze, in direct comparison, how users react to individual elements. For example, they can show which variant of an online shop, or even just which colour of a call-to-action button, achieves better results (e.g. more purchases or newsletter sign-ups). Shop operators who want to redesign their website completely can likewise test two versions against each other.
When running an A/B test, it is important to define its objectives and the KPIs, such as the click-through or conversion rate, by which success will be measured. The sample size is also crucial: only with enough users per variant does the test yield a valid, statistically robust result on which further decisions can be based.
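How many users per variant are "enough" can be estimated before the test starts. A common approach is the normal-approximation formula for comparing two proportions; the sketch below hard-codes the usual z-scores for a 5% significance level and 80% power to stay dependency-free (the function name and defaults are illustrative assumptions):

```python
import math

def sample_size_per_variant(p_base: float, mde: float) -> int:
    """Approximate users needed per variant to detect an absolute
    lift `mde` over a baseline conversion rate `p_base`
    (two-sided test, alpha = 0.05, power = 0.80).
    """
    z_alpha = 1.959963984540054   # z-score for alpha/2 = 0.025
    z_power = 0.8416212335729143  # z-score for power = 0.80
    p_test = p_base + mde
    p_bar = (p_base + p_test) / 2  # pooled rate under H0
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_power * math.sqrt(p_base * (1 - p_base)
                                + p_test * (1 - p_test))) ** 2
         / mde ** 2)
    return math.ceil(n)
```

For instance, detecting an increase of a 5% baseline conversion rate by one percentage point requires on the order of 8,000 users in each variant, which illustrates why small effects demand large amounts of traffic.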
Significance of an A/B test
The probability that a result is not due to chance can be determined by calculating the statistical error probability in a so-called significance test. Various significance calculators are available online to simplify this calculation; most A/B testing tools also provide a significance calculation by default.
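What such calculators typically compute is a two-proportion z-test. A minimal, self-contained sketch (function name and example figures are illustrative):

```python
import math

def ab_significance(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test.

    Returns the z-score and the p-value: the probability of seeing a
    difference at least this large between variants A and B if there
    were in fact no real difference.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)     # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, 200 conversions out of 10,000 visitors for A against 250 out of 10,000 for B gives a p-value below 0.05, so the improvement in B would conventionally be called statistically significant; the same absolute difference on a tenth of the traffic would not be.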