If you’re looking to run tests as part of your website personalization strategy, choosing the right test setup and laying the groundwork properly are crucial. In this article, we explain what makes a good testing process – and what you should absolutely avoid.
The Do’s: How to Set Up a Test Successfully
1. Define a clear objective upfront
Before starting any test, define a specific goal: What is your hypothesis? What are you trying to achieve? A measurable KPI should also be established – for example, increasing the average order value (AOV) by at least 10% through a product recommendation in the shopping cart.
2. Segment your audience properly
Personalization is only effective when it reaches the right audience. With clear segmentation, your tests become more targeted and relevant – ensuring the right users see the right variations.
3. Plan test duration realistically and consider your traffic
Reliable results require sufficient data. Ending a test prematurely – or running it on too little traffic – leads to inconclusive outcomes. As a rule of thumb, a test should run for several days or even weeks, depending on the amount of traffic it receives.
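To make “sufficient data” concrete, here is a minimal sketch of the standard two-proportion sample-size approximation. The 3% baseline conversion rate and 10% relative lift are illustrative assumptions, not figures from a real test:

```python
from statistics import NormalDist

def sample_size_per_variant(p_base, lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a relative
    lift in a conversion rate (two-sided two-proportion z-test)."""
    p_new = p_base * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ≈ 1.96
    z_beta = NormalDist().inv_cdf(power)           # ≈ 0.84
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    n = (z_alpha + z_beta) ** 2 * variance / (p_new - p_base) ** 2
    return int(n) + 1

# A 3% baseline and a hoped-for 10% relative lift already require
# tens of thousands of visitors per variant.
n = sample_size_per_variant(0.03, 0.10)
```

Even this rough estimate makes the point: on a low-traffic page, detecting a modest lift can take weeks, which is why the planned duration has to follow from the numbers rather than from impatience.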
4. Use consistent data sources
Make sure your data models are aligned across systems. Metrics like user counts or conversion rates must be tracked consistently to avoid evaluation errors.
The Don’ts: What to Avoid in Testing
1. Testing without a control group
Statements like “Let’s show the new version to everyone and see if it works better” or “We’ll roll it out to 100% – we know what our users want” may sound tempting, but they break a fundamental rule of testing. Without a control group, you can’t measure impact accurately. You always need a baseline for comparison.
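One common way to guarantee a stable baseline is deterministic bucketing by user ID. This is a hedged sketch of the general technique, not our platform’s implementation; the salt string and the 50/50 split are illustrative:

```python
import hashlib

def assign_bucket(user_id, control_share=0.5, salt="aov-test-01"):
    """Deterministically assign a user to 'control' or 'variant'.
    Hashing (rather than picking randomly on each visit) keeps a
    user in the same bucket across sessions; the salt separates
    concurrent tests from one another."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket_value = int(digest[:8], 16) / 0xFFFFFFFF  # map to [0, 1]
    return "control" if bucket_value < control_share else "variant"
```

Because assignment is a pure function of the user ID and salt, a returning visitor always sees the same experience, and the control group stays intact for the full duration of the test.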
2. Testing multiple variants without a clear plan
Running four different versions simultaneously without knowing exactly what was changed where? That’s a fast track to chaos. Instead, keep it structured: one hypothesis, one test. Multiple variants are possible – but only with sufficient traffic. For lower traffic volumes, we recommend a step-by-step testing approach.
3. Making design changes without consulting your dev team
A common pitfall: launching a new design that unintentionally breaks something else on the site. Always review potential side effects and ensure your variation is technically sound before going live.
Real-World Mistakes
1. CTA not fully visible
One client tested a new call-to-action – unfortunately, it was hidden by a sticky banner on mobile. The test underperformed and had to be repeated with a revised test setup.
2. Changes made during the test
We’ve seen tests where changes were made mid-run. Unsurprisingly, this compromised the results and made them unusable.
3. Inconsistent impression tracking
A recurring issue: Impression tracking isn’t uniform. For example, Variant A is counted on page load, while Variant B is only tracked when the element becomes visible. This kind of inconsistency invalidates the entire test.
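A tiny calculation – all numbers hypothetical – shows how badly mixed counting rules distort a comparison even when both variants perform identically:

```python
# Assume both variants convert 5% of users who actually SEE the
# element, and only 60% of page loads ever scroll it into view.
page_loads = 10_000
viewable_share = 0.6
true_rate = 0.05

views = int(page_loads * viewable_share)  # 6,000 real impressions
conversions = int(views * true_rate)      # 300 conversions each

# Variant A: impression counted on page load (inflated denominator)
rate_a = conversions / page_loads
# Variant B: impression counted only when the element is visible
rate_b = conversions / views
```

Variant B appears to convert at 5% while Variant A shows only 3% – a two-thirds “uplift” produced entirely by the tracking setup. Both variants must use the same impression definition, or the test measures the tracking, not the users.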
Conclusion: Testing Is Not a Game of Chance
A strong test setup requires strategy, technical accuracy, and clear goals. When done right, testing can deliver powerful insights. Our platform helps you avoid common pitfalls – supporting you from hypothesis to evaluation.
About this article
This blog post was contributed by one of our Customer Success Managers. With direct insights from daily client work, our CSM team shares hands-on experience, practical learnings, and proven approaches from real-life optimization projects. Thanks to Patrick for this valuable contribution.