Testing and experimentation

What is testing and experimentation in digital journeys?

Testing and experimentation is the practice of running controlled changes to pages, flows, or elements and measuring which variants perform better against a defined metric. It turns optimisation into a structured, ongoing process rather than one-off redesigns.

What is A/B testing?

A/B testing compares two versions of an element or page (A and B) by splitting traffic between them and measuring which version drives higher conversions, engagement, or another chosen outcome. It is the most common and straightforward form of experimentation.
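In practice, the traffic split needs to be deterministic so a returning user always sees the same variant. A minimal sketch of one common approach, hashing a user ID together with an experiment name into a bucket (the function and experiment names here are illustrative, not from any particular tool):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to a variant.

    Hashing the experiment name together with the user ID means the same
    user always lands in the same bucket for a given experiment, while
    different experiments bucket independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Example: the assignment is stable across calls
variant = assign_variant("user-42", "pricing-headline-test")
```

Because the hash spreads users roughly evenly, a 50/50 split emerges over enough traffic without storing any per-user state.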

What is multivariate testing?

Multivariate testing evaluates multiple element variations and combinations at the same time—for example, different headlines, images, and CTAs together. It is useful when you want to understand how several changes interact, but it requires more traffic and careful planning.
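The combinatorial growth is easy to see by enumerating the variant space. A small illustration (the element values are invented for the example): three elements with two options each already yields eight combinations to split traffic across, which is why multivariate tests need far more visitors than a simple A/B test.

```python
from itertools import product

# Hypothetical element variations under test
headlines = ["Save time on reporting", "Cut reporting costs"]
images = ["team.png", "product.png"]
ctas = ["Start free trial", "Book a demo"]

# Every combination of headline x image x CTA: 2 * 2 * 2 = 8 variants
variants = list(product(headlines, images, ctas))
```

Each additional element or option multiplies the variant count, and every variant needs enough traffic on its own to reach a reliable read.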

Why is experimentation important for CRO and growth?

Experimentation replaces opinion and internal debate with data. It reveals which ideas and design choices genuinely move conversion rates, activation, or retention—and which do not—so teams can scale the winning patterns across journeys with confidence.

What makes a good experiment hypothesis?

A good hypothesis clearly states the problem, the proposed change, and the expected impact on a specific metric. For example: “Because users are dropping off on the pricing page, simplifying the layout and adding a ‘Talk to sales’ CTA will increase demo requests by 10%.”

Which areas are most often tested first?

Teams often start with high-impact touchpoints: headlines on key pages, call-to-action copy and placement, page layouts, form length and fields, pricing or packaging messages, and critical onboarding or sign-up screens.

How is statistical significance relevant in experiments?

Statistical significance indicates that the observed difference between variants is unlikely to be due to chance, commonly assessed against a pre-agreed threshold such as p < 0.05. It gives decision-makers confidence that the winning variation is genuinely better and worth rolling out more broadly.
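For conversion-rate experiments, significance is often checked with a two-proportion z-test. A minimal sketch using only the standard library (the figures in the example are invented):

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test for conversion rates.

    Returns the z-statistic and p-value for the difference between
    variant A (conv_a conversions out of n_a visitors) and variant B.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical result: 100/1000 conversions for A vs 130/1000 for B
z, p = two_proportion_z_test(100, 1000, 130, 1000)
```

With these illustrative numbers the p-value falls below 0.05, so the observed lift would be unlikely under pure chance; dedicated experimentation platforms apply the same idea with more safeguards (multiple-comparison corrections, sequential testing).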

How long should experiments typically run?

Experiments should run long enough to collect sufficient data across normal traffic and behaviour patterns. In practice, this often means at least one full business cycle (for example, one to two weeks or longer) depending on traffic volume and decision thresholds.
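How long "long enough" is can be estimated up front with a standard sample-size formula for comparing two proportions. A sketch under the usual normal-approximation assumptions (baseline rate, minimum detectable effect, and the example numbers are all illustrative):

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_base: float, mde_rel: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors needed per variant for a two-proportion test.

    p_base  -- baseline conversion rate (e.g. 0.05 for 5%)
    mde_rel -- minimum detectable relative lift (e.g. 0.10 for +10%)
    """
    p_var = p_base * (1 + mde_rel)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    n = (z_alpha + z_beta) ** 2 * variance / (p_var - p_base) ** 2
    return math.ceil(n)

# Hypothetical: 5% baseline, want to detect a +10% relative lift
n = sample_size_per_variant(0.05, 0.10)
```

Dividing the required sample per variant by daily traffic to the page gives a rough run length, which is then rounded up to whole business cycles to capture weekday/weekend patterns.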

How should experiment results be documented?

Each experiment should have a concise record of the hypothesis, variants, dates, target metrics, results, screenshots, and final decision. Storing this in a shared log or knowledge base builds a reusable library of learnings for future campaigns and product decisions.
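The fields above map naturally onto a structured record, which keeps the log queryable rather than free-form. A minimal sketch (the class name and example values are invented for illustration):

```python
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class ExperimentRecord:
    """One entry in a shared experiment log."""
    name: str
    hypothesis: str
    variants: list
    start: date
    end: date
    target_metric: str
    result: str = ""       # filled in after the test concludes
    decision: str = ""     # e.g. "ship B", "keep A", "rerun"
    screenshots: list = field(default_factory=list)

# Hypothetical entry
record = ExperimentRecord(
    name="pricing-page-cta",
    hypothesis="Adding a 'Talk to sales' CTA will lift demo requests by 10%",
    variants=["control", "talk-to-sales-cta"],
    start=date(2024, 3, 4),
    end=date(2024, 3, 18),
    target_metric="demo_requests",
)
```

Serialising each record (for example with `asdict`) makes it straightforward to store the log as JSON or rows in a shared knowledge base.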

Who should be involved in experimentation programmes?

Effective programmes involve collaboration between marketing, product, design, analytics, and engineering. Marketing and product shape ideas and goals, design crafts variants, engineering implements them, and analytics ensures rigorous measurement and interpretation.
