Conversion rate optimisation is a systematic practice. It identifies the actions of value within a digital product or website, implements the technology to measure them accurately, and designs experiments to improve the rate at which visitors take those actions.
That is the whole thing. Everything else - the A/B tests, the heatmaps, the redesigned checkout flows - is a method in service of that practice. CRO is not any one of those methods. It is the discipline that decides which method to use, when, and why.
The word "conversion" is misleading
The term conversion implies a binary event: someone converts or they do not. A form is submitted. A purchase is completed. A trial is started. This framing is useful but incomplete.
In practice, what matters to a business is rarely a single event. It is a sequence of behaviours that indicate intent, build commitment, and eventually lead to an outcome the business values. We call these actions of value - the moments where a visitor does something that matters.
An action of value might be a purchase. It might also be watching a product video to completion, returning to a pricing page for the third time, or completing the first step of an onboarding flow. These are signals. CRO works with the full set of signals, not just the final event.
What CRO is not
It is worth being specific about what CRO is not, because the term gets applied to a lot of adjacent work.
- CRO is not a website redesign. A redesign changes everything at once and makes it very difficult to attribute outcomes. CRO changes specific things, measures the result, and builds on what works.
- CRO is not just A/B testing. A/B testing is one experiment type within CRO. Multivariate tests, self-learning experiments, holdout tests, and sequential experiments all have their place. The right tool depends on the question.
- CRO is not a landing page audit. An audit produces a list of recommendations. CRO produces validated changes backed by data. The audit might be a useful starting point, but it is not the discipline.
- CRO is not UX design. UX design and CRO overlap significantly, but their goals are different. UX optimises for usability and satisfaction. CRO optimises for specific business outcomes. Sometimes these align. Sometimes they do not.
The two streams
CRO operates across two parallel streams. Both are required. Doing one without the other produces incomplete results.
Technology
The first stream is infrastructure. Before you can optimise anything, you need to know what is happening. This means implementing the analytics and tracking technology that gives you accurate, granular data about visitor behaviour.
For most businesses, this is the diagnostic prerequisite - the work that must happen before anything else. Client-side tracking (the JavaScript tags running in the browser) is increasingly unreliable. Browser privacy features, ad blockers, and consent frameworks mean that client-side data captures a fraction of actual behaviour. Server-side tracking - where events are recorded on the server before they reach the browser - is now the foundation of credible measurement.
Beyond tracking, the technology stream includes event architecture (defining what gets measured and how), experimentation platforms (the tools that run A/B tests and allocate traffic), and the data pipelines that connect these systems to your analytics.
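To make "defining what gets measured and how" concrete, here is a minimal sketch of an event definition as a typed, server-side record. The `ActionOfValue` schema and field names are illustrative assumptions, not the API of any particular platform:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# A hypothetical server-side event schema. Defining events as typed
# records, rather than ad-hoc tag-manager calls, keeps naming and
# required fields consistent across the whole pipeline.
@dataclass
class ActionOfValue:
    name: str                # e.g. "demo_requested", "video_completed"
    visitor_id: str          # server-issued identifier, not a browser cookie
    properties: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_record(self) -> dict:
        """Serialise for the downstream analytics pipeline."""
        return asdict(self)

event = ActionOfValue(
    name="demo_requested",
    visitor_id="v_1024",
    properties={"page": "/pricing", "plan": "team"},
)
record = event.to_record()
print(record["name"], record["properties"]["page"])
```

The design point is that the schema, not the browser, is the source of truth: every system in the pipeline consumes the same record shape.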
Design
The second stream is human. Why do people take action? Why do they hesitate? What creates friction in a funnel, and what removes it?
This stream draws on behavioural psychology, marketing strategy, and sales thinking. It is concerned with the motivations, anxieties, and decision-making patterns of real people interacting with your product or website. Friction mapping, funnel architecture, conversion copywriting, and interface design all live here.
The design stream produces the hypotheses. The technology stream tests them. Neither works without the other.
The full funnel
One of the most common mistakes in CRO is scoping too narrowly. A business asks: how do we improve the conversion rate on our landing page? This is a reasonable question. But it often misses the bigger picture.
The landing page is one point in a pipeline. Before the landing page, there is the ad, the search result, or the email that brought the visitor there. The quality and intent of that traffic matter as much as what happens on the page. If the ad is sending the wrong people, optimising the landing page will not fix the problem.
After the landing page, there is the form, the confirmation, the follow-up email, the sales call. For B2B businesses, the pipeline extends well beyond the website. Leads stall in CRM queues. Sales teams respond too slowly. The handoff from marketing to sales loses context.
CRO looks at the entire pipeline. The composition of visitors entering it. The behaviour at each stage. The outcomes they produce. Optimising just one point is useful but limited.
How CRO is measured
The core metric is straightforward: conversion rate. The number of visitors who take a defined action of value, divided by the total number of visitors. If 1,000 people visit a pricing page and 30 request a demo, the conversion rate is 3%.
But conversion rate alone can be misleading. A 10% relative lift on a page with 50 daily visitors is usually worth less than a 2% lift on a page with 10,000. Volume matters. Revenue per visitor matters. Lifetime value matters.
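The arithmetic behind that comparison is worth making explicit. A minimal sketch, using made-up traffic and lift figures, shows how a smaller relative lift on a high-volume page produces more absolute conversions:

```python
# Hypothetical figures: compare the absolute impact of a relative lift
# on a low-traffic page versus a high-traffic page.
def extra_conversions(daily_visitors, base_rate, relative_lift):
    """Additional daily conversions produced by a relative lift."""
    return daily_visitors * base_rate * relative_lift

# Page A: 50 visitors/day at a 3% base rate, 10% relative lift.
page_a = extra_conversions(50, 0.03, 0.10)      # 0.15 extra/day
# Page B: 10,000 visitors/day at a 3% base rate, 2% relative lift.
page_b = extra_conversions(10_000, 0.03, 0.02)  # 6.0 extra/day

print(f"Page A: {page_a:.2f} extra conversions/day")
print(f"Page B: {page_b:.2f} extra conversions/day")
```

Forty times the absolute impact from one fifth of the relative lift - which is why volume belongs in the prioritisation, not just the percentage.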
Sophisticated CRO tracks multiple metrics simultaneously. The primary metric is the action of value being optimised. Secondary metrics capture unintended effects - does improving the signup rate decrease the activation rate? Does reducing form fields increase submissions but decrease lead quality?
Every experiment should define its success criteria before it runs. What metric are we moving? By how much? Over what time period? With what statistical confidence? This is the difference between experimentation and guessing.
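The "with what statistical confidence" question usually reduces to a sample-size calculation before the test starts. Here is a minimal sketch using the standard two-proportion normal approximation; the 5% significance level and 80% power are conventional defaults, not requirements, and the base rate and lift are assumed figures:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(base_rate, mde_relative, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a relative lift (two-sided test).

    Standard two-proportion formula: the z-values for significance and
    power are combined, scaled by the variance of the two rates, and
    divided by the squared difference to be detected.
    """
    p1 = base_rate
    p2 = base_rate * (1 + mde_relative)
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Detecting a 20% relative lift on a 3% base rate:
n = sample_size_per_variant(0.03, 0.20)
print(n, "visitors per variant")
```

Running this number before the test is what makes "over what time period" answerable: divide the required sample by daily traffic, and you know whether the experiment is feasible at all.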
The experimentation cycle
CRO follows a cycle. The steps are not complicated, but skipping any of them reduces the value of the work.
- Observe. Look at the data. Where do people drop off? Where do they hesitate? What do the numbers actually say?
- Hypothesise. Form a specific hypothesis: if we change X, we expect Y to happen, because Z. The "because" is the important part - it forces you to articulate your reasoning.
- Design. Build the experiment. Define the variant, the control, the metric, the sample size, and the duration.
- Run. Execute the experiment with statistical rigour. Do not peek at results early. Do not stop the test when it looks good.
- Analyse. Evaluate the results. Did the hypothesis hold? What did we learn, regardless of whether the experiment won or lost?
- Implement or iterate. If the experiment won, ship the change. If it lost, the learning feeds the next hypothesis.
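The Run and Analyse steps above can be sketched with a standard two-proportion z-test, evaluated once the planned sample size has been reached rather than on a mid-test peek. The conversion counts are illustrative assumptions:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Illustrative results after the planned sample size was reached:
z, p = two_proportion_z_test(conv_a=300, n_a=10_000, conv_b=360, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")

# Decide against the pre-registered threshold, not a moving target.
significant = p < 0.05
```

Whatever the verdict, the analysis feeds the next hypothesis - a losing variant with a clear result is still evidence about your audience.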
This cycle runs continuously. Each round builds on the last. Over time, the body of evidence about what works for your specific audience, product, and market becomes substantial and valuable.
CRO is a practice, not a project. It does not end when the first experiment runs. It is an ongoing discipline of measurement, experimentation, and validated improvement - applied to the full pipeline, across both technology and human psychology.
The businesses that treat it this way see compounding returns. The ones that treat it as a one-off audit tend to see a brief bump followed by a return to baseline.