A/B Testing Glossary
A/B testing helps product managers rapidly grow and optimize digital experiences. Our A/B testing glossary breaks down terms and concepts commonly used when A/B testing across different channels – mobile, web, server, and OTT.
A/B testing: An experiment comparing two versions of a web page or mobile app to see which one performs better.
A/A testing: A form of A/B testing in which both the control and the variation are identical.
Allocation: The number of users (usually expressed as a percentage of your total user population) assigned to an A/B test or feature flag.
App downloads: The number of downloads a mobile app receives.
Beta: The software development phase following the alpha (and sometimes a canary or pilot) release. The feature is usually stable and complete but still likely to have bugs.
Beta release: The earliest stable release of a new feature or product, following internal QA and the alpha release.
Control (baseline): A variant of an experiment that is identical to the original app (no change), against which new variants are compared.
Click-through rate (CTR): The percentage of visitors or users who clicked on a link out of the page or screen's total visitors.
Conversion: The number of users who performed a desired action (e.g. a click, purchase, or registration).
Conversion rate: The percentage of conversions relative to total users.
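A quick worked example with illustrative numbers:

```python
# Conversion rate = conversions / total users, usually expressed as a percentage.
conversions = 120
total_users = 4_000

conversion_rate = conversions / total_users
print(f"Conversion rate: {conversion_rate:.1%}")  # 3.0%
```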
Conversion rate optimization (CRO): An iterative, data-driven process for increasing conversion rates.
Customer lifetime value (CLV): An estimate of the average revenue that a customer will generate over their lifespan as a customer.
Daily active users (DAU): The number of unique users who visited a website or mobile app on a given day.
Dark launch: The first release of a new feature's code, hidden behind a feature flag. A dark launch is the first step in a gradual, staged, or phased rollout.
Deep link: A link that leads directly to a specific screen of a mobile app other than the home screen.
Dynamic variable: A variable defined in your code that can be configured and adjusted at run-time to control your users' experiences, without code changes or an app store release.
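A minimal sketch of the idea; the `RemoteConfigClient` class and its API below are hypothetical, not a specific vendor SDK:

```python
# Hypothetical remote-config client; real SDKs fetch values from a server.
class RemoteConfigClient:
    def __init__(self, defaults):
        self._values = dict(defaults)  # server-delivered values would override these

    def get(self, name, default=None):
        return self._values.get(name, default)

config = RemoteConfigClient(defaults={"checkout_button_text": "Buy now"})

# Read at run-time: changing the value server-side requires no code change
# or app store release.
button_text = config.get("checkout_button_text", default="Buy now")
print(button_text)
```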
Event: A user action or behavior on a website or mobile app that is tracked by analytics.
Exclusive experiment: An A/B test or experiment whose participants are not simultaneously allotted to any other experiment.
Experiment: An A/B test.
Feature flag: A code switch or toggle that wraps a feature's code so that it can be turned on and off independently. Feature flags separate the code development process from the deployment process.
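A minimal sketch of gating a code path behind a flag; the flag store here is a plain dict, whereas in practice flags are fetched from a flagging service:

```python
FLAGS = {"new_checkout": False}  # flip to True to enable the feature

def is_enabled(flag_name: str) -> bool:
    return FLAGS.get(flag_name, False)

def run_legacy_checkout():
    print("legacy checkout")

def run_new_checkout():
    print("new checkout")

def checkout():
    # The new code path ships dark and stays off until the flag is turned on.
    if is_enabled("new_checkout"):
        run_new_checkout()
    else:
        run_legacy_checkout()

checkout()  # prints "legacy checkout" until the flag is flipped
```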
Feature rollout: The software development process of gradually introducing a new feature to a set of users.
Feature variable: A variable defined in your code that can be configured and adjusted at run-time to control and personalize your users' experiences, without code changes or an app store release.
Funnel: An ordered sequence of steps a customer or user takes toward a key action.
Funnel conversion rate: The percentage of users who progressed through all steps of a funnel, out of all users who entered the funnel by converting on its first step.
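For example, with illustrative step counts:

```python
# Funnel conversion rate = users completing the last step / users entering the first.
funnel = [("viewed_product", 5_000), ("added_to_cart", 1_500), ("purchased", 450)]

entered, completed = funnel[0][1], funnel[-1][1]
print(f"Funnel conversion rate: {completed / entered:.1%}")  # 9.0%

# Step-to-step rates show where users drop off.
for (step, n), (next_step, next_n) in zip(funnel, funnel[1:]):
    print(f"{step} -> {next_step}: {next_n / n:.1%}")
```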
Goal: An event or combination of events that measures the success of an experiment.
Heatmap: A visual representation of where users interacted with a website or mobile app, using colors to indicate areas of higher engagement.
iOS A/B testing: The process of A/B testing an iOS app despite the App Store approval process.
iOS feature rollout: The process of gradually rolling out new features in an iOS app despite the App Store approval process.
Landing page: The first webpage that users land on.
Landing page optimization: The process of improving elements on a landing page to increase conversions.
Lifetime value (LTV): An estimate of the average revenue that a customer will generate over their lifespan as a customer. A synonym of customer lifetime value (CLV).
Lift: The percentage by which a variant performs better or worse than the control in an experiment or A/B test.
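For example, with illustrative rates:

```python
# Lift = (variant rate - control rate) / control rate.
control_rate = 0.040  # 4.0% conversion in the control
variant_rate = 0.046  # 4.6% conversion in the variant

lift = (variant_rate - control_rate) / control_rate
print(f"Lift: {lift:+.1%}")  # +15.0%
```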
Monthly active users (MAU): The number of unique users visiting a website or mobile app in a given month.
DAU/MAU ratio: The ratio of daily active users to monthly active users; a common measure of stickiness.
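For example, an app with 200,000 daily actives and 1,000,000 monthly actives has 20% stickiness:

```python
dau = 200_000
mau = 1_000_000
print(f"Stickiness (DAU/MAU): {dau / mau:.0%}")  # 20%
```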
Mobile analytics: The measurement and analysis of data to inform an understanding of user behavior across a mobile app.
Mobile A/B testing: The practice of A/B testing mobile apps despite the App Store and Play Store approval processes.
Mobile app optimization: The process of using controlled experimentation to improve a mobile app's ability to drive business goals.
Mobile retention: The rate at which a mobile app retains its users over time.
Multivariate testing: A variation of A/B testing in which multiple changes are tested in the same experiment.
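As a sketch, a full-factorial multivariate test over two hypothetical page elements yields four combinations to compare simultaneously:

```python
from itertools import product

headlines = ["Save time", "Save money"]  # hypothetical element 1
button_colors = ["green", "blue"]        # hypothetical element 2

# Each user is assigned one combination; 2 x 2 = 4 variants in a single experiment.
for variant in product(headlines, button_colors):
    print(variant)
```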
Multivariate testing vs. A/B testing: The key differences between A/B testing (comparing two versions of a page or app) and multivariate testing (testing multiple changes at once), and when to use each approach.
Onboarding: The start of the customer's experience with a digital product, often the first session or first-run experience.
OTT testing: The process of A/B testing a video streaming app made for a TV viewing device.
Participant: A user in an experiment's population who has seen a variant.
Population: The users who meet the criteria to be entered into an experiment or A/B test.
Purchase funnel: The path taken by a potential customer through a website or app as they move toward becoming a customer.
Python A/B testing: The process of A/B testing on a server running Python.
React Native A/B testing: The process of A/B testing a mobile app developed with the React Native framework.
Sales funnel: The path taken by a potential customer through a website or app as they move toward becoming a customer. Often used interchangeably with purchase funnel.
Search engine optimization (SEO): The process of enhancing webpages to rank higher in search engine results.
Server-side testing: A form of experimentation in which, unlike client-side testing, the variations of a test are rendered directly on the web server before being delivered to the client.
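One common server-side pattern is deterministic, hash-based bucketing, so a returning user always gets the same variation; a minimal sketch, not a specific vendor's implementation:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")):
    # Hash the user and experiment IDs for a sticky, deterministic assignment.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The server picks the variation and renders it before responding to the client.
print(assign_variant("user-123", "homepage_hero"))
```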
Shopping cart abandonment: When a potential customer starts the checkout process for an online order but drops out before completing the purchase.
Split testing: A strategy for conducting controlled, randomized experiments with the goal of improving a conversion metric on a website or mobile app. A synonym of A/B testing.
Squeeze page: A landing page specifically designed to 'squeeze' email addresses out of visitors and prospects.
Statistical significance: The likelihood that the difference in conversion rates between a given variation and the baseline is not due to random chance (i.e. that the null hypothesis can be rejected).
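Significance for a conversion-rate comparison is often assessed with a two-proportion z-test; below is a minimal sketch with illustrative numbers (real tools also handle sample-size planning and repeated looks at the data):

```python
from math import erfc, sqrt

def two_sided_p_value(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal tail

p = two_sided_p_value(conv_a=400, n_a=10_000, conv_b=460, n_b=10_000)
print(f"p = {p:.3f}")  # below 0.05 -> significant at the 95% confidence level
```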
Segmentation: The process of dividing your website or mobile app visitors based on specific criteria, such as demographics, geography, or user behavior.
Type I error: A statistics term for the error made when a conclusive winner is declared although the test is actually inconclusive (a false positive).
Type II error: A statistics term for the error made when no conclusive winner is declared between a control and a variation although there actually is one (a false negative).
Unique selling proposition (USP): The competitive differentiators that a business has over its competitors.
User acceptance testing: A method of evaluating a website or app's readiness for release by testing it with real users who are part of your target audience.
User research: A method of getting qualitative user feedback via user interviews.
User flow: The path taken by a prototypical user on a website or app to complete a task.
User journey: A diagram of the entire user experience, starting with initial contact or discovery and continuing through engagement into long-term loyalty and advocacy.
Value proposition: The essence of the benefit that your product or service provides to the customer.
Viewable impression: A standard measure of ad viewability, defined by the IAB as an ad that appears at least 50% on screen for more than one second.
Web analytics: The measurement and analysis of data to inform an understanding of user behavior across web pages.
Website optimization: The process of using controlled experimentation to improve a website's ability to drive business goals.
Website personalization: The process of creating customized experiences for visitors to a website.