A/B Testing
Comparing two versions of something to determine which performs better based on data.
Definition
A/B testing (split testing) shows different versions of a page, feature, or experience to different user groups and measures which performs better on a specific metric. Version A (control) is the current experience; Version B (variant) is the change. A test of statistical significance quantifies how likely the observed difference is to be due to random chance rather than a real effect.
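As a concrete illustration, the standard significance check for a conversion-rate A/B test is a two-proportion z-test. The sketch below uses only the Python standard library; the function name and the sample numbers (200 conversions out of 10,000 for A, 260 out of 10,000 for B) are hypothetical.

```python
from math import sqrt, erf

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = ab_significance(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 → significant at the 95% level
```

With these numbers (2.0% vs 2.6% conversion), the p-value comes out below 0.05, so the lift would be called significant at the conventional 95% threshold.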
Beyond simple A/B tests, multivariate testing changes multiple elements simultaneously, and multi-armed bandit algorithms dynamically allocate traffic to winning variants. Tools like Optimizely and VWO make testing accessible (Google sunset its Optimize product in 2023).
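To make the bandit idea concrete, here is a minimal epsilon-greedy sketch: most traffic goes to the variant with the best observed conversion rate, with a small fraction held out for exploration. All names and the 2.0%/2.6% rates are assumptions for illustration, not a production implementation.

```python
import random

def epsilon_greedy(rates, pulls=50_000, epsilon=0.1, seed=42):
    """Epsilon-greedy bandit: exploit the variant with the best observed
    conversion rate, explore a random variant with probability epsilon.
    `rates` are the true conversion rates (unknown in practice)."""
    rng = random.Random(seed)
    shown = [0] * len(rates)       # impressions per variant
    converted = [0] * len(rates)   # conversions per variant
    for _ in range(pulls):
        if rng.random() < epsilon or 0 in shown:
            arm = rng.randrange(len(rates))  # explore
        else:
            arm = max(range(len(rates)),
                      key=lambda i: converted[i] / shown[i])  # exploit
        shown[arm] += 1
        converted[arm] += rng.random() < rates[arm]  # simulate a conversion
    return shown

traffic = epsilon_greedy([0.020, 0.026])  # A converts at 2.0%, B at 2.6%
print(traffic)  # B usually ends up receiving the bulk of the traffic
```

Unlike a fixed 50/50 split, the bandit shifts impressions toward the apparent winner while the experiment is still running, which trades some statistical cleanliness for less traffic "wasted" on the losing variant.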
Why it matters for founders
A/B testing removes opinion from product decisions and replaces it with data. Instead of debating whether a green or blue button converts better, you test it. Small wins (1-2% improvements) compound into substantial gains over time.
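The compounding claim is easy to check with arithmetic. The figures below (a 1.5% lift per winning test, 50 shipped wins a year) are hypothetical, chosen only to show the mechanics:

```python
# Hypothetical: a 1.5% lift per shipped win, 50 wins in a year.
# Relative improvements multiply, so the baseline metric grows geometrically.
wins_per_year = 50
lift = 0.015
multiplier = (1 + lift) ** wins_per_year
print(f"{multiplier:.2f}x")  # ≈ 2.11x the baseline after a year
```

In other words, a steady stream of small wins roughly doubles the baseline metric within a year, which is why high-velocity testing cultures outperform occasional big-bet redesigns.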
Example
Booking.com runs over 1,000 A/B tests simultaneously. One test discovered that changing "Book Now" to "Reserve - you won't be charged yet" increased conversions by 17%. This single test generated hundreds of millions in additional revenue annually.
How Foundra helps
Foundra's validation methodology is built on testing principles. The Proof Signals card helps you identify which metrics to A/B test for maximum impact.
Related terms
Feature Flag
A toggle that enables or disables a feature in production without deploying new code.
Activation Rate
The percentage of new users who complete a key action that predicts long-term retention.
Iteration
The process of making incremental improvements based on user feedback and data.
Product-Qualified Lead (PQL)
A user who has experienced meaningful product value and is likely to become a paying customer.