A/B testing (also called split testing) is the practice of sending two or more variations of an email to segments of your list to determine which performs better. Individual wins are often small, but over a year of consistent testing they compound into a dramatic performance advantage.
This Week’s Lesson
The core principle: change one variable at a time. Subject line A vs. subject line B. CTA copy A vs. CTA copy B. Send time A vs. send time B. If you change two things simultaneously, you can't know which change caused the result.
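For illustration, here is a minimal Python sketch of the one-variable discipline: recipients are split deterministically, and the two variants differ in the subject line only. The function name, test name, and addresses are hypothetical, not from any particular email platform.

```python
import hashlib

def assign_variant(email: str, test_name: str) -> str:
    """Deterministically assign a recipient to variant A or B.

    Hashing on (test_name, email) keeps assignment stable across
    re-runs and independent between different tests.
    """
    digest = hashlib.sha256(f"{test_name}:{email}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# One variable differs between the variants; everything else is identical.
subject_lines = {
    "A": "Your weekly testing checklist",
    "B": "The one email habit that compounds",
}

recipient = "reader@example.com"
variant = assign_variant(recipient, "subject-test-week-24")
print(variant, subject_lines[variant])
```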
What to test, in order of impact:
- Subject line (highest impact on open rate)
- CTA copy and design (highest impact on click rate)
- Email length (impacts both engagement and unsubscribes)
- Send time (impacts open rate)
- From name
- Preview text
- Image vs. no image
Sample size: as a rule of thumb, you need at least 1,000 recipients per variation before typical open-rate differences rise above random noise; exactly how many depends on your baseline rate and the size of the difference you're trying to detect. Below that threshold, you may be seeing chance rather than a real effect. With lower volume, focus on patterns across multiple tests rather than individual results.
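If you want to sanity-check a result rather than eyeball it, a two-proportion z-test is the standard tool for comparing open rates. A sketch using only the Python standard library, with made-up counts:

```python
from statistics import NormalDist

def two_proportion_z(opens_a: int, n_a: int, opens_b: int, n_b: int):
    """Two-sided z-test for a difference between two open rates.

    Returns (z, p). A small p (e.g. < 0.05) suggests the observed
    difference is unlikely to be random noise.
    """
    p_a, p_b = opens_a / n_a, opens_b / n_b
    pooled = (opens_a + opens_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# 1,000 recipients per variation, 22% vs. 25% open rate:
z, p = two_proportion_z(220, 1000, 250, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p ≈ 0.11: could still be noise
```

Note that even at 1,000 per variation, a three-point difference isn't conclusive on its own, which is exactly why low-volume senders should look for patterns across repeated tests.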
Test duration: subject line tests can conclude in 4-8 hours (once 80%+ of opens have come in). Tests with behavioral outcomes (purchases, signups) need 24-72 hours for the full conversion window to close.
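One way to operationalize the 4-8 hour rule is to compare elapsed time against your list's historical open curve. The shares below are placeholders; measure the real curve from your own past sends.

```python
# Share of total opens typically received by hour after send
# (illustrative numbers only; replace with your own historical data).
OPEN_CURVE = {1: 0.35, 2: 0.52, 4: 0.71, 6: 0.81, 8: 0.87, 24: 0.97}

def can_call_subject_test(hours_elapsed: int, threshold: float = 0.80) -> bool:
    """True once the historical curve says ~80% of opens are in."""
    reached = max(
        (share for hour, share in OPEN_CURVE.items() if hour <= hours_elapsed),
        default=0.0,
    )
    return reached >= threshold

print(can_call_subject_test(4))  # False: only ~71% of opens in
print(can_call_subject_test(6))  # True: ~81% of opens in
```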
Document your tests. Create a simple spreadsheet: date, variable tested, hypothesis, result, decision. Over a year, this becomes an organizational knowledge asset that informs every future campaign — and prevents teams from repeatedly re-testing things that are already known.
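The log can be as simple as a shared CSV. Here is a sketch of an append-only test log with exactly those five columns, assuming a hypothetical file name; the example row is illustrative, not real data.

```python
import csv
import os
from datetime import date

LOG_PATH = "ab_test_log.csv"  # hypothetical shared file
FIELDS = ["date", "variable_tested", "hypothesis", "result", "decision"]

def log_test(variable: str, hypothesis: str, result: str, decision: str,
             path: str = LOG_PATH) -> None:
    """Append one completed test to the log, writing the header if new."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "variable_tested": variable,
            "hypothesis": hypothesis,
            "result": result,
            "decision": decision,
        })

log_test(
    "subject line",
    "Curiosity framing beats clarity for this list",
    "B won (+2.1pp open rate, p = 0.03)",
    "Adopt B-style subjects; retest next quarter",
)
```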
The most important thing about A/B testing: actually doing it. Many teams talk about testing but never build the cadence. Commit to running one test on every send for 90 days, then measure the cumulative impact.