
Conduct A/B testing correctly and increase conversions

You want more conversions, but everyone on the team has a different opinion on how to get there. The designer wants more white space, sales wants more arguments, the boss wants it punchier. A/B testing ends exactly these discussions. In this article, you'll learn where and how A/B testing is used, how the testing process works, and which tools you can use for it.

11.23.2025 · 16 min reading time
Author: Editorial Team, Axisbits GmbH

What is A/B testing?

A/B testing is a process in which two variants of an element are tested against each other to find out which one performs better. Variant A is usually the existing version (the control group); variant B contains one targeted change. Both are shown simultaneously to randomly assigned groups of visitors.

The goal of A/B testing: find out clearly which variant leads to more conversions, whether that's a click, a sign-up, a purchase, or another defined action.

A/B testing example

You run a landing page with the aim of getting visitors to sign up for a free demo. Your hypothesis: The current headline is too general. You set up the system so that half of the users receive headline variant A and the other half receive headline variant B. You're testing:

  • Variant A: “Everything under control with our CRM”
  • Variant B: “Try our CRM — 14 days free, no credit card required”

After 2 weeks, it is clear that B brings 28% more sign-ups. Now you know: this change works, based on the higher conversion rate.
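To make the 28% figure concrete, here is the lift calculation with hypothetical absolute numbers (the visitor and sign-up counts below are illustrative assumptions, not data from the article):

```python
# Hypothetical counts for the demo sign-up example above (illustrative only).
visitors_a, signups_a = 1000, 100   # variant A: existing headline
visitors_b, signups_b = 1000, 128   # variant B: new headline

rate_a = signups_a / visitors_a     # 10.0% conversion rate
rate_b = signups_b / visitors_b     # 12.8% conversion rate

# Relative lift: how much better B converts compared to A.
lift = (rate_b - rate_a) / rate_a
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  lift: {lift:+.0%}")  # lift: +28%
```

Whether such a lift is also statistically reliable depends on the sample size, which step 5 of the guide covers.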


Typical A/B testing use cases

A/B testing is typically used for conversion rate optimization on landing pages, in e-commerce, for SaaS products, in email marketing, and for ads and campaigns.

Good A/B testing focuses on the points where decisions are made: where users hesitate, abandon, or can be nudged to convert more frequently through targeted changes.

A/B testing for landing pages

Landing pages have a clear goal, such as signing up, downloading, or making a purchase. Typical test objects within landing pages:

  • Headings
  • Call-to-actions (text, color, position)
  • Hero graphics or video integration
  • Text length (short vs. long)
  • Trust elements (e.g. logos, customer testimonials, seals)

A/B testing in e-commerce

In online shops, decisions are made within a short time, and even small hurdles in the process cause drop-offs. A/B testing can help reduce such friction points in the buying process. Commonly tested:

  • Product images (individual vs. gallery, neutral vs. in use)
  • Price presentation (CHF 49.— vs. CHF 48.90)
  • Discount communication (“—20%” vs. “save CHF 10.—”)
  • Button label (“Add to cart” vs. “Buy now”)
  • Checkout steps (1-page checkout vs. multi-stage)

A/B testing for SaaS & digital products

Here, A/B testing is particularly worthwhile for feature communication (before registration) and in onboarding (afterwards). Onboarding is the user's first hands-on contact with the software, and it is often where the decision falls whether they continue or drop off. The objective: lower registration barriers, then quickly demonstrate the benefits.

What can be tested sensibly with SaaS & digital products:

Feature communication

  • Names of functions: technical term vs. benefit-oriented language (e.g. “versioning” vs. “undo changes”)
  • Tooltip text or placeholder in form fields
  • Feature enabled or disabled by default?
  • Product videos vs. GIF previews

Pricing models & upgrades

  • Pricing display: show monthly vs. yearly prices first
  • Positioning the “free” plan: at the top vs. at the end
  • CTA when upgrading: “Upgrade now” vs. “Unlock more features”
  • Discount display: Amount vs. percentage
  • Paywall style: Block + hint vs. just hint

Onboarding & activation (= get users to the first real usage step)

  • Number of steps: complex setup at once vs. step-by-step
  • Help: Tooltip overlays vs. short walkthrough
  • Sample data: Blank surface vs. predefined content
  • Skip options: “Skip now” visible or not?
  • CTA texts: “Start now” vs. “Create first task”

A/B testing in email marketing

A/B tests are often easy to set up directly in email tools, which then quickly provide insights. What is frequently tested:

  • Subject line (question vs. statement, with or without emoji)
  • Sender name (brand vs. real first name)
  • CTA in the email (text, placement, number)
  • Dispatch time (morning vs. afternoon, weekday)
  • Structure of the mailing (short vs. long, text vs. image)

A/B testing for ads & campaigns (e.g. Meta, Google Ads)

The very first impression determines whether someone clicks or scrolls on. Here, A/B testing helps to optimize the click-through rate (CTR) and thus overall funnel performance. Examples:

  • Ad title
  • Description texts
  • Images or videos
  • Keyword combinations
  • Landing page variants per ad

Guide: Do A/B testing correctly

A/B testing is typically carried out in six steps: define the goal, establish a hypothesis, create a test variant, set up the test, consider significance and test duration, and interpret the result and decide.

1. Define the goal of A/B testing

Before you test, you need to know what you want to improve. If you can't answer that quickly for a specific page, that's an important clue that you should first redefine the page's goal. Then you also know where the A/B test should take effect:

  • More purchases on the product page
  • More webinar registrations
  • More clicks on the CTA button
  • More users completing the onboarding step

Important: The goal must be measurable, otherwise you won't be able to evaluate the success of the test.

2. Establish a hypothesis for A/B testing

A hypothesis is a well-founded guess on which the test is based. This hypothesis is your guideline. It determines what you test and how you measure success.

“We believe that [change X] will improve [goal Y] because [reason Z].”

Example:

“We believe that a clearly visible discount in the shopping cart increases the purchase rate because the price advantage then becomes clearer.”

3. Create a test variant for this A/B test

In this step, you bring both versions to the starting line:

  • Variant A: your current version (control group)
  • Variant B: the new version with one targeted change

Important: Change just one variable! Otherwise, you won't be able to say what triggered the effect afterwards.

Example: test just the headline, not also the image, button, and layout at the same time within one A/B test.

Special case: Can you run multiple A/B tests at the same time?
Running several tests in parallel can be useful, for example if you want to test different elements of your page independently of each other (e.g. headline, button, arguments). It is important to note that the tests must not influence each other.

What works:
- You test different elements on separate groups of visitors (e.g. test 1: only two headline variants against each other; test 2: only two button variants, each with its own user segment)
- You test on completely separate pages or URLs
- You use a tool with a targeting function that ensures that each person only takes part in one test

What you should avoid during parallel A/B testing:
- Change the same element in multiple tests (e.g. headline and button at the same time, without control)
- Uncontrolled combinations (e.g. headline A + button B + argument C)
- That the same users appear in multiple tests
Interference effects falsify your data and in the end you don't know what worked.
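The “targeting function” mentioned above can be sketched as a deterministic split: every visitor is hashed into exactly one of the parallel tests, so the tests cannot contaminate each other. A minimal illustration (the visitor IDs and test names are made up, and real testing tools offer this out of the box):

```python
import hashlib

# Illustrative names for three parallel, independent tests.
PARALLEL_TESTS = ["headline_test", "button_test", "arguments_test"]

def assign_test(user_id: str) -> str:
    """Place each visitor in exactly one test, stable across visits."""
    digest = hashlib.sha256(f"test-split:{user_id}".encode()).hexdigest()
    return PARALLEL_TESTS[int(digest, 16) % len(PARALLEL_TESTS)]

# The same visitor always lands in the same, single test:
print(assign_test("visitor-123"))
```

Because the assignment depends only on the visitor ID, nobody ever appears in two tests at once.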

Alternative: Serial testing
In many cases, it is therefore better to test one element at a time:
- Test the headline first → adopt the best variant
- Then test the button → adopt the best variant
- Then test the arguments → adopt the best variant
It may take longer, but you'll get clear results that you can work with.

4. Set up A/B testing

The testing tool distributes traffic evenly between A and B: randomly but fairly. Pay attention to:

  • Even distribution (e.g. 50/50)
  • No double exposure (one user never sees both A and B)
  • Sufficient running time: at least a full week, ideally 2—4 weeks, or until at least 1,000 users per variant have taken part
Tip: In this article you will also find tools and software for A/B testing.
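The “no double exposure” rule comes down to sticky assignment: the same visitor must get the same variant on every visit. A rough sketch of how a tool performs the 50/50 split (the experiment name and visitor ID are illustrative assumptions):

```python
import hashlib

def bucket(user_id: str, experiment: str) -> str:
    """Stable 50/50 split: hashing visitor + experiment name means a
    visitor always gets the same variant and never sees both A and B."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Repeat visits keep the assignment stable:
print(bucket("visitor-123", "headline-test"))
print(bucket("visitor-123", "headline-test"))  # same variant again
```

Including the experiment name in the hash also means that the same visitor can land in A for one experiment and B for another, which keeps separate tests independent.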

5. Consider significance and test duration

The test must run long enough and collect enough data, otherwise the results are worthless. Rules of thumb:

  • At least 1,000 visitors per variant
  • At least 100 conversions per variant, better more
  • No interim evaluation (“peeking”), wait until the end
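Whether these rules of thumb are enough depends on your baseline conversion rate and on how small a lift you want to detect. A rough per-variant estimate can be computed with the standard two-proportion sample-size formula (a textbook sketch, not a replacement for your testing tool's calculator):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_base: float, rel_lift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per variant to detect a relative lift
    in conversion rate with a two-sided test (textbook approximation)."""
    p_b = p_base * (1 + rel_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # e.g. 0.84 for 80% power
    variance = p_base * (1 - p_base) + p_b * (1 - p_b)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_b - p_base) ** 2)

# Baseline 10% conversion, aiming to detect a 20% relative lift (10% -> 12%):
print(sample_size_per_variant(0.10, 0.20))  # several thousand visitors per variant
```

Small expected lifts drive the required sample size up quickly, which is why low-traffic sites are better off testing bigger changes.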

Statistical significance means that the difference between A and B is not just a coincidence. Common threshold: 95% confidence.

Do I have to calculate the statistical significance myself?
No, most tools calculate statistical significance for you or offer other evaluations that help you interpret the test result.
Statistical significance means: the difference between variants A and B is large enough that it is highly unlikely to be a coincidence. Common threshold: 95% (meaning you can be 95% certain that the better variant truly is better).
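For the curious, the calculation a typical tool performs can be sketched as a two-proportion z-test. The counts below reuse the hypothetical numbers from the headline example at the top of the article (illustrative, not real data):

```python
from math import sqrt
from statistics import NormalDist

def ab_significance(conversions_a: int, n_a: int,
                    conversions_b: int, n_b: int) -> tuple[float, float]:
    """Two-proportion z-test: returns the z-score and two-sided p-value
    for the difference between variant A and variant B."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = ab_significance(100, 1000, 128, 1000)
print(f"z = {z:.2f}, p = {p:.3f}, significant at 95%: {p < 0.05}")
```

A p-value below 0.05 corresponds to the 95% confidence threshold mentioned above.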

6. Interpret and decide on the results of A/B testing

The goal of A/B testing is not only to see what won, but also why.

This is followed by the conclusion: What does this result mean for our users, our hypothesis, and our next steps?

Case 1: One variant is significantly better ✅

Your testing tool shows you, for example: variant B performs 18% better, with 97% significance. Great! What now?

  • Implement the variant as a new standard
  • Document and validate the hypothesis: was it confirmed? Why?
  • Save test setup (screenshots, numbers, interpretation)
  • Check effects in the overall context: Are there side effects, for example, on other KPIs?

Example: variant B brought more clicks, but also more drop-offs in the next step? Then nothing is won yet.

Case 2: No significant difference ⚖️

There can be many reasons for this:

  • The change was too small or not important
  • The hypothesis was wrong
  • The test duration or sample size were insufficient
  • The effect exists, but is weak — or is below your measurement limit

What to do?

  • Don't count it as a failure, you've still learned something
  • Revise hypothesis: Was the reasoning conclusive?
  • Test major or more noticeable changes
  • Set up a further test: e.g. test the next position in the funnel

Case 3: Variant B is worse ❌

That, too, is a valuable result, provided you don't ignore it.

What to do?

  • Clearly document: What did we try, why, what happened?
  • Don't adopt variant B, of course
  • Derive: What could have irritated or deterred users?
  • Build the next hypothesis based on these learnings

Example: you put more information into the headline. Result: worse performance. Possible conclusion: less is more. Next test: shorten instead of expand.

Are you stuck in A/B testing during your conversion rate optimization?

From the experience of numerous website and landing page projects, we at Axisbits know how complex A/B testing can be. And yet the test phase is by no means everything on the way to a thoroughly optimized website.

If the performance of your pages keeps falling short of your expectations and you are sure there is more to be gained, a neutral look at your testing setup and your previous hypotheses and results may help.

Should your shop generate more revenue, should your landing page deliver more leads, and is there still too much room in your pipeline?
Maybe it's time for a neutral look at your content, landing pages, and the entire website. Together, we can set up A/B testing that increases the performance of your site again. Get in touch with us and we'll also show you how we would approach optimizing your site's conversion rate.

{{fs-btn-cta}}

Do you want to seize market opportunities and drive growth?

We build high-performance platforms and websites for startups, scale-ups, and SMEs, from concept to go-live.


A/B testing — common questions and answers

How many visitors do I need for an A/B test?
The rule of thumb: at least 1,000 visitors per variant, better more. You also need a sufficiently high conversion rate so that your tool can detect statistically meaningful differences. If your site has low traffic, it's better to test bigger changes or run tests for longer.

How long should an A/B test run?
At least a full week, ideally 2—4 weeks, so that weekday effects are averaged out. And: only stop when your tool says the results are significant. Stopping too early falsifies everything.

What if I want to test several elements at once?
Then you either need several clearly separated tests (with segmentation), or you work with multivariate testing. However, this is more complex and requires significantly more traffic. For most cases: it's better to test serially, one hypothesis at a time.

Where do good test ideas come from?
See where users drop off or hesitate: scroll depth, abandonment rates, heat maps, click behavior. Good test ideas often come from the behavior of your real visitors. Tools like Hotjar, Clarity, or your web analytics system help you ask the right questions.
