
A Guide to A/B Testing Landing Pages for UK SMEs

Altitude Design · 16 April 2026 · 19 min read

You’ve launched the site. It looks sharp, loads quickly, and says the right things about your business. Then a few weeks pass and the phone isn’t ringing as often as you expected, the quote form gets patchy use, or product page visits don’t turn into orders.

That’s a familiar spot for a lot of small businesses in Dalkeith, Midlothian, and across Scotland. The problem usually isn’t that the whole site is wrong. More often, one or two parts of a landing page are holding it back. A weak headline. A vague button. A form that asks for too much too soon. A mobile layout that hides the next step.

That’s where A/B testing landing pages helps. Instead of rebuilding everything, you compare one version of a page against another and let real visitor behaviour tell you which works better. Done properly, it’s one of the lowest-risk ways to improve results from the traffic you already have.

Your Website Is Live But Is It Working

A local business owner usually spots the issue in a simple way. People are landing on the page, but they aren’t taking the next step. For a joiner, that might be quote requests. For a salon, it might be booking enquiries. For a small shop, it might be product sales.

The first instinct is often a redesign. New colours. New layout. New copy everywhere.

That’s expensive, and it often guesses at the problem.


Small changes often beat big rebuilds

A better approach is to test one meaningful change at a time. Keep the page structure stable. Change the headline, or the main call to action, or the form layout. Then compare.

That matters even more for Scottish SMEs because most advice online assumes you’ve got huge traffic. Many local firms don’t. Some pages only get a modest stream of visitors each month, which changes how you should test.

For small businesses in Scotland, especially those with fewer than 500 monthly visitors, standard A/B testing often fails to reach a reliable result. Adaptations can still produce useful gains without massive sample sizes: bandit algorithms that shift traffic towards the stronger variant, scroll-depth tracking tuned for the roughly 80% of Scottish users browsing on mobile, and server-side testing tools, which reportedly saw a 35% rise in Scottish adoption in Q4 2025. Apexure’s landing page testing framework discusses these adaptations in more detail.

That’s the key point. You don’t need to behave like a national retailer. You need a testing approach that respects low traffic, mobile-heavy visitors, and the fact that local buying patterns can be lumpy.
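
To make the bandit idea concrete, here is a minimal epsilon-greedy sketch in TypeScript. Treat it as illustrative only: the variant names, the 10% exploration rate, and the in-memory counters are all assumptions, and a real setup would persist the stats server-side.

```typescript
// Minimal epsilon-greedy bandit for two landing page variants.
// Sketch only: variant names, epsilon, and in-memory state are assumptions.

type Variant = "A" | "B";

interface Arm {
  shows: number;       // visitors sent to this variant
  conversions: number; // enquiries or bookings recorded
}

const arms: Record<Variant, Arm> = {
  A: { shows: 0, conversions: 0 },
  B: { shows: 0, conversions: 0 },
};

const EPSILON = 0.1; // explore 10% of the time, exploit 90%

function conversionRate(arm: Arm): number {
  return arm.shows === 0 ? 0 : arm.conversions / arm.shows;
}

// Decide which variant the next visitor sees.
function chooseVariant(): Variant {
  if (Math.random() < EPSILON) {
    // Explore: a random choice keeps data flowing to both versions.
    return Math.random() < 0.5 ? "A" : "B";
  }
  // Exploit: send most traffic to the current best performer.
  return conversionRate(arms.A) >= conversionRate(arms.B) ? "A" : "B";
}

// Call these from your page-serving and conversion-tracking code.
function recordShow(v: Variant): void {
  arms[v].shows += 1;
}

function recordConversion(v: Variant): void {
  arms[v].conversions += 1;
}
```

The appeal for low-traffic sites is that a losing variant is starved of traffic automatically, so fewer of your scarce visits are spent on the weaker page than in a fixed 50/50 split.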

What a local business should expect

A/B testing isn’t magic. It won’t rescue a poor offer. It won’t fix weak service. It won’t manufacture demand where none exists.

What it will do is help you answer practical questions such as:

  • Headline choice: Does a specific promise work better than a generic welcome line?
  • CTA wording: Do more people click “Check Availability” than “Get a Quote”?
  • Mobile layout: Are visitors dropping off because the important action sits too far down the screen?
  • Form friction: Does reducing optional fields improve lead quality or just boost quantity?

If you want a broader outside reference on the basics, Otter A/B has a useful ultimate guide to landing page split testing, which explains the core logic well.

A key benefit is that testing replaces opinion with evidence. If you’re already tracking site behaviour properly, your first stop should be understanding page performance and user drop-off points, not redesigning blindly. That’s why it helps to monitor the site closely from the start, including speed, engagement, and conversion paths, as part of routine website performance monitoring.

Most underperforming landing pages don’t need a full rebuild. They need one clear problem identified and one sensible test run properly.

Laying the Groundwork for Your First Test

A common small business scenario looks like this. The landing page is live, enquiries are inconsistent, and two people in the office have two different views on what to change. One wants a new headline. Another wants a shorter form. Before any of that, define what a win looks like.

If success is vague, the test will be vague as well.


Pick one conversion goal

Every landing page needs one main job.

For a roofer in Midlothian, that may be a completed quote request. For a private clinic, it may be a booking enquiry. For an online shop, it may be a completed purchase. Secondary signals such as button clicks or form starts are still useful, but they should support the main goal rather than replace it.

A practical setup looks like this:

  • Lead generation pages: form submissions, booked calls, or tracked tap-to-call clicks
  • Booking pages: completed bookings, not calendar opens
  • Product pages: purchases, not just add-to-basket clicks
  • Service pages: the next committed step, such as an enquiry or availability check

This matters even more on low-traffic sites. If your page only gets a modest number of visits each week, you cannot afford to split attention across three competing goals and hope the result is clear.

Choose a setup you can maintain

A small Scottish business does not need a complicated testing stack. It needs a method that runs cleanly, tracks properly, and does not create extra maintenance work.

That usually means choosing between a visual testing tool, a duplicate page in your CMS, or a developer-built test if the change affects forms, CRM connections, speed, or tracking. The right choice depends on the page and on your tolerance for breakage. Visual tools are quicker for copy or button changes. Developer-led tests are safer for anything structural.

If you want an outside overview of the practical options, Dynares has a useful guide on split testing landing pages.

On low-traffic sites, tool choice also affects test length. A setup that slows the page down, flickers on load, or breaks event tracking can do more harm than the experiment itself.
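
For a sense of what the developer-built route looks like, one common low-flicker pattern is to assign the variant once, pin it with a cookie so returning visitors see the same version, and apply it before first paint. A minimal browser sketch, where the ab_variant cookie name and the 30-day lifetime are assumptions:

```typescript
// Sticky 50/50 variant assignment via a cookie.
// Sketch only: the cookie name and 30-day lifetime are assumptions.

type Variant = "A" | "B";

function getCookie(name: string): string | null {
  const match = document.cookie.match(new RegExp("(?:^|; )" + name + "=([^;]*)"));
  return match ? decodeURIComponent(match[1]) : null;
}

function assignVariant(): Variant {
  const existing = getCookie("ab_variant");
  if (existing === "A" || existing === "B") {
    return existing; // a returning visitor keeps their variant
  }
  const variant: Variant = Math.random() < 0.5 ? "A" : "B";
  // Persist for 30 days so the experience stays consistent.
  document.cookie = `ab_variant=${variant}; max-age=${60 * 60 * 24 * 30}; path=/`;
  return variant;
}

// Apply the variant before first paint, e.g. by toggling a class
// that your CSS uses to show or hide the variation content.
document.documentElement.classList.add(`variant-${assignVariant()}`);
```

Run as a small inline script in the head, this avoids the visible flicker that client-side visual editors can introduce.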

Write a hypothesis before you build anything

Good tests start with a reason.

“Let’s try a different button colour” is not enough. A proper hypothesis ties the change to a user problem and a business outcome. I usually frame it like this:

If we change [specific element], then [specific audience] will be more likely to [desired action], because [reason based on evidence].

Examples:

  • If we change “Request a Quote” to “Check Availability”, then mobile visitors will be more likely to click, because the wording feels lower pressure and quicker to act on.
  • If we move reviews and guarantees above the form, then first-time visitors will be more likely to enquire, because reassurance appears before the ask.
  • If we rewrite the headline to state the service and area clearly, then local visitors will be more likely to stay and read, because relevance is obvious at a glance.

That final part matters. The reason keeps the test tied to something you have observed, not office preference.

Use ICE to rank test ideas

Most small businesses have the opposite problem from larger brands. They do not have too few ideas. They have too little traffic to test all of them.

A simple way to choose is the ICE framework: Impact, Confidence, Ease. Growth practitioners have used it for years because it helps sort the sensible tests from the speculative ones. Wider has a clear explanation of the framework in its guide to ICE scoring for growth experiments.

Here is the short version:

  • Impact: If this works, will it improve the main conversion action?
  • Confidence: Do you have real evidence that this is a problem?
  • Ease: Can you build and measure the change without creating technical trouble?

For a low-traffic website, high-confidence ideas usually deserve priority. A headline that does not mention the service area, a form asking for too much too soon, or a weak mobile CTA can be tested sooner than cosmetic changes that are easy to argue about but hard to prove.
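
If you want to make the ranking explicit rather than arguing it out, a few lines of code will do it. A minimal sketch, assuming the common 1-to-10 scoring with the three factors multiplied together; the ideas and numbers are made-up placeholders:

```typescript
// Rank test ideas with ICE: Impact x Confidence x Ease, each scored 1-10.
// The ideas and scores below are illustrative placeholders.

interface TestIdea {
  name: string;
  impact: number;     // will it move the main conversion action?
  confidence: number; // how strong is the evidence behind it?
  ease: number;       // how cheaply and safely can it be built?
}

const ideas: TestIdea[] = [
  { name: "Headline states service and area", impact: 8, confidence: 7, ease: 9 },
  { name: "Shorter quote form", impact: 7, confidence: 8, ease: 6 },
  { name: "New button colour", impact: 3, confidence: 2, ease: 10 },
];

const iceScore = (idea: TestIdea): number =>
  idea.impact * idea.confidence * idea.ease;

// Highest score first: run these tests before the speculative ones.
for (const idea of [...ideas].sort((a, b) => iceScore(b) - iceScore(a))) {
  console.log(`${iceScore(idea)}  ${idea.name}`);
}
```

Some teams average the three scores instead of multiplying; either way, the point is to force a comparison before you spend traffic on a test.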

Check the page before traffic hits the test

A test is only as reliable as the page underneath it.

Make sure forms submit properly. Make sure thank-you pages or confirmation states are being tracked. Check the page on mobile, because many local service businesses get a large share of visits from phones. Avoid running a test while another campaign, seasonal promotion, or site rewrite is changing the traffic mix.

It also helps to review whether the page structure supports the action you want people to take. If the landing page has grown out of an older service page, weak hierarchy and muddled copy can distort the result before the test even starts. Good content page design makes cleaner testing possible later.

Some businesses benefit from an A/A test first, especially if tracking has been patched together over time. It will not improve conversions on its own, but it can expose measurement problems before you spend weeks comparing two versions on shaky data.

The best first test is usually the one you can explain clearly, build safely, and measure without guesswork.

Designing and Building Your Test Variations

The quickest way to ruin an A/B test is to change too much at once.

If you alter the headline, the image, the button text, the page order, and the form length in one go, you may get a result, but you won’t know why. That makes the next decision harder, not easier.

Change one variable that matters

The basic rule is simple. Test one meaningful variable against the original page.

That might be:

  • the main headline
  • the primary CTA text
  • the order of trust signals
  • the form length
  • the hero image
  • the placement of a booking button

Keep everything else the same.


Sample test ideas for a Scottish service business

Here’s a practical set of examples for local SMEs.

  • Primary CTA button. Version A: “Get a Quote”. Version B: “Check Availability”. Hypothesis: visitors may click more readily when the wording feels lower pressure and more immediate.
  • Headline. Version A: “Welcome to Our Roofing Services”. Version B: “Roofing Repairs in Midlothian with Fast Local Callouts”. Hypothesis: a more specific headline may hold attention better because it states the service and area clearly.
  • Form length. Version A: name, email, phone, postcode, service type, message. Version B: name, phone, service type. Hypothesis: a shorter form may increase enquiries because it reduces effort on first contact.
  • Trust section placement. Version A: testimonials lower on the page. Version B: testimonials directly under the hero. Hypothesis: earlier reassurance may help first-time visitors trust the business sooner.
  • CTA placement on mobile. Version A: button below long intro text. Version B: button visible near the top of the screen. Hypothesis: mobile users may act faster when the next step appears without extra scrolling.
  • Offer framing. Version A: “Book a Consultation”. Version B: “Free Initial Advice”. Hypothesis: clearer value may increase response because visitors understand the benefit before committing.

Build the variation without breaking the page

How you build the test depends on the complexity.

For simple changes, a visual editor in a testing tool can do the job. This works well for copy swaps, button text, order changes, or simple blocks.

For medium changes, editing directly in your CMS may be cleaner. Just be careful with duplicated scripts, tracking, and mobile spacing.

For more technical changes, developer involvement is usually the sensible option. That includes:

  • Event tracking setup: Scroll depth, CTA interactions, form progression (see the tracking sketch after this list).
  • Server-side testing: Better control and fewer front-end flickers.
  • CRM connections: So leads don’t just get counted, they get valued properly.
  • Schema or semantic adjustments: Useful when SEO-safe testing matters.
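
As a concrete example of the event tracking above, here is a small browser sketch that reports scroll depth and CTA clicks. It assumes GA4’s gtag.js is already loaded on the page; the event names and the .cta-button selector are made-up placeholders.

```typescript
// Scroll-depth and CTA click tracking sketch.
// Assumes gtag.js (GA4) is loaded; event names and selector are placeholders.

declare function gtag(command: "event", name: string, params?: Record<string, unknown>): void;

// Fire once per threshold as the visitor scrolls down the page.
const thresholds = [25, 50, 75, 100];
const fired = new Set<number>();

window.addEventListener("scroll", () => {
  const scrollable = document.documentElement.scrollHeight - window.innerHeight;
  if (scrollable <= 0) return;
  const percent = (window.scrollY / scrollable) * 100;
  for (const t of thresholds) {
    if (percent >= t && !fired.has(t)) {
      fired.add(t);
      gtag("event", "scroll_depth", { percent: t });
    }
  }
}, { passive: true });

// Track clicks on the main call-to-action buttons.
document.querySelectorAll<HTMLElement>(".cta-button").forEach((btn) => {
  btn.addEventListener("click", () => {
    gtag("event", "cta_click", { cta_text: btn.textContent?.trim() ?? "" });
  });
});
```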

Keep mobile at the front of your mind

Many local businesses fall short in this area. The page can look tidy on desktop and still fail on a phone.

If your audience is mostly on mobile, then the variation should be judged on mobile behaviour first. Button visibility, text length, tap targets, spacing, and load order all matter more than fine visual polish.

That’s especially important on hand-coded or performance-focused builds, because layout decisions affect both user experience and page speed. If you want a useful reference point for that side of things, this overview of mobile-first website design explains why the smallest screen should lead the decision.

Practical rule: If the user can’t understand the offer and find the main action in a few seconds on their phone, the variation probably isn’t ready to test.

Running Your Test and Reading the Numbers

You launch a new landing page on Monday. By Wednesday, one version is a few leads ahead. It is tempting to pick a winner and move on, especially if your site only gets a modest number of visitors each week.

That is usually where small businesses spoil a decent test.


How long should a test run

Run the test long enough to cover normal weekly variation. For many local businesses, that means at least two full business cycles, not just a few busy days.

A plumber in Midlothian, for example, may see different enquiry patterns on weekdays, weekends, and during cold snaps. A solicitor may get stronger traffic after payday. A page can look like a winner because it happened to catch a better patch of traffic.

If you have healthy traffic, a 50/50 split is usually fine. If traffic is light, patience matters more than strict timelines. Some pages will need several weeks before the pattern means anything, and some will never produce a clean statistical winner. That is normal on low-traffic sites.
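
If you want a rough feel for the numbers behind “long enough”, the standard two-proportion sample size formula gives a ballpark. The sketch below assumes a 3% baseline conversion rate, a hoped-for lift to 4.5%, 95% confidence, and 80% power; swap in your own figures.

```typescript
// Rough per-variant sample size for a two-proportion test.
// The baseline, target, confidence, and power values are assumptions.

function sampleSizePerVariant(
  p1: number,        // baseline conversion rate, e.g. 0.03
  p2: number,        // rate you hope the variation reaches
  zAlpha = 1.96,     // two-sided 95% confidence
  zBeta = 0.84       // 80% power
): number {
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  const effect = p2 - p1;
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / effect ** 2);
}

console.log(sampleSizePerVariant(0.03, 0.045)); // roughly 2,500 visitors per variant
```

At a few hundred visits a month, that arithmetic is exactly why bigger swings are recommended later in this guide: a larger expected effect shrinks the required sample dramatically.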

What statistical significance means in plain English

Statistical significance answers one question. Is this result probably real, or are you reacting to random variation?

On a small Scottish business website, random variation is common. One extra phone call, one cancelled appointment, or one large quote request can skew the numbers badly if you only had a handful of conversions to begin with.

So do not treat early movement as proof. Wait until the gap holds over time, across enough visits, and preferably across the same traffic sources. If your test platform reports confidence levels, use them as a guardrail, not a magic stamp of certainty.
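
If you would rather sanity-check a result yourself than trust a dashboard badge, the calculation behind most tools is a two-proportion z-test. A minimal sketch using a standard normal approximation; the visit and conversion counts are placeholders.

```typescript
// Two-proportion z-test: is the gap between A and B likely real?
// Counts are placeholders; the CDF uses the Abramowitz and Stegun erf approximation.

function normalCdf(z: number): number {
  const x = Math.abs(z) / Math.SQRT2;
  const t = 1 / (1 + 0.3275911 * x);
  const poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
    - 0.284496736) * t + 0.254829592) * t;
  const erf = 1 - poly * Math.exp(-x * x);
  return z >= 0 ? 0.5 * (1 + erf) : 0.5 * (1 - erf);
}

function twoProportionPValue(
  convA: number, visitsA: number,
  convB: number, visitsB: number
): number {
  const pA = convA / visitsA;
  const pB = convB / visitsB;
  const pooled = (convA + convB) / (visitsA + visitsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitsA + 1 / visitsB));
  const z = (pB - pA) / se;
  return 2 * (1 - normalCdf(Math.abs(z))); // two-sided p-value
}

// Example: 12 conversions from 400 visits vs 21 from 410.
console.log(twoProportionPValue(12, 400, 21, 410).toFixed(3));
```

With those placeholder numbers the p-value comes out around 0.13, so a nine-conversion gap on roughly 800 visits could easily be noise, which is exactly the trap described above.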

What not to do while the test is live

Leave the page alone while the test runs. Mid-test changes muddy the result and waste the traffic you have already paid for or earned.

Common mistakes include:

  • Editing the copy halfway through
  • Changing form fields on one version only
  • Starting a new ad campaign without checking both variants get equal exposure
  • Calling a winner because one version is ahead after a few days
  • Focusing on click numbers while ignoring actual enquiries or sales

Low-traffic testing leaves less room for error. A high-traffic national brand can recover from a messy week of data. A local trades business often cannot.

A test ends when you have enough evidence to trust the decision, not when the graph looks exciting.

What numbers matter most

Start with the one number tied to the job of the page. If the page exists to get quote requests, measure quote requests. If it exists to drive bookings, measure bookings.

Then look at supporting signals to explain the result:

  • Primary conversion, such as enquiries, bookings, or purchases
  • Form starts and form completions
  • CTA clicks
  • Scroll depth
  • Device split
  • Lead quality, especially for service businesses where not every lead is worth the same amount

Bounce rate can help, but only as supporting context. A high bounce rate on a tightly focused page is not always a problem if the right visitors still convert. A lower bounce rate is also not a win if it produces more weak leads.

That wider view is part of conversion rate optimisation for service-based websites. The goal is not more activity on a dashboard. It is more valuable business coming through the site.

Why low-traffic businesses need extra care

This is the part many guides skip. Small sites in Scotland often do not have enough visitors to test tiny tweaks with confidence.

If your landing page gets a few hundred visits a month, changing one word on a button may never produce a reliable signal. Bigger, clearer changes tend to be more useful. A stronger headline, a shorter form, better trust signals near the CTA, or a different page structure can produce a result you can plainly see.

Performance also matters. If the variation is slower, cluttered on mobile, or harder to scan, you can lose good traffic before the offer has a chance to work. On low-volume pages, that lost traffic hurts more because every visit carries more weight.

The practical approach is simple. Test meaningful changes, watch the numbers tied to real enquiries, and give the test enough time to survive a slow week, a busy week, and a bit of normal local fluctuation.

Analysing Results and Taking Action

A test only earns its keep if it leads to a decision.

For a local business site, that decision is usually one of three things. Keep the original. Replace it with the variation. Or accept that the result did not give you a clean answer yet.

When you have a clear winner

If one version produced more of the right enquiries and the tracking stayed clean during the test, put that version live as the new default.

Then write down what changed, what you expected to happen, and what happened. I have seen firms skip this because the winner feels obvious at the time. Six months later, nobody remembers whether the lift came from a stronger headline, fewer form fields, or better trust signals near the call to action.

That note matters because a good result often applies beyond one page. If a clearer offer works on your boiler repair landing page, it may help your air source heat pump page too. If a shorter form gets more enquiries but they turn out to be poor fits, that is a warning, not a win.

When the result is inconclusive

An inconclusive test is common on lower-traffic websites. It does not mean the exercise was pointless.

Usually it means one of three things. The change was too small to matter. The test ran on too little traffic to separate noise from a real effect. Or the original page and the variation were both good enough that neither created a strong advantage.

That is why small Scottish businesses should judge tests by what they learned, not only by whether Version B beat Version A. A page that gets a few hundred visits a month cannot support endless button-colour experiments. Bigger changes tend to give you a fairer shot at a useful answer.

A practical post-test checklist

After each test, check five things:

  • Apply the winner properly: Update the live page itself instead of leaving a temporary test setup in place.
  • Retest the conversion tracking: Make sure calls, forms, and thank-you pages still register correctly after rollout.
  • Save the evidence: Keep screenshots, dates, the hypothesis, and the result in one place.
  • Check lead quality: Count qualified enquiries, booked jobs, or sales conversations, not just raw form fills.
  • Choose the next test from the result: If the page showed weak trust, test proof. If the offer looked unclear, test the message.

For service businesses, the last point is usually where the money sits. More conversions are useful only if they turn into decent work.

If your form submissions can be matched to your CRM or even a simple sales spreadsheet, review which version produced better enquiries after the test ends. That is far more useful than celebrating a lift in low-value leads. On a small site, one extra quality lead can outweigh ten weak ones.
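
If your CRM export can tag each enquiry with the variant that produced it and its eventual value, a few lines are enough to compare variants on value rather than volume. A sketch with made-up field names and figures:

```typescript
// Compare test variants on enquiry value, not raw counts.
// Field names and figures are made-up placeholders for a CRM export.

interface Enquiry {
  variant: "A" | "B";
  qualified: boolean; // did it become a real sales conversation?
  value: number;      // quoted or won value in pounds, 0 if none
}

const enquiries: Enquiry[] = [
  { variant: "A", qualified: true, value: 1200 },
  { variant: "B", qualified: false, value: 0 },
  { variant: "B", qualified: true, value: 450 },
  // ...the rest of the export
];

function summarise(variant: "A" | "B") {
  const rows = enquiries.filter((e) => e.variant === variant);
  return {
    enquiries: rows.length,
    qualified: rows.filter((e) => e.qualified).length,
    value: rows.reduce((sum, e) => sum + e.value, 0),
  };
}

console.log("A:", summarise("A"));
console.log("B:", summarise("B"));
```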

If you want help reviewing a result or deciding what to test next, speak to Altitude Design about your landing page testing.

Partnering with Altitude Design for Continuous Growth

A/B testing sounds manageable on paper because the logic is simple. In practice, the workload adds up.

Someone has to set the hypothesis, prepare the tracking, build the variation, test it across devices, monitor the run, interpret the result, and then implement the winner cleanly. If you’re already running a business day to day, that tends to slip down the list.

That is where a managed setup makes sense. One option is to have the testing work handled as part of the wider website management process rather than as a one-off experiment. That approach is especially useful when the site is custom-built, mobile-first, and already structured around performance and conversion tracking.

Altitude Design handles that kind of work for SMEs through its managed website service, including implementation, page adjustments, ongoing monitoring, and iteration where needed. If you’d rather discuss your site, your landing pages, or whether testing is realistic for your traffic level, you can get in touch here: https://altitudedesign.co.uk/contact.

The practical trade-off is simple. DIY gives you direct control, but it also demands time, consistency, and enough confidence with analytics and front-end changes to avoid muddying the data. A managed partner reduces that burden.

For many local firms, that’s real value. You stay focused on quoting jobs, serving customers, and running the business while the website gets measured and improved in the background.

Frequently Asked Questions about Landing Page Testing

Is A/B testing the same as multivariate testing

No. A/B testing compares one version against another, usually with a single main change. Multivariate testing checks combinations of multiple changes at the same time.

For most SMEs, standard A/B testing is the better starting point. It’s easier to interpret and far less demanding on traffic.

Will A/B testing hurt my SEO

Not if it’s set up properly.

Problems usually come from messy implementation, such as duplicate indexing, unstable page scripts, or slow front-end test layers. If the test is technically clean and temporary, SEO risk is usually manageable. The bigger danger is running sloppy experiments that interfere with user experience.

What if my landing page gets very little traffic

You can still test, but you need to adjust the method.

Focus on bigger changes, not tiny cosmetic edits. Prioritise mobile behaviour. Track supporting actions like CTA clicks and scroll depth. In some cases, server-side testing or bandit-style allocation can make more sense than a classic evenly split test.

What should I test first

Start with the elements closest to the conversion decision.

That usually means the main headline, the primary call to action, the form, or the content people see before they scroll. Don’t start by testing minor visual details unless there’s a strong reason.

What if the test shows no winner

Treat it as a useful result, not wasted effort.

An inconclusive test can show that the change was too subtle, the page lacked enough traffic, or the problem sits elsewhere. Save the learning, tighten the hypothesis, and run a better next test.

Is A/B testing landing pages only for e-commerce

No. It’s just as useful for service businesses.

A local accountant, dentist, joiner, beauty clinic, or solicitor can all test the wording, structure, and friction points on a landing page. The conversion is different, but the logic is the same. You’re trying to make the next step clearer and easier for the right visitor.

