Thought leadership

How user research and A/B testing fit together

User research and A/B testing give more powerful results when combined and can dramatically improve business metrics. So how do they fit together? It’s actually quite simple.

You can use research methods to:
  • Identify problems that are preventing users from accomplishing their goals.
  • Identify the best designs to put into A/B tests.
  • Understand why the results of an A/B test are working out the way they are.

You would use A/B testing to objectively determine whether one design results in greater success (however that’s measured) than another.

Identifying problems

So, you have a website or app that people use to achieve certain objectives. That could be an ecommerce website, a SaaS site, an app for discovering music, or an internal application for employees to raise a business case. The methods I’m talking about here aren’t limited by category.

You’ll want to monitor how well things are working. Whilst your dashboards will show how many widgets you sell at what prices, or how many business cases are raised, you’ll also want to know whether you could sell more with higher customer satisfaction, or whether your staff could achieve more in less time.

A few methods are available to help with this tracking – some qualitative and some quantitative.

Quantitative methods

Analytics

The starting point for many is analytics. Tools such as Adobe Analytics or Google Analytics are an excellent way to get a handle on what people are doing on your site, and a funnel analysis presented visually can help you understand key user journeys and drop-off points.

If you find that a lot of people put items into the shopping basket but don’t purchase, there could be a problem with the checkout – or it could be that checkout is the only way of getting shipping information that would be better provided at an earlier stage.
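As a concrete illustration, here’s a minimal sketch of the arithmetic behind a funnel analysis, using hypothetical step counts in place of the numbers your analytics tool would report:

```typescript
// A checkout funnel with hypothetical session counts per step.
interface FunnelStep {
  name: string;
  sessions: number;
}

const checkoutFunnel: FunnelStep[] = [
  { name: "Product page", sessions: 10_000 },
  { name: "Add to basket", sessions: 3_200 },
  { name: "Checkout", sessions: 1_900 },
  { name: "Shipping quote", sessions: 1_500 },
  { name: "Purchase", sessions: 600 },
];

// Report the drop-off between each pair of consecutive steps.
for (let i = 1; i < checkoutFunnel.length; i++) {
  const prev = checkoutFunnel[i - 1];
  const curr = checkoutFunnel[i];
  const dropOff = 1 - curr.sessions / prev.sessions;
  console.log(`${prev.name} -> ${curr.name}: ${(dropOff * 100).toFixed(1)}% drop-off`);
}
// The largest percentage drop (shipping quote -> purchase here) is
// usually the first place to investigate.
```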

Sometimes it may be obvious what the issue is when you go through the process yourself and look critically at the pages, but other times it won’t be clear. That’s where other methods can help you to understand why something’s happening.

Session replay and heatmaps

I think of session replay and heatmaps as a half-way house between analytics and qualitative methods. Session replay tools effectively give you a video replay of individual user sessions – the data behind them is pretty much what’s in the analytics, but it’s easier to understand because it’s visual. The downside is that whilst analytics can tell you what’s happening across thousands of user sessions, you can only watch one replay at a time.

For our checkout problem, you’d filter session recordings to find those where users had progressed to checkout, then watch a number of recordings hoping to spot the issue – people getting as far as a shipping quote and then leaving, say, or hitting an error because of a tick box they didn’t spot.

Another benefit of session replay is that it can help you discover issues resulting from variants of a page where users see different advertising or personalisation. These issues are often hard to pick up from analytics alone.

Heatmaps – which include clickmaps (and tapmaps), mouse trails and scroll maps – can all add insights to help diagnose the issue.

Different session replay vendors offer varying additional functionality, such as forms analysis that highlights problematic form fields.

Qualitative methods

There can be a temptation to stick with just quantitative methods because they’re what people are familiar with. Qualitative research can seem a bit daunting and complicated, but it needn’t be – and some qualitative researchers may feel similarly about quantitative methods.

Surveys

There are many survey tools on the market, ranging in cost from free to very expensive depending on your needs. It’s quite easy to find a free tool that will let you put a simple survey on your site. To help with our checkout problem, we could implement a survey that appears as a ‘feedback’ tab on the side of the page, asking something along the lines of ‘what could improve this page for you?’ or ‘what could improve your experience of this site?’.

With a little more effort you could have a more intrusive – and more noticeable – popup invitation to a survey, shown only to users who abandon the shopping cart.

With this approach (provided your users can be bothered to answer the question) you should quickly get some useful answers.
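To illustrate, here’s a rough sketch of how such a targeted invitation might be wired up in the browser. The showSurvey function is a hypothetical stand-in for whatever your survey vendor actually provides, the basket check is a placeholder, and exit intent is just one possible trigger:

```typescript
// Hypothetical vendor function – substitute your survey tool's real API.
declare function showSurvey(question: string): void;

// Placeholder: replace with a real check of your own basket state.
function hasItemsInBasket(): boolean {
  return true;
}

let surveyShown = false;

// Exit intent: the cursor leaving through the top of the viewport is a
// rough proxy for a user who is about to abandon the page.
document.addEventListener("mouseout", (e: MouseEvent) => {
  if (surveyShown || e.relatedTarget !== null || e.clientY > 0) return;
  if (hasItemsInBasket()) {
    surveyShown = true;
    showSurvey("What could improve your experience of this site?");
  }
});
```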

Writing good survey questions isn’t always as easy as it may appear, so if you’re new to it, get some good advice – or at least read a book or two. As with all of the techniques described here, it’s something that Daydot has in-depth experience and knowledge of, so get in touch with us if you’d like to talk about it.

Interviews

Interviews are pretty much the gold standard. With all the methods discussed so far, the information flow is one-way and limited. With an interview you can ask follow-up questions.

Interviewing well is a whole skill set in itself and the ability of the interviewer can make a big difference to the quality of the insights that you get. You also need to make sure that you’re talking to the right people – if possible, talk to people who have used your site and abandoned the cart.

The interview process will start with writing a discussion guide to give structure and consistency across the interviews. Bear in mind it is a guide and not a script. As you work your way through the discussion (getting answers to the questions you originally had) you’ll find that new questions arise as a result, and there will be nuances between different people’s answers to understand.

A series of interviews done well will provide you with a rich understanding of the issues involved and ideas for possible solutions.

Unmoderated 'interviews'

One-to-one interviews – even those done remotely – can be expensive and time-consuming. Sometimes a good option is to use a service that supports unmoderated tasks. People video themselves (using their PC, tablet or phone camera) and their screen, thinking aloud, whilst carrying out tasks that you’ve set them. You get to see what they do and why, but obviously you can’t ask follow-up questions.

Bear in mind that with interviews you can directly ask people to give feedback on competitor sites as well as your own. This is often useful for picking up ideas on where you need to improve to catch up, and where you have an advantage to leverage.

Identifying potential solutions

We’re now at a point where we’ve used both analytics and qualitative research to identify what problems exist with checkout and why. The next step is to work on solutions.

You can of course just redesign whatever is problematic, implement the changes, and hope for the best. Sometimes you do get problems for which the answer is obvious. But it’s not always so simple.

Of all the design options available, how do you know whether the one that the senior manager likes best actually is the best? The answer is always to ask the user.

To generate candidate designs you’ll follow your standard UX and UI processes. You could just pick one of those designs and A/B test it, but even if the result is an improvement in metrics, you won’t know whether that design still has issues that could easily be fixed if only you knew about them. It’s also quite common to generate a number of design solutions, which you then need to whittle down because you don’t have the bandwidth to A/B test them all. Doing some user research on candidate designs helps to ensure that you get the most from your A/B tests – which will usually have a more constrained pipeline than your research capability.

Of the research methods mentioned above, the best for feedback on new ideas are usually surveys or interviews (moderated or unmoderated).

Other methods

There are a number of other methods and tools that can be used at this stage. Which you use to gain additional insights will depend on the issue, budget and timescale.

  • Card sorting – this is a technique that allows you to understand how your users group content. It’s useful in deciding the structure and navigation of a site.
  • Tree testing – use this to understand whether the current navigation of a site is structured in a way that users find effective.
  • Design comparison – some tools allow you to upload static design variants and get feedback on them.
  • Eye tracking – a method to understand directly where people look, and for how long.

A/B testing

I’m not going to get into the details of A/B testing here. There’s plenty of material online, and my colleague Alex has written an excellent article taking the mystique out of how to run such tests. I’ll just say that you need to be confident in the tools used to present the variants, and in the statistics that give you the results. A well-known tool such as Google Optimize or Optimizely will manage this for you.

The amount of traffic that you get to the page in question will determine how long you need to run the test in order to get a statistically significant result. For very high-traffic sites you can do multivariate testing, where a number of elements are presented in different combinations and the testing tool works out which of those elements have the most beneficial impact.
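To give a feel for the statistics involved, here’s a minimal sketch of a two-proportion z-test – the classical check behind many A/B results. The conversion numbers are hypothetical, and a proper testing tool will do this (and guard against pitfalls a naive test misses) for you:

```typescript
// Two-proportion z-test: is variant B's conversion rate genuinely higher
// than A's, or plausibly just noise?
function zScore(convA: number, visitsA: number, convB: number, visitsB: number): number {
  const pA = convA / visitsA;
  const pB = convB / visitsB;
  const pPooled = (convA + convB) / (visitsA + visitsB);
  const stdErr = Math.sqrt(pPooled * (1 - pPooled) * (1 / visitsA + 1 / visitsB));
  return (pB - pA) / stdErr;
}

// Hypothetical results: A converts 520/10,000 visits, B converts 600/10,000.
const z = zScore(520, 10_000, 600, 10_000);
console.log(z.toFixed(2)); // ≈ 2.46; |z| > 1.96 is significant at the 95% level
```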

If you’re nervous about whether a new design could have a bad impact, you can start by showing it to just a small percentage of visitors and ramp up progressively as you get positive results.
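One common way to manage that ramp-up – sketched below, assuming you have a stable user ID – is to hash each visitor into a fixed bucket, so the same person always sees the same version while you raise the exposure percentage. Again, established testing tools handle this for you:

```typescript
// Map a user ID to a stable bucket in the range 0-99.
function bucketOf(userId: string): number {
  let h = 0;
  for (const ch of userId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return h % 100;
}

// Visitors in buckets below rampPercent see the new design.
function assignVariant(userId: string, rampPercent: number): "A" | "B" {
  return bucketOf(userId) < rampPercent ? "B" : "A";
}

// Start cautiously at 5% exposure, then raise the percentage as results
// stay positive; each user keeps the same variant across visits.
console.log(assignVariant("user-42", 5));
```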

Research methods can help even during a test. If you’re running an A/B test, you can put a survey on each variant to get some qualitative feedback on the two designs and find out some of the reasons why you get the results you do. It can also help to have session replay and heatmaps recording on the variants to increase the depth of insight you have into what’s happening. Just make sure that your survey and replay tools can distinguish which variant they are operating on.
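In practice that usually means passing the assigned variant to those tools as a custom property. The sketch below assumes hypothetical tagSession and setMetadata methods – substitute whatever custom-data API your replay and survey vendors actually expose:

```typescript
// Hypothetical vendor objects – most replay and survey tools expose some
// way of attaching custom properties to a session or response.
declare const replayTool: { tagSession(data: Record<string, string>): void };
declare const surveyTool: { setMetadata(data: Record<string, string>): void };

function tagVariant(variant: "A" | "B"): void {
  replayTool.tagSession({ abVariant: variant });  // filter replays by variant
  surveyTool.setMetadata({ abVariant: variant }); // segment survey answers by variant
}

tagVariant("B"); // call once the testing tool has assigned the visitor
```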

Research instead of A/B tests

Sometimes, because of the way a site is built, it could be very expensive to run an A/B test. For example, you might want to test an alternative structure for your site navigation, but to do so would actually involve restructuring the whole site and running two sites in parallel for a time. There might be resource available to restructure the site, but only to do it once.

In such circumstances you could do some extensive card-sorting and tree-testing exercises so that you had confidence in going ahead with a full rebuild without the luxury of an A/B test.

Post launch

Once you’ve implemented a new design, you go back to the beginning and monitor analytics and customer feedback on an ongoing basis to see if you discover new problems. It’s not uncommon to find pages on sites that were implemented some time ago and are no longer fit for purpose. These can be pages that in-house teams don’t visit much, so some method of monitoring is essential to maintain the customer experience.

Summary

Hopefully it’s clearer now how user research and A/B testing fit together. It’s not that one is more important than the other. They offer different perspectives on the same questions and therefore give more powerful results when combined. Doing this well is something that can dramatically improve business metrics.

At Daydot we’re experts in research and A/B testing. We can create bespoke audit, benchmarking and testing packages to suit your needs. Our goal is to help you gain competitive advantage and demonstrate proof of concept for optimisation to senior stakeholders. If you’d like to talk, get in touch.

(Also if you’re interested in a classification of different research methods, see my earlier article on Customer Research Methods and Tools.)

Nick Gassman

Senior Manager, Research, Design & Tech, Daydot
