A lot of words get bandied about when it comes to 'finding out' what customers do and want. Generally this 'finding out' comes under the heading of 'insights', and there is a multitude of methods for generating them. I suspect many people aren't at all clear about these different methods, even if they use the outputs in their jobs. Maybe their work brings them into contact with analysts and researchers who talk fast and don't explain much, but who expect their colleagues to immediately grasp what they are doing, why they are doing it, and what the results mean.
That's a bit of a generalisation, I know, but I have found that even people who work quite closely with insight providers can still find it difficult to understand why one method is picked over another to answer a given question.
If you share some of that confusion, then I’ll try to help.
It's important to be clear from the outset that there’s no single insight method that’s better than the rest. It depends on what you want to find out, and sometimes it’ll depend as well on how much money or time you have. Often, the best insights are gained from a combination of methods.
One of the key distinctions between types of insight is the split between qualitative (qual) and quantitative (quant). Quant is to do with numbers; qual is broadly about what people say. Quant methods analyse numbers, typically using a range of statistical techniques, and may need sufficiently large sample sizes if measures of statistical significance are required. Qual methods most commonly gather words, but can also use other media such as photos, videos or sound. Qual research often uses small sample sizes, but if enough data is collected it's possible to generate numbers to which statistical methods can also be applied (and yes, I know there are statistical methods for small data sets too).
Site analytics tools such as Google Analytics or Adobe Analytics are quantitative methods. They deal with the analysis of numbers and statements of fact about behaviours - for example how many people who landed on the homepage came from paid search and then went on to make a purchase (or not).
Interview methods such as usability trials and focus groups typically generate qualitative insights, as they result in a description of what people think, what issues they had, what they say they like, how they react to concepts and so on. If enough interviews are done it's possible to produce some statistically meaningful analyses – such as whether one age group liked a picture on a page more than another.
Another key distinction is between self-reported and observed behaviours. Web analytics, for example, are quantitative, and they also observe behaviours and technology use. Analytics allow us to virtually observe what people have done and the technology they used on our site – we can (virtually) see the behaviour from the digital footprints that are left. By contrast, a survey asking respondents to use a rating scale is also a quantitative method, but it is self-reported. If someone rates themselves as a 10 in likelihood to recommend your service that number can be quantitatively analysed along with all the other responses, but it doesn’t mean that we have observed that person making a recommendation. They may or may not actually recommend you when it comes to it.
Observed behaviours are typically more accurate in that they have happened in reality. X number of people did in fact land on the homepage from paid search and went on to convert. When we do usability testing with a prototype there is a mix of methods. We can observe how respondents actually use the interface, and we can see whether they complete the task or not. At the same time we ask them to self-report on the reasons why they are doing what they do, and what other features would be useful. It’s useful to hear what they say although we have to bear in mind that there are a number of factors that cause people to be inaccurate when self-reporting.
This inaccuracy is why there’s sometimes a big difference between an opinion poll and how people actually vote. When reporting their own behaviours, thoughts and feelings (in the past, present and future) it’s naturally human to want to please others and some may offer a reply that they think the interviewer will like. They may also not reply truthfully if they feel it’s a touchy subject, such as when declaring how much alcohol they drink or whether they will vote for a controversial candidate.
And so we see that reported methods are only as valid as the truthfulness and accuracy of the report. We can see on video surveillance that someone did in fact commit a crime. When asked in court why they did it the perpetrator may lie, but even if they intend to tell the truth their memory may be inaccurate, or they may not understand their own motivations. Humans aren’t always good at understanding and explaining their own behaviour and we’re all good at post-rationalising.
Even with all these caveats, reported behaviour has the great benefit of being able to explain WHY someone behaved, or plans to behave, in a certain way. It's often stated that quant tells you WHAT happened and qual gives you the WHY. That's because quant is frequently an observed method and qual is frequently a reported method. But it's more accurate to say that observed methods tell you what happened and reported methods tell you why.
Taking these two dimensions we can create a 2x2 matrix (that’s so beloved in business circles) and we can position different methods into the quadrants.
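Using the examples discussed above, one way to sketch that matrix (the placements are indicative, not definitive) is:

                    Observed                          Self-reported
    Quantitative    Site analytics                    Surveys with rating scales
                    (e.g. Google Analytics)           (e.g. likelihood to recommend)
    Qualitative     Usability testing                 Interviews and focus groups
                    (watching task completion)        (asking people why)

As noted above, usability testing with a prototype really spans both columns: we observe what respondents do while also asking them to report why they are doing it.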
My intent in writing this short article is, as I stated up front, to help the confused get a better understanding of the relative merits, whys and wherefores of different insight methods. The matrix is just an aid to that end – a way to give some structure to how you think about it. Depending on the exact context and methodology, some of the methods might shuffle around a little, and this isn't a nailed-down definitive model: modern neurological methods of insight can blur the lines.
What I’ve described here is a way to think about what different insight methods give you. My intent is that it’s useful for the stakeholder who is working with research professionals and just wants to make sense of what all these things are good for.
The next part of the story, which I'll pick up in my next article, is how you actually decide which method to use in a given context. That's where we'll overlay the constraints of budget, time and stage in the project lifecycle to guide us to the best method for your circumstances.
If you want help with research or optimising digital experience for your customers drop us a line to email@example.com
Head of Research, Daydot