A Guide to Research Methodologies

Laura Carroll
Published in Medium.design · 8 min read · Apr 10, 2020


Photo by David Travis on Unsplash

Part of the job of a design researcher is making sure everyone on the product team is comfortable participating in research, and even conducting their own. As senior design researcher at Medium, I put a lot of effort toward providing the design team with useful tools and frameworks that help them develop that level of comfort and empower them to get the information they need for their work.

Whatever your question, the right research method is essential to getting the answers you need, and so I’ve outlined some of the most popular research methods, when to use them, and how.

  • Types of Research covers the differences between quantitative, qualitative, attitudinal, and behavioral research.
  • Methods covers a set of methods and when to use them. It’s not an exhaustive list, but some of the most common combined with some of my favorites. I imagine it will grow!
  • Participant Count covers how many people to include for each method mentioned.

Types of Research

A given project should include two or more complementary research methods, because different methods are best employed for different objectives and at different phases of the product development lifecycle.

Research methods as they relate to the product lifecycle may look something like this:

  • Field studies, surveys, diary studies, and user interviews during the discovery and ideation phase.
  • Card sorting, concept testing, usability, and acceptance testing during design.
  • Beta testing, diary studies, A/B testing, or surveys to measure success.

Research used early in the product lifecycle is typically generative or exploratory, with the objective of understanding the opportunity or problem space, while research used later in the lifecycle is typically evaluative, measuring the efficacy of our work.

Separate from sprint work will be foundational research, which is not tied to a given project or sprint but used to deepen our overall understanding of users.

In addition to being generative or evaluative, research is either quantitative or qualitative and attitudinal or behavioral.

Chart: Nielsen Norman Group

This graph illustrates how different types of research methods address different types of research needs. Methods complement each other best when they fall into different quadrants. This is evidenced by the common user interview + usability test combination, wherein we conduct attitudinal and then behavioral research.

In design research particularly, it’s important to always use at least one behavioral method during a project so that we are not over-indexing on what people say they want.

Methods

This section covers the following research methods and when to use them:

  • Field studies
  • Blueprinting
  • User interviews
  • Concept Testing
  • Usability Testing
  • Card Sorts
  • Surveys
  • Acceptance Testing

Let’s go!

Field Studies

Qualitative, Behavioral

Use this method when:

You want to understand the user and their actions, environment, pain points, etc., in order to inform upcoming product work — or even upcoming research.

Field studies range from totally unobtrusive and observational to extremely participatory. Some of my own past field studies have entailed:

  • Observations: Shadowing editorial meetings to better understand how a story is pitched, budgeted, published, and promoted (purely observational).
  • Contextual inquiries: Watching producers build a piece of content from start to finish — and then asking them a set of questions afterward — to better understand where they struggled and why.
  • Acting on behalf of the user: Running check-in for a series of user events to better understand how a host app can reduce stress at the door (fact: running an event door is a nightmare).

Field studies are best used in the early stages of discovery and are great for building empathy and getting ideas.

User Interviews

Qualitative, Attitudinal

Use this method when:

You want to know what people think about something or to get a sense of their general needs and wants, typically during exploration of an idea, problem, or opportunity space.

User interviews are a really low-effort way to better understand our users, and we’re already doing a lot of them, which is great! What’s important here is to make sure you are speaking to the right mix of people — more on that in another post.

Blueprinting

Qualitative, Attitudinal

Use this method when:

You want to better understand workflows or other granular activities — typically of a group — and potentially compare them to other groups.

Example research questions include:

  • What is the end-to-end workflow of an editorial team?
  • How does the process of one group of users compare to the process of another?

Blueprinting requires a higher level of commitment from both parties than the other methods listed here, but it’s one of my favorite ways to explore process as well as attitudes toward process.

Diary Studies

Qualitative/Quantitative, Attitudinal

Use this method when:

You want to better understand user actions, experience, and sentiment over time, ranging from a few days to a month or even longer.

Examples include:

  • Asking users to log their activities, thoughts, and pain points throughout the work day.
  • Asking users to record what they’re looking for, why, and whether they found it every time they execute a search.

Diary studies are really versatile and can be used either generatively when you’re trying to understand a problem or when you’re evaluating a design, such as during a beta test or even post-launch. Depending on how structured the questions or prompts are, diary studies can also provide some valuable tallies (e.g. None of our eight diary participants did [x]).

Usability Testing

Qualitative/Quantitative, Behavioral

Use this method when:

You have a clickable prototype and you want to validate it in enough time to make changes to the flow. It’s important to get ahead of any major usability issues by testing with users as soon as possible (see also: concept tests, below).

In person or remote? Moderated or unmoderated?

In a perfect world we’d all be running moderated, in-person usability tests, but time and location don’t always permit. I tend to follow these guidelines:

  • If the user is local and you are able to meet them in their environment, do so!
  • If the user is not local, run a remote moderated test, or
  • If your prototype is very straightforward and you are confident that the user can easily complete most of your test’s tasks, run a remote unmoderated test.

Usability tests can and should be quantified. I like to add task results to a success, partial success, or failure table and then calculate the success rate.

Table: Nielsen Norman Group

The success rate formula is:

success rate = (S + (PS × 0.5)) / total tasks

where S is the number of full successes and PS is the number of partial successes. For example, the success rate from the table above is:

(9 + (4 × 0.5)) / 24 ≈ 46%

This is a great way to measure one prototype against another prototype, or measure usability improvements to a given prototype as you iterate.
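If you run usability tests regularly, it's worth scripting this calculation so every test is scored the same way. A minimal Python sketch (the `success_rate` helper is my own illustration, not part of any testing tool), using the same numbers as the example above:

```python
def success_rate(successes: int, partials: int, total_tasks: int) -> float:
    """Usability success rate: full successes count as 1, partials as 0.5."""
    return (successes + 0.5 * partials) / total_tasks

# 9 full successes and 4 partial successes across 24 attempted tasks:
rate = success_rate(9, 4, 24)
print(f"{rate:.0%}")  # prints 46%
```

Tracking this number per prototype iteration gives you a simple trend line for whether your usability fixes are actually working.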

Concept Testing

Qualitative/Quantitative, Attitudinal

Use this method when:

You have a very early concept (or possibly two that are quite different) with little to no interaction (possibly on paper!), and you want to test the direction(s) before taking any next steps.

Example research questions include:

  • Do users understand the notion of [x]?
  • Which sketch better matches the user’s mental model?

Note that it’s very important to ask the right questions here — since the user won’t have much to do, concept tests run the risk of devolving into pure opinion gathering. Create a script that gets to the core functionality of the design, and keep concept tests short.

Surveys

Quantitative, Attitudinal

Use this method when:

  • You want to gather preliminary information about users and their experience at the start of a project.
  • You want to measure the success of something you’ve recently released.
  • You want to measure satisfaction at scale and/or over time and set benchmarks accordingly.*

*This is a much larger effort and isn’t tied to the product development lifecycle, but does provide a lot of direction. My favorite benchmark survey is Google’s Happiness Tracking Survey — more on that in another post.

Card Sorts

Quantitative, Attitudinal

Use this method when:

You need to understand how users prioritize, sort, or classify a set of items.

Example research questions include:

  • Which Heated articles do readers think belong in Nutrition, Agriculture, and so forth?
  • Which permissions do pub owners think apply to a given role type, and what would they label those roles?
  • Which of the following features or capabilities do users want most or first?

Card sorts can be open or closed (or both).

In a closed card sort, users are given a set of items and asked to sort them into pre-labeled columns, such as Nutrition, Agriculture, etc. In an open card sort, users group the items and then label the columns themselves.

You can also run a card sort that includes both by giving users a set of predefined columns as well as some that they are able to label themselves.

Card sorts can be done in person using index cards or online using a tool like Optimal Workshop.

Acceptance Testing

Qualitative/Quantitative, Behavioral

Use this method when:

You’ve got a near complete prototype (or even beta state) and you want to know whether any minor changes need to be made.

Example research questions include:

  • Can users successfully complete all tasks?
  • Are the success criteria met?

Remote unmoderated testing is usually fine here since the tasks should be pretty straightforward at this point. Quantify these tasks using the success rate mentioned above.

Participant Count

User interviews and concept, usability, or acceptance tests.

The rule of thumb is 5–8 users because you should start seeing patterns after that, but personas (or user groups or characteristics in absence of personas) should be taken into account here. For example, if you want to speak to pub editors and pub writers, try to speak to five of each.

This doubles the number of interviews, of course. If you are tight on time and trying to represent different user groups, start with three of each and add more if patterns are not emerging.

It’s important to speak to more users if you’re getting wildly different results. And if you reach the 5–8 interviews and still see no patterns, you might actually be talking to the wrong people or not segmenting your user groups properly.

Card Sorts

Recommendations here range from 15 to 30 participants, so start with 15 — and the more the merrier!

Surveys and sample size

If you are using surveys to gather some information at the start of a project then sample size is probably not terribly important. But if you are using surveys to measure satisfaction at scale or make other major conclusions about users, having the right sample size and resulting confidence level is very important.

If you ever need to figure out your sample size, this calculator by SurveyMonkey is a great resource.
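If you'd rather see the math behind such calculators, here's a sketch in Python. It assumes the common textbook approach — a worst-case proportion of p = 0.5 plus a finite-population correction — which matches SurveyMonkey's output for typical inputs, though it isn't their published implementation:

```python
import math

def sample_size(population: int, confidence: float = 0.95,
                margin: float = 0.05) -> int:
    """Required sample size for estimating a proportion.

    Uses n = z^2 * p(1-p) / e^2 with worst-case p = 0.5,
    then applies a finite-population correction.
    """
    # z-scores for common confidence levels
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    p = 0.5
    n = (z ** 2) * p * (1 - p) / margin ** 2
    # finite-population correction
    adjusted = n / (1 + (n - 1) / population)
    return math.ceil(adjusted)

# e.g. a user base of 10,000, 95% confidence, ±5% margin of error:
print(sample_size(10_000))  # 370
```

Note how quickly the required sample plateaus: beyond a few thousand users, population size barely changes the answer, which is why a few hundred responses is usually enough even for very large products.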

Summary

Hopefully this article gives you some method inspiration! Feel free to leave responses if you have comments or questions.
