10 Kinds of Cognitive Bias in User Research


The term “cognitive bias” describes systematic patterns of deviation in judgment that have been studied in a number of research fields such as psychology, the social sciences, and economics (famously by Amos Tversky and Daniel Kahneman). Cognitive bias stems from the fact that we usually lack the information we need in everyday situations. It’s actually pretty useful, as it helps us make decisions quickly and efficiently.

So why talk about cognitive bias in user research? Because in a research setting, these same mental shortcuts can influence or even distort results. In this article, we’d like to point out 10 kinds of cognitive bias that UXers should be aware of when conducting user research.

Our brain has to process a lot of information, much of which is not useful to us. Because of this, the first natural reaction is to aggressively filter out every piece of input that doesn’t obviously stand out from the mass. At the same time, we often lack the information necessary to assess the overall situation.

To remedy this, the human brain is excellent at filling the gaps with “fitting information”. As long as these fillers are in line with our individual view of reality, the question of whether they are true is of secondary importance.

What is the influence of cognitive bias in user research?

These mechanisms are useful for the successful completion of our daily routines and usually help us make appropriate decisions. Still, they pose a problem during research, and that includes user, UX, and usability research. Here, cognitive bias has a more negative connotation: it’s seen as a systematic, illogical error in reasoning that skews our judgment and can undermine the validity of entire surveys and observation series. In the following, we subdivide the different kinds of cognitive bias into three categories, depending on whether they are introduced by the researcher/experimenter, by the participant/test person, or by the experimental situation itself.

Cognitive bias by the researcher

1. Confirmation bias

Confirmation bias is the tendency to exclusively search for, register, focus on, and store information that aligns with one’s own opinion. Confirming your own preconceived assumptions is often not a conscious decision: problems that occur can quickly be overlooked or dismissed as unimportant, e.g., when testing a prototype under the assumption that it should work flawlessly. Researchers can avoid this kind of bias by reminding themselves that the ultimate goal of their research isn’t to affirm their own opinion but to listen carefully to the user. On top of this, a researcher can deliberately analyze the patterns in the collected data that contradict her own opinion.

2. Pygmalion/Rosenthal effect

Researchers always have certain expectations concerning the result or course of an experiment, and they can display these expectations unconsciously. Humans are very good at registering nonverbal cues, no matter how subtle. Even tiny things like changes in the pitch of the voice, body posture, or the general attitude of the experimenter can give participants hints about the answer or behavior the experimenter expects. To counter this, the person conducting the research should practice interviews with colleagues and get feedback on the neutrality of their own behavior.

3. Leading questions

Especially during face-to-face interviews, it’s not uncommon for the interviewer to phrase questions in a way that suggests or influences a certain answer. Extreme examples of this kind of bias are questions that start with “Don’t you also think that…?”. The question “On a scale of one to ten, how difficult was reaching XY?” influences the participant since it implies that it was, in fact, difficult to reach XY; it merely allows the test person to rate how difficult it was. If you use questionnaires for your research, you should double-check them with this in mind. If you’re doing face-to-face interviews, consider preparing neutral, open phrasings beforehand, such as “How would you describe… ?”.

4. Framing

This problem arises during the evaluation of your data and the presentation of your findings: the same raw data, stated differently, can suggest different conclusions and different decisions. As an example, the statements “30% of users rejected the feature.” and “70% of users desired the feature.” have very different connotations even though they describe the same result. To prevent this kind of bias, it can help to deliberate why a certain outcome would be considered positive or negative and what the reverse would mean for the current research.
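To make this concrete, here is a minimal sketch with invented numbers, showing that both framings above can be produced from the identical set of responses:

```python
# Minimal sketch with invented data: the same raw responses framed two ways.
responses = ["rejected"] * 9 + ["desired"] * 21  # 30 hypothetical participants

rejected_share = responses.count("rejected") / len(responses)
desired_share = responses.count("desired") / len(responses)

# Both statements summarize the identical data set:
print(f"{rejected_share:.0%} of users rejected the feature.")  # 30% of users rejected the feature.
print(f"{desired_share:.0%} of users desired the feature.")    # 70% of users desired the feature.
```

Reporting both figures side by side, or at least being explicit about the denominator, makes the framing visible to your stakeholders.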

Bias by the participants

5. Hawthorne effect

The Hawthorne effect is pretty much the opposite of the Pygmalion/Rosenthal effect: participants are aware of the fact that they are being observed and change their behavior accordingly. Users who are recorded while using an app are extra careful to avoid mistakes; after all, they don’t want to make fools of themselves. You could say that participants usually try to “ace the test”. Possible effects of this behavior are users reading hints and help texts that they wouldn’t normally even skim. Because of this, you should always explicitly tell users that you are only interested in their opinions and natural behavior and that there are no right or wrong answers or ways to finish the task. Some kind of warm-up before the test itself, to help the participant get comfortable with the test subject, can help as well.

6. Social desirability

Participants tend to answer in a way they deem socially acceptable instead of stating their true feelings and opinions. In situations where participants are part of a group (e.g., during focus groups), they often try to avoid rejection and confrontation, and in interviews they tend to give polite answers. If your experimental setup contains controversial questions, you should assure participants that you’re only interested in their opinions and won’t judge them in any way. It can also be helpful to mention again that the session and their data are treated confidentially.

7. The principle of least effort

Analogous to the “path of least resistance” in physics, this kind of bias describes participants’ tendency to answer in a way that they assume will end the test as quickly as possible. This effect is closely related to so-called “survey fatigue”. Especially if you repeat similar tasks or question formats, you should keep the test or interview as short as possible: focus on a relatively small number of tasks or questions and eliminate additional, less important questions from the questionnaire.

Situational effects

8. Sampling bias

Since you won’t ever be able to recruit the whole relevant population for your interview or experiment, the common practice is to do research with a random sample that is representative of the population. However, certain parts of the user group can sometimes be unintentionally excluded from the examined sample. Sampling bias can result, for example, from self-recruitment or from certain sub-groups of the population not being known. To avoid this kind of bias, you should try to define your target group and its characteristics as precisely as possible and sample accordingly.
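One way to sample accordingly is proportional (quota) sampling. Below is a minimal sketch, assuming you already know the rough share of each sub-group in your user base; all group names, participant IDs, and numbers are invented for illustration:

```python
import random

# Assumed (invented) sub-group shares and a hypothetical recruiting pool per group.
population_shares = {"new users": 0.5, "power users": 0.2, "occasional users": 0.3}
recruiting_pool = {
    "new users": ["p01", "p02", "p03", "p04", "p05", "p06"],
    "power users": ["p07", "p08", "p09"],
    "occasional users": ["p10", "p11", "p12", "p13"],
}

def draw_sample(total_participants: int) -> list[str]:
    """Draw participants from each sub-group in proportion to its population share."""
    sample = []
    for group, share in population_shares.items():
        quota = round(total_participants * share)
        sample.extend(random.sample(recruiting_pool[group], k=quota))
    return sample

print(draw_sample(10))  # 5 new users, 2 power users, 3 occasional users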

9. Time-related bias

Even the time of day or day of the week at which you conduct the research can lead to bias. On the one hand, this can lead to parts of the target group being systematically excluded from taking part in the research (e.g., working people if you only test during the morning on workdays). On the other hand, participants’ behavior can also differ significantly. For example, you can assume that the general mood on Monday mornings will be different than on Fridays shortly before closing time.

10. Task-selection bias

You can trace this bias back to the fact that most participants will try to figure out the purpose of your study. Consequently, they might assume that the mere fact that a task is part of a usability test means the task can be solved, and will try to solve it more persistently. On their own, using their own devices, users might give up on the same tasks much more quickly since they can’t be sure whether it’s even possible to reach their goal on the given website or application.

Consequences for UX/usability research and user testing

Breaking away completely from cognitive bias in user research is difficult, if not impossible. If you tried to get through your everyday life without the influence of cognitive bias, you’d likely spend an enormous amount of time on even simple decisions.

On top of that, not all kinds of cognitive bias occur regularly. User researchers should be aware of their own mental models and heuristics. This helps you notice when these factors influence your own perception and allows you to actively work against skewed or misinterpreted data. Likewise, collaborate with your fellow researchers: an additional pair of eyes can help you avoid your personal biases.

What strategies do you have to avoid cognitive bias in your research?

consider.ly is a fast-growing tool for qualitative data analysis and a UX research repository.


Mara Weingardt

Mara is interested in all topics around user research and user testing, as well as usability and UX.