In a number of research fields, such as psychology and the social sciences, the term “cognitive bias” describes certain mental prejudices or preconceived points of view that the researcher harbors. Cognitive bias stems from the fact that we usually lack the information we need in everyday situations, and it is actually quite useful, as it helps us make decisions quickly and efficiently. On the other hand, this bias can influence or even distort research results.
Our brain is constantly supplied with a huge amount of information, much of which is not useful to us. Because of this, its first natural reaction is to aggressively filter out every piece of input that doesn’t obviously stand out from the mass. At the same time, we often lack the information needed to assess a situation as a whole. To remedy this, the human brain is excellent at filling the gaps with “fitting” information. As long as these fillers are in line with our individual view of reality, the question of whether they are true is of secondary importance.
How Does Cognitive Bias Influence User Research?
The mechanisms described above are useful for the successful completion of our daily routines and usually aid us in making appropriate decisions. Still, they pose a problem during research, and this includes user, UX, and usability research. Here, cognitive bias has a more negative connotation: it’s seen as a systematic, illogical error in reasoning that restricts our judgments and can negatively influence the validity of whole surveys and observation series.
Generally, the different kinds of cognitive bias can be subdivided into three categories, depending on their source. Cognitive bias can be introduced by the researcher/experimenter, by the participant/test person, or by the experimental situation itself.
Cognitive Bias by the Experimenter
Confirmation Bias: the tendency to exclusively search for, register, focus on, and store information that aligns with one’s opinion. This confirming of one’s own, prejudiced assumptions is often not a conscious decision. For example, if a prototype is tested with the assumption that it should work flawlessly, it can happen quickly that occurring problems are overlooked or disregarded as unimportant. This kind of bias can be avoided by the researcher becoming aware that the ultimate goal of their research isn’t to affirm their own opinion but to listen to the user carefully. On top of this, patterns in the collected data that contradict the researcher’s opinion should be analyzed carefully.
Pygmalion/Rosenthal effect: researchers always have certain expectations concerning the result or course of an experiment, and this can lead to an unconscious display of these expectations. Humans are very good at registering nonverbal cues, no matter how subtle. Even tiny things like changes in the pitch of one’s voice, body posture, or the attitude of the experimenter can give participants hints towards the expected answer or behavior. To remedy this, the person conducting the research should practice the interviews and instructions given during sessions with colleagues (or even outsiders) and get feedback on the neutrality of their own behavior.
Leading questions: especially during face-to-face interviews, it’s not uncommon for the interviewer to phrase questions in a way that suggests a certain answer or at least influences it. Extreme examples of this kind of bias are questions that start with “Don’t you also think that…?”. The question “On a scale of one to ten, how difficult was reaching XY?” influences the participant since it implies that it was, in fact, difficult to reach XY; the question merely allows the person questioned to rate how difficult it was. If questionnaires are used for your research, you should double-check them with this in mind. If you’re doing face-to-face interviews, you should think of neutral ways to phrase your questions beforehand.
Framing: this problem arises during the evaluation of your data. Depending on how you analyze it, the same raw data can lead to very different results. As an example: the statements “30% of users questioned rejected the feature.” and “70% of users questioned desired the feature.” have very different connotations. To prevent this kind of bias, it can help to deliberate why a certain outcome would be considered positive or negative and what the reverse would mean for the current research.
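The framing effect above is easy to demonstrate: both statements are valid summaries of the exact same responses. A minimal sketch (the participant numbers and answer labels are invented for illustration):

```python
# Hypothetical survey result: 50 participants asked about a feature.
# Labels and counts are made up for illustration only.
responses = ["desired"] * 35 + ["rejected"] * 15

total = len(responses)
rejected_pct = 100 * responses.count("rejected") / total
desired_pct = 100 * responses.count("desired") / total

# Two framings derived from identical raw data:
print(f"{rejected_pct:.0f}% of users questioned rejected the feature.")
print(f"{desired_pct:.0f}% of users questioned desired the feature.")
```

Reporting only one of the two lines is already an analytical choice; stating both, or at least being explicit about why one framing was chosen, helps keep the evaluation neutral.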
Bias by the Participants
Hawthorne effect: the Hawthorne effect is pretty much the opposite of the Rosenthal effect described above. It states that participants are aware of the fact that they are being observed and change their behavior accordingly. Users who are recorded while using an app are extra careful to avoid mistakes; after all, they don’t want to make a fool of themselves. You could say that participants are usually trying to “ace the test”. A possible consequence is that the observed behavior differs noticeably from how participants would act in an unobserved, everyday situation.
Social desirability: participants tend to answer in a way they deem socially acceptable instead of stating their true feelings and opinions. In situations where participants are part of a group (e.g. during focus groups), they often try to avoid rejection and confrontation. There’s also a tendency to give polite answers when questioned by an interviewer. If your experimental setup contains controversial questions, you should assure participants that you’re only interested in their opinions and that they won’t be judged in any way. It can also be helpful to mention again that the session and the data gathered will be treated confidentially.
The principle of Least Resistance: this kind of bias describes participants’ tendency to answer in a way that they assume will terminate the test as quickly as possible. The effect is closely related to so-called “survey fatigue”. Especially if you use a large number of similar tasks or question formats, you should keep the test or interview as short as possible: focus on a relatively small number of tasks or questions and cut additional, less important questions from the questionnaire.
Sampling Bias: since you won’t ever be able to recruit the whole relevant population for your interview or experiment, the common practice is to do research using a random sample that is considered representative of the population. However, sometimes certain parts of the user group can be unintentionally excluded from the examined sample. Sampling bias can result from self-recruitment or from certain sub-groups of the population not being known. To avoid this kind of bias, you should try to characterize your target group as precisely as possible and sample accordingly.
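One common way to sample “accordingly” is stratified sampling: you recruit from each known sub-group in proportion to its share of the population, so no group is accidentally over- or under-represented. A minimal sketch, assuming a hypothetical user base whose sub-group proportions are invented for illustration:

```python
import random

# Hypothetical user base split into known sub-groups ("strata");
# the group names and proportions are invented for illustration.
population = (
    [{"group": "daily user"}] * 600
    + [{"group": "occasional user"}] * 300
    + [{"group": "new user"}] * 100
)

def stratified_sample(people, key, n):
    """Draw n participants whose sub-group shares mirror the population's."""
    by_group = {}
    for person in people:
        by_group.setdefault(person[key], []).append(person)
    sample = []
    for members in by_group.values():
        # Each stratum contributes in proportion to its population share.
        share = round(n * len(members) / len(people))
        sample.extend(random.sample(members, share))
    return sample

sample = stratified_sample(population, "group", 20)
```

For 20 participants this yields 12 daily, 6 occasional, and 2 new users, matching the 60/30/10 split of the (made-up) population. Self-recruited samples, by contrast, let the most motivated sub-group dominate.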
Time-related bias: even the time of day or day of the week at which research is conducted can lead to skewed data. On the one hand, this can lead to parts of the target group being systematically excluded from taking part in the research (e.g. working people if you only test during the morning on workdays); on the other hand, participants’ behavior can also differ significantly. For example, you can assume that the general mood on Monday mornings will be different than on Fridays shortly before closing time.
Task-selection Bias: this bias can be traced back to the fact that most participants will try to figure out the purpose of your study. Consequently, they might assume that the mere fact that a task is part of a usability test means that it can be solved, and they will probably try to solve it more persistently. On their own, using their own devices, users might give up on tasks much more quickly, since they can’t be sure whether it is even possible to reach their goal on the given website or application.
Consequences for UX/Usability Research and User Testing
Breaking away from cognitive bias completely is difficult, if not impossible. As mentioned before: if you tried to master your everyday life without relying on cognitive bias, you’d likely spend an unnecessarily large amount of time trying to make certain decisions. On top of that, not all kinds of cognitive bias explained above occur regularly. User researchers should be aware of their own mental models and the heuristics they use. This helps them notice when their own perception is being influenced by these factors and allows them to actively counteract skewed data.