In the last blog post, I talked about ethical automation, which is about ensuring that automated decisions and actions are aligned with ethical principles. A key point was that an ethical automated system should also help humans to act ethically. Clearly, this involves many challenges, and it is important that the system design is informed by insights from psychology. Relevant topics include theories of behaviour change and emotion regulation. This post focuses on cognitive biases and identifies some that are relevant to ethical decision-making. The next post will discuss mitigation strategies for these biases, and how to apply them to technology design.
Background
Cognitive biases are errors or oversimplifications in human reasoning. Daniel Kahneman’s famous book “Thinking, Fast and Slow” presents two types of thinking - called “system 1” and “system 2”. System 1 is fast, effortless and intuitive. It is good at pattern recognition, imagination and associative thinking. System 2 is slow and is good at reasoning and reflection, but it requires effort. System 1 produces biases because it takes short-cuts in decision-making, meaning that some detailed reasoning is omitted. The short-cuts conserve energy and enable fast reactions, but they can cause serious errors.
The balance between system 1 and system 2 is not necessarily a trade-off, since biases do not always cause problems. For example, it is often necessary to make a decision that is “just good enough” without the need to optimise every detail. (This has been called “satisficing” by Herbert Simon). It may actually be quite bad to optimise every detail (more on that in later blog posts).
Even if some biases are helpful some of the time, many biases can cause problems for ethical decisions. I have grouped these ethically relevant biases into four categories as follows:
Biases causing unfairness towards others
Biases affecting self-understanding
Biases distorting perception
Biases favouring established assumptions
The categories are not precise and there may be some overlap between them, but I think they are useful. They are elaborated in the sections below. The purpose is not to give a complete list of biases, but instead to provide some representative examples. Each bias name has a link to a definition from a reference source.
Biases causing unfairness towards others
These are social biases relating to self and other. They include:
Just World Fallacy (also called Just World Hypothesis) - the assumption that if adversity happens to a person (e.g. crime or poverty), it is because they were negligent or morally deficient, so they deserve the adverse circumstances. Clearly this causes unfair judgment, such as victim blaming.
Fundamental Attribution Error (also called “Actor-Observer Asymmetry”) - this is when an observer attributes another person’s actions to their personality traits, but attributes their own similar actions to the situation they are in. For example, a person who forgets a detail at work might attribute their mistake to work pressure, but if someone else makes the same mistake, they are considered to be “lazy”.
Egocentric Bias - failure to understand the world from the viewpoint of others. This bias causes us to assume that other people see the world as we do, including the way we see ourselves. It is not the same as self-serving bias (which is about biased self-perception), though the two may be related.
In-Group Bias - the tendency to focus on the positive aspects of one’s own group but on the negative aspects of other groups; for example, the groups might be different ethnicities.
Biases affecting self-understanding
These biases can cause unfairness and they could have been included in the above category. I have listed them separately because the errors are more specifically about oneself, and not directly about unfair judgment of others:
Self-serving Bias (or self-enhancement bias) - the tendency to emphasise one’s own positive contributions to a good situation or outcome, and to blame others when things go wrong.
Over-confidence Bias - over-estimating one’s capabilities or expertise (related to the Dunning-Kruger effect).
Bias Blind Spot - failure to recognise one’s own biases, but noticing them in other people.
Biased self-understanding can be countered by seeking constructive feedback and developing metacognitive skills (thinking about one’s own thinking), which will be a topic in later posts.
Biases distorting perception
Biases relating to perception are ethically relevant because they cause us to neglect important information, particularly concerning others. They also make us vulnerable to manipulation. Examples of these biases are listed below:
Availability Heuristic - bias towards information that is visible, or things that can be immediately remembered. Kahneman called this “What you see is all there is” (WYSIATI), though it is not always about visibility - it could be about training or automatic habits of thinking. Related biases include the Availability Cascade, which is a bias towards statements that are repeated.
Framing - bias towards options that are presented with positive associations.
Anchoring - bias towards the option that is presented first.
Automation Bias - the tendency to accept decisions from an automated system because it is believed to be error-free.
Each of these biases is related to the understanding of complex information, and to the tendency to do what is easiest, particularly under time pressure or stress. WYSIATI can cause serious logical errors if the decision-maker assumes that only what they can “see” (or what they know about) exists. For example, medical practitioners might be unaware of a rare disease and wrongly conclude that a patient who has it is merely suffering from anxiety.
I have included automation bias here because it often takes the form of selecting the recommended answer, which resembles the availability heuristic. But it may also involve more complex issues, such as lack of experience or training. See, for example, this study on clinical decision support systems.
Biases favouring established assumptions
Established assumptions can be individual or group assumptions. Biases of this sort are ethically relevant because they can cause a lack of critical thinking and a failure to raise concerns in difficult situations. They include the following:
Confirmation Bias - the tendency to focus on information that confirms beliefs and to ignore conflicting information.
Conservatism Bias - reluctance to change beliefs when presented with new evidence.
Groupthink - a tendency to agree with others in the group when making decisions.
Authority Bias - the tendency to give weight to someone’s opinion (or to be influenced by them) because they have authority, rather than because of the merits of the opinion itself.
Groupthink and Authority Bias are particularly dangerous in organisations, as they can lead to abuse or corruption. For example, Authority Bias was a factor in the infamous Milgram Experiment, in which participants followed orders even when they believed they were inflicting pain on another person.
Summary
Cognitive biases are errors and short-cuts in human reasoning. Not all biases are problematic, as they can enable a quick decision that is “good enough” without optimising unnecessary details. However, some biases can cause problems in ethical reasoning. In particular, these include errors in our understanding of ourselves and others, the tendency to neglect things that are not immediately “visible” (WYSIATI), and the reluctance to challenge established assumptions. WYSIATI is particularly problematic when people assume that unknown things are impossible (as can happen in medical decisions about patients with rare diseases).
The next post will discuss bias mitigation strategies.
Sources used
There is a long history of psychological research on cognitive biases. In the links within the text, I have used definitions of biases from the American Psychological Association dictionary where available (not all biases are listed there, though). Sometimes I have referenced a Wikipedia article when it is a useful source. Although Wikipedia is not recommended by universities, it can be more reliable than other internet sources when an article has a large number of editors (see this article by academics presenting this argument). I have avoided popular psychology sites, as they are often not rigorous.
Note: This blog post is human-produced, without the use of any generative AI tool.