More and more experiences use machine learning and artificial intelligence to help users make decisions about all aspects of their lives. To date, this has often been considered a “good thing.” However, misplaced reliance on ML/AI can have results ranging from the irritating to the disastrous, from picking a boring movie to watch to failing to bring faulty machinery into the shop. Careful consideration needs to be given to how users will evaluate a tool’s decision and incorporate it into their own decisioning process.

ACM Queue just released a paper titled The Effects of Mixing Machine Learning and Human Judgment, in which researchers tested the efficacy of having humans decide cases with the support of analytics.

The paper used data and tools for recidivism risk assessment. Similar analytical tools could be used for decisioning in business (assessing credit risk, diagnosing medical conditions, or deciding whether to bring equipment into the shop) or for consumers (which stocks to buy, what show to watch).

As machine learning and artificial intelligence tools are increasingly used to provide algorithmic assessments for decision-making, it is important to understand how algorithmic scores act as anchors that induce cognitive bias.

Human vs Computer decisioning

The following abstract from a paper on collaborative human-computer decision making sums up the relative strengths of human versus computer decision making.

Computer optimization algorithms can only take into account those quantifiable variables identified in the design stages that were deemed to be critical. In contrast, humans can reason inductively and generate conceptual representations based on both abstract and factual information, thus integrating qualitative and quantitative information. While humans are not able to integrate information as quickly as a computer and are sometimes susceptible to flawed decision making due to biased heuristics such as anchoring and recency (Tversky & Kahneman, 1974), their ability to leverage inductive reasoning and effective heuristics such as bounded rationality (Simon et al., 1986) and fast frugal decision making (Gigerenzer & Todd, 1999) can compensate for optimization algorithms’ inherent limitations. — Cummings.

When designing, we want to provide experiences that enhance the user’s ability to participate in the decision-making process (inductive reasoning and conceptual representation) while minimizing both the user’s biases and the tool’s. The tool’s biases stem in part from the fact that it does not take into account variables outside of its design.

How do users interpret decisioning tools’ feedback?

The ACM Queue paper provides insights into the user’s consideration of the decision tool’s input. These “categories” of insight can be used to understand how much influence, or bias, a tool exerts.

[R]esearch in psychology implies that algorithmic predictions may influence humans’ decisions through a subtle cognitive bias known as the anchoring effect: when individuals assimilate their estimates to a previously considered standard. — Vaccaro.

If we break the user’s decision-making process into the Initial Thought, the Considerations used to reach a decision, and the actual Decision action, we can look at how decision tools influence each stage.

Rely Heavily — The user assumes the tool is correct. They may not trust their own rationale, or may not evaluate the decision fully.

Deference — Similar to Rely Heavily, the user trusts the tool. They may start with an unbiased initial thought but defer to the system when weighing their considerations and supporting their decision.

Starting Point — When the user takes the tool’s input as their starting point, their initial thought is colored by the tool’s bias and considerations of their own may be minimized.

Tipping Point — When the user is not sure of a decision, they use the tool to pick which way to go. Their initial thought and considerations may be unbiased, but the tool gets the final say.

Guideline — The user may have an uncolored initial thought but considers the tool’s input and uses it as a guideline, which puts up guardrails that push the user toward the tool’s decision.

Validation — The user starts with an unbiased initial thought and considerations, but when they make their decision they check it against the tool. If the tool disagrees, they reevaluate. This reevaluation gives the tool’s decision additional weight.

Factor — To the extent possible, the tool’s decision is just another factor the user considers, carrying no more or less weight than the other things the user weighs.

Ignore — User does not factor the tool in at any part of their process.
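One rough way to think about this spectrum is as a weight the user (implicitly) places on the tool’s score when forming a final judgment. The sketch below is purely illustrative and not from the paper: the categories, the linear-blend model, and the specific weight values are all assumptions made for the sake of the example.

```python
# Hypothetical model: treat each reliance category as a weight w on the
# tool's score, so that  final = w * tool_score + (1 - w) * user_score.
# w = 1.0 means the user adopts the tool's score outright (Rely Heavily);
# w = 0.0 means the tool is ignored. All weights here are illustrative.
RELIANCE_WEIGHTS = {
    "rely_heavily":   1.0,   # tool's score taken as correct
    "deference":      0.9,
    "starting_point": 0.7,
    "tipping_point":  0.6,
    "guideline":      0.5,
    "validation":     0.3,
    "factor":         0.15,  # one input among many
    "ignore":         0.0,
}

def final_judgment(user_score: float, tool_score: float, category: str) -> float:
    """Blend the user's own estimate with the tool's, per reliance category."""
    w = RELIANCE_WEIGHTS[category]
    return w * tool_score + (1 - w) * user_score

# Example: the user privately estimates a risk at 0.2; the tool says 0.8.
print(final_judgment(0.2, 0.8, "rely_heavily"))  # 0.8 -- tool wins outright
print(final_judgment(0.2, 0.8, "ignore"))        # 0.2 -- tool has no effect
```

A model this simple obviously flattens the categories (Validation, for instance, is really a conditional reevaluation rather than a fixed blend), but it makes the design question concrete: where on this weight spectrum do you want your users to land, and does your interface push them there?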

Importance of the decision vs Value of tool’s feedback

It is important, when designing, to investigate and understand which way users will lean when taking the output of a decisioning system into account. Depending on how strongly they are likely to weigh the tool’s input, you will want to adjust the user’s ability to take action based on its decision.

Consider the examples — recidivism, assessing credit risk, a medical condition or whether to bring equipment into the shop, deciding which stocks to buy, what show to watch —