Introduction
Opportunities to improve business processes by making use of algorithms have never been as abundant as they are today. Algorithms in the domain of machine learning and artificial intelligence specifically have been receiving a lot of attention, as they have further expanded the range of tasks that can be transferred from humans to machines. Machines are often perfectly capable of achieving above-human levels of performance on specific tasks.
However, in practice, people frequently resist integrating algorithms into their workflow. Even in situations where the performance of algorithms has irrefutably been proven to be better, this resistance persists. This short article deals with the two most common approaches to this phenomenon, and why one should be preferred over the other.
Why does algorithm aversion occur?
Algorithm aversion stems from the fact that we intuitively hold algorithms to much higher standards than we do human decision-makers. Think of Tesla's autopilot, which comes under scrutiny every time an accident happens, even though statistics clearly show that driving with the autopilot function engaged is safer than human driving (source).
Naturally, in matters of human safety, waiting until something is irrefutably proven is a very rational strategy. This stands in contrast with situations where an algorithm has already been conclusively proven to work better than expert human decision-makers.
Indeed, studies show that even when people see algorithms working better in practice, they are still reluctant to defer control to these systems (source). People are more inclined to abandon an algorithm than a human decision-maker after seeing each make the same mistake.
Academic solutions to this problem are either to create perfect algorithms, which is obviously infeasible for most applications, or to never show algorithm performance to decision-makers, giving the semblance of perfection. The latter option is, of course, mightily uncomfortable for any decision-maker.
Hence, in practice, we often see two different approaches taken to deal with algorithm aversion.
Algorithm aversion: Misplaced distrust of algorithms which would improve overall performance if relied upon.
A first approach: human overrides
A first approach for dealing with algorithm aversion is to include a human as a failsafe in the system. In practice, this often means that the human evaluates all the predictions made by the system and can make changes if they believe that the proposed prediction or optimization is not optimal.
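As a minimal sketch of this pattern (all names below are illustrative, not taken from any particular system), the algorithm proposes a value and a human reviewer may replace it before it is acted upon:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Prediction:
    item: str
    proposed: float                    # value suggested by the algorithm
    override: Optional[float] = None   # value entered by the human, if any

    @property
    def final(self) -> float:
        # The human override, when present, always wins.
        return self.override if self.override is not None else self.proposed


def review(prediction: Prediction, human_opinion: Optional[float]) -> Prediction:
    """Pass an algorithmic prediction through the human failsafe."""
    prediction.override = human_opinion
    return prediction


p = review(Prediction(item="product_x", proposed=100.0), human_opinion=80.0)
print(p.final)  # 80.0: the human belief silently replaces the algorithm's output
```

Note that nothing in this flow records why the override was made, which is precisely what enables the two problems below.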
While this approach has merit in ensuring that algorithms become part of the process, it suffers from two significant issues in practice.
The first key problem is that people will evaluate the algorithm against their own beliefs of what the value should be, assuming that, whenever opinions differ, they are correct and the algorithm is wrong.
In practice, this means that the value of the algorithm is reduced to automation that replicates a (hopefully large) fraction of the decisions taken by human decision-makers.
Allowing human overrides effectively reduces the algorithm to nothing more than automation, losing much of its value.
The second key problem is that measuring performance becomes highly problematic. Whether the actual results are due to human expertise or to the algorithm is nigh impossible to establish with statistical certainty.
An argument is often made by analogy with Centaur teams (human + machine combinations in chess), which have been highly successful in the past.
However, people making this argument often fail to mention that these teams also have a clear division of tasks: the machine makes tactical decisions, while the human makes higher-level strategic decisions.
Man + machine combinations can work admirably, but it first has to be ascertained in which domains the algorithm outperforms the human and vice versa. Only then can a clear division of tasks take place.
A second approach: generalized rules
A second mitigation strategy for algorithm aversion entails defining clear boundaries within which the algorithm can operate, and allowing users to specify these boundaries as rules during the adoption phase.
Let's take the example of a demand prediction algorithm. Under this approach, a user who wants to change a prediction (or a subsequent adjustment) is required to specify a rule that gives a clear reason why it should be changed in any situation such as the one at hand. This contrasts with the old way of doing things, where one would merely adjust the prediction of a specific product for the next period.
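As a minimal sketch (with hypothetical product and field names), the contrast could look as follows: the old way is a one-off tweak to a single product and period, whereas the new way is a rule that carries an explicit reason and applies to every situation matching its condition.

```python
# Old way: an ad-hoc override of one product's prediction for one period.
adhoc_adjustments = {("product_x", "week_7"): 120.0}

# New way: a generalized rule with an explicit, recorded reason.
def promotion_uplift(record):
    """Raise predicted demand by 30% for products on promotion.

    Reason (recorded with the rule): promotions historically lift sales,
    which the base model does not account for.
    """
    if record["on_promotion"]:
        record["prediction"] *= 1.30
    return record

record = {"product": "product_x", "prediction": 100.0, "on_promotion": True}
print(promotion_uplift(record)["prediction"])  # 130.0
```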
This approach has several obvious benefits.
Firstly, this immediately implies that a clear reason must be given to explain any deviation from the suggestion. In doing so, acting on an implicit hunch is discouraged, since hunches by definition cannot be specified as a more general rule.
Secondly, this approach ensures that the effort required from human agents decreases over time. A rule only has to be defined once; ad hoc changes, in contrast, have to be formulated ad infinitum.
In short, formulating generalized rules scales much better than the human override solution.
Forcing users to formulate generalizable rules increases performance quality and decreases the long-term effort invested in the process.
Naturally, there are downsides to be aware of in this context.
The primary reason for choosing human overrides instead is that formulating generalized rules requires a more significant initial investment, both from the users of the system and from the people building it.
Short-term cost considerations may, therefore, lead to a decision in favour of solving the problem using overrides. Be careful, however: these short-term cost savings may dramatically reduce the return on investment of the complete system.
A second possible pitfall is that cunning users might try to disguise their ad-hoc corrections as rules for the system. A typical symptom is a large number of constraints specified on small subsets, such as an individual product (e.g., "orders for product X must never exceed 100 units per week").
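A simple countermeasure is to screen incoming rules for suspiciously narrow scope. The sketch below (all names hypothetical) flags any rule whose condition matches less than a chosen fraction of the product catalog:

```python
def flag_narrow_rules(rules, catalog, min_coverage=0.05):
    """Return the names of rules matching less than `min_coverage` of the catalog."""
    flagged = []
    for rule in rules:
        matched = [item for item in catalog if rule["condition"](item)]
        if len(matched) / len(catalog) < min_coverage:
            flagged.append(rule["name"])
    return flagged

catalog = [{"id": f"product_{i}", "category": "snacks" if i < 3 else "drinks"}
           for i in range(100)]
rules = [
    {"name": "cap_product_0", "condition": lambda it: it["id"] == "product_0"},
    {"name": "drinks_rule", "condition": lambda it: it["category"] == "drinks"},
]
print(flag_narrow_rules(rules, catalog))  # ['cap_product_0']
```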
A third downside to take into consideration is the following: large constraint sets may result in conflicting constraints and an infeasible solution space, i.e. so many conditions that they can never all be satisfied simultaneously. The main line of defence here is to actively guard against people adding too many constraints and to challenge them on the validity of each added constraint.
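Part of this guarding can be automated. Under the simplifying assumption that each rule reduces to a lower or upper bound on a single quantity, detecting conflicts takes only a few lines; richer constraint sets would require a proper solver.

```python
from collections import defaultdict

def find_conflicts(rules):
    """rules: list of (quantity, kind, value) tuples, with kind in {'min', 'max'}.

    Returns the quantities whose tightest lower bound exceeds their
    tightest upper bound, i.e. the constraints that can never all hold.
    """
    bounds = defaultdict(lambda: [float("-inf"), float("inf")])
    for quantity, kind, value in rules:
        if kind == "min":
            bounds[quantity][0] = max(bounds[quantity][0], value)
        else:
            bounds[quantity][1] = min(bounds[quantity][1], value)
    return {q: (lo, hi) for q, (lo, hi) in bounds.items() if lo > hi}

conflicts = find_conflicts([
    ("orders_product_x", "max", 100),  # "never exceed 100 units per week"
    ("orders_product_x", "min", 150),  # "always order at least 150 units"
])
print(conflicts)  # {'orders_product_x': (150, 100)}
```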
Where do you go from here? Two clear messages
Algorithm aversion is an especially acute problem during the adoption phase of a new algorithmic system in an organization. The fact that people are quick to abandon an algorithm after seeing it make anecdotal mistakes only exacerbates this problem.
Hence, it is of paramount importance to prime people before starting this process.
Two clear messages have to be conveyed:
The first is to always expect mistakes in situations where uncertainty abounds.
The second should inform them how to make structural suggestions that improve the algorithm, rather than allowing them to effectively circumvent the system, which leads to a failure to adopt it.