Operational Research

Algorithm Aversion

So, you’ve created a brilliant solution to an operational research problem. But — not everyone is using it. What’s going on? Read on to find out.

Operational researchers spend their time trying to come up with solutions to problems businesses face, such as: how much stock a business should order each week; the most efficient route a delivery driver can take; the most profitable combination of products to sell.

But on the other side are the businesses that are going to use these solutions. Researchers’ solutions might well be rigorous and elegant (and they should be), but these solutions are going to be used by people.

And those people can choose whether or not to use them.

It turns out they might take some convincing.

In the last 10 years, several papers have come out exploring what researchers can do to encourage organisations to use OR solutions — when they are better than human judgement alone. 

Not everyone, it seems, has absolute faith in the power of mathematical or algorithmic solutions to problems like forecasting.

My dog Markus (pictured below), for example, will almost certainly prefer to use his nose, plus a certain amount of running about in random directions, to search for snacks rather than follow an optimised search strategy.

Markus: and a very fine nose it is too.

On a more serious note, some studies have shown that people are less likely to use an algorithm for prediction if they have seen that it can get things wrong. This is known as Algorithm Aversion[1]. If they know the algorithm is not perfect, they are put off from using it.

Anecdotally, I see this with — for example — political polling. A lot of people seem to write off polls as nonsense, because they don’t always get things 100% right. Either they are perfect and worth following, or they contain error and are rubbish.

Back to Algorithm Aversion: one way to overcome this[2] is to allow people to adjust the output of the algorithm in a controlled manner.

Markus photographed moments after I tried to explain an optimised search strategy to him.

Dietvorst, Simmons and Massey (2018) found that if people were allowed to adjust an algorithm’s forecast, they were happier to use it. Restricting the amount by which users could adjust the forecasts made little difference to their satisfaction.
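To make the idea concrete, here is a minimal sketch in Python of what a “controlled adjustment” might look like: the planner can pull the forecast towards their own figure, but only within a fixed band around the model’s answer. The function name and the 10% limit are my own illustration, not the mechanism used in the paper.

    def constrained_adjustment(model_forecast, user_forecast, max_shift=0.10):
        # Let the user move the forecast towards their own figure,
        # but cap the move at max_shift (here 10%) of the model's value.
        cap = abs(model_forecast) * max_shift
        shift = user_forecast - model_forecast
        shift = max(-cap, min(cap, shift))   # clamp to the allowed band
        return model_forecast + shift

    # Example: the model says 1,000 units, the planner insists on 1,400
    print(constrained_adjustment(1000, 1400))  # -> 1100.0

The details in the Dietvorst et al. experiments differ, but the principle is the same: the person keeps some ownership of the final number without being able to throw the model’s answer away entirely.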

Of course, in a real life situation, it may make a lot of sense for someone in a business to adjust a forecast produced by an algorithm: if they know something the algorithm doesn’t[3].

For example, if the business is about to launch a big advertising campaign or slash prices — or if a close competitor has just opened a shop right opposite yours.

This is a new area of research and so far relies on some limited field experiments, with sometimes seemingly contradictory results.

Beware our own expertise

A paper published last year[4] suggested that people were likely to choose an algorithm’s advice over that of other people.

However, they were a little bit less likely to pick an algorithm’s opinion over their own.

The paper also found that people the authors classed as experts were much less likely to take algorithmic advice over their own opinion, and that this hurt the accuracy of their predictions.


[1] Dietvorst, B.J., Simmons, J.P. and Massey, C., (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1):114.

[2] Dietvorst, B.J., Simmons, J.P. and Massey, C., (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3):1155-1170.

[3] Fildes, R., Goodwin, P., Lawrence, M. and Nikolopoulos, K., (2009). Effective forecasting and judgmental adjustments: an empirical evaluation and strategies for improvement in supply-chain planning. International Journal of Forecasting, 25(1):3-23.

[4] Logg, J.M., Minson, J.A. and Moore, D.A., (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151:90-103.
