{"id":212,"date":"2020-04-22T17:11:17","date_gmt":"2020-04-22T17:11:17","guid":{"rendered":"http:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/?p=212"},"modified":"2020-05-01T13:53:53","modified_gmt":"2020-05-01T13:53:53","slug":"algorithm-aversion","status":"publish","type":"post","link":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/2020\/04\/22\/algorithm-aversion\/","title":{"rendered":"Algorithm Aversion"},"content":{"rendered":"\n<p>So, you\u2019ve created a brilliant solution to an operational research problem. But \u2014 not everyone is using it. What\u2019s going on? Read on to find out.<\/p>\n\n\n\n<p>Operational researchers spend their time trying to come up with solutions to problems businesses face, such as: how much stock a business should order each week; the most efficient route a delivery driver can take; the most profitable combination of products to sell.&nbsp;<\/p>\n\n\n\n<p>But on the other side are the businesses that are going to use these solutions. Researchers\u2019 solutions might well be rigorous and elegant (and they should be), but these solutions are going to be used by people. 
&nbsp;<\/p>\n\n\n\n<p>And these people can choose whether to use them or not.<\/p>\n\n\n\n<p>It turns out they might take some convincing.<\/p>\n\n\n\n<p>In the last 10 years, several papers have come out exploring what researchers can do to encourage organisations to use OR solutions \u2014 when they are better than human judgement alone.&nbsp;<\/p>\n\n\n\n<p>Not everyone, it seems, has absolute faith in the power of mathematical or algorithmic solutions to problems like forecasting.<\/p>\n\n\n\n<p>My dog Markus (pictured below), for example, will almost certainly prefer to use his nose, plus a certain amount of running about in random directions, to search for snacks, over an <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/0040580978900540\">optimised search strategy<\/a>.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"alignleft size-large is-resized\"><img fetchpriority=\"high\" decoding=\"async\" src=\"http:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/wp-content\/uploads\/sites\/14\/2020\/04\/Markus8.jpg\" alt=\"\" class=\"wp-image-210\" width=\"270\" height=\"360\" srcset=\"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/wp-content\/uploads\/sites\/14\/2020\/04\/Markus8.jpg 768w, https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/wp-content\/uploads\/sites\/14\/2020\/04\/Markus8-225x300.jpg 225w\" sizes=\"(max-width: 270px) 100vw, 270px\" \/><figcaption>Markus: and a very fine nose it is too.<\/figcaption><\/figure><\/div>\n\n\n\n<p>On a more serious note, some studies have shown that people are less likely to use an algorithm for prediction if they have seen that it can get things wrong. This is known as Algorithm Aversion<a href=\"#_ftn1\">[1]<\/a>. If they know the algorithm is not perfect, they are put off from using it.<\/p>\n\n\n\n<p>Anecdotally, I see this with \u2014 for example \u2014 political polling. 
A lot of people seem to write off polls as nonsense because they don\u2019t always get things 100% right. The assumption seems to be that either they are perfect and worth following, or they contain error and are rubbish.<\/p>\n\n\n\n<p>Back to Algorithm Aversion: one way to overcome this <a href=\"#_ftn2\">[2]<\/a> is to allow people to adjust the output of the algorithm in a controlled manner.<\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"alignright size-large is-resized\"><img decoding=\"async\" src=\"http:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/wp-content\/uploads\/sites\/14\/2020\/04\/Markus6.jpg\" alt=\"\" class=\"wp-image-208\" width=\"261\" height=\"349\" srcset=\"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/wp-content\/uploads\/sites\/14\/2020\/04\/Markus6.jpg 768w, https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/wp-content\/uploads\/sites\/14\/2020\/04\/Markus6-225x300.jpg 225w\" sizes=\"(max-width: 261px) 100vw, 261px\" \/><figcaption>Markus photographed moments after I tried to explain an optimised search strategy to him.<\/figcaption><\/figure><\/div>\n\n\n\n<p>Dietvorst, Simmons and Massey (2018) found that if people were allowed to adjust an algorithm\u2019s forecast, they were happier with it. 
Restricting how much users could adjust the forecasts did not make much difference to their satisfaction.<\/p>\n\n\n\n<p>Of course, in a real-life situation it may make a lot of sense for someone in a business to adjust a forecast produced by an algorithm, if they know something the algorithm doesn\u2019t<a href=\"#_ftn3\">[3]<\/a>.<\/p>\n\n\n\n<p>For example, if the business is about to launch a big advertising campaign or slash prices \u2014 or if a close competitor has just opened a shop right opposite yours.<\/p>\n\n\n\n<p>This is a new area of research and so far relies on some limited field experiments, with sometimes seemingly contradictory results.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Beware our own expertise<\/h4>\n\n\n\n<p>A paper published last year<a href=\"#_ftn4\">[4]<\/a> suggested that people were likely to choose an algorithm\u2019s advice over that of other people.<\/p>\n\n\n\n<p>However, they were a little less likely to pick an algorithm\u2019s opinion over their own.<\/p>\n\n\n\n<p>The paper also found that people it identified as experts were much less likely to take algorithmic advice over their own opinion, and that this hurt the accuracy of their predictions.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<p><a href=\"#_ftnref1\">[1]<\/a> Dietvorst, B.J., Simmons, J.P. and Massey, C., (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. <em>Journal of Experimental Psychology: General<\/em>, 144(1):114.<\/p>\n\n\n\n<p><a href=\"#_ftnref2\">[2]<\/a> Dietvorst, B.J., Simmons, J.P. and Massey, C., (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. <em>Management Science<\/em>, 64(3):1155-1170.<\/p>\n\n\n\n<p><a href=\"#_ftnref3\">[3]<\/a> Fildes, R., Goodwin, P., Lawrence, M. and Nikolopoulos, K., (2009). 
Effective forecasting and judgmental adjustments: an empirical evaluation and strategies for improvement in supply-chain planning. <em>International Journal of Forecasting<\/em>, 25(1):3-23.<\/p>\n\n\n\n<p><a href=\"#_ftnref4\">[4]<\/a> Logg, J.M., Minson, J.A. and Moore, D.A., (2019). Algorithm appreciation: People prefer algorithmic to human judgment. <em>Organizational Behavior and Human Decision Processes<\/em>, 151:90-103.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>So, you\u2019ve created a brilliant solution to an operational research problem. But \u2014 not everyone is using it. What\u2019s going on? Read on to find out. <\/p>\n","protected":false},"author":8,"featured_media":202,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5],"tags":[],"class_list":["post-212","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-operational-research"],"_links":{"self":[{"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/wp-json\/wp\/v2\/posts\/212","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/wp-json\/wp\/v2\/users\/8"}],"replies":[{"embeddable":true,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/wp-json\/wp\/v2\/comments?post=212"}],"version-history":[{"count":11,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/wp-json\/wp\/v2\/posts\/212\/revisions"}],"predecessor-version":[{"id":295,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/wp-json\/wp\/v2\/posts\/212\/revisions\/295"}],"wp:featuredmedia":[{"embeddable":true,"
href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/wp-json\/wp\/v2\/media\/202"}],"wp:attachment":[{"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/wp-json\/wp\/v2\/media?parent=212"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/wp-json\/wp\/v2\/categories?post=212"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/tessa-wilkie\/wp-json\/wp\/v2\/tags?post=212"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}