Overcoming Algorithm Aversion: People Will Use Algorithms If They Can (Even Slightly) Modify Them

HFA Staff


Berkeley J. Dietvorst

University of Pennsylvania – The Wharton School

Joseph P. Simmons

University of Pennsylvania – The Wharton School; University of Pennsylvania – Operations & Information Management Department

Cade Massey

University of Pennsylvania – The Wharton School

June 11, 2015

Abstract:

Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them, especially after learning that they are imperfect. In this paper, we investigate how algorithm aversion might be overcome. In incentivized forecasting tasks, we find that people are considerably more likely to choose to use an algorithm, and thus perform better, when they can modify its forecasts. Importantly, this is true even when they are severely restricted in the modifications they can make. In fact, people’s decision to use an algorithm is insensitive to the magnitude of the modifications they are able to make. Additionally, we find that giving people the freedom to modify an algorithm makes them feel more satisfied with the forecasting process, more tolerant of errors, more likely to believe that the algorithm is superior, and more likely to choose to use an algorithm to make subsequent forecasts. This research suggests that one may be able to overcome algorithm aversion by giving people just a slight amount of control over the algorithm’s forecasts.
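To make the "restricted modification" mechanism concrete, here is a minimal sketch in Python of how a forecasting task might cap a participant's override of the model's forecast. The function and parameter names are hypothetical illustrations, not the authors' actual experimental materials.

def adjusted_forecast(model_forecast, user_forecast, max_adjustment):
    """Clamp the participant's forecast to within +/- max_adjustment
    of the model's forecast.

    Hypothetical sketch of a restricted-modification condition;
    names and values are illustrative only.
    """
    lower = model_forecast - max_adjustment
    upper = model_forecast + max_adjustment
    return max(lower, min(upper, user_forecast))

# Example: the model forecasts 62, the participant enters 75, and
# adjustments are capped at 5 points; the submitted forecast is 67.
print(adjusted_forecast(62, 75, 5))  # prints 67

Under a constraint like this, participants retain a sense of control while the submitted forecast can never drift far from the algorithm's.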

Overcoming Algorithm Aversion: People Will Use Algorithms If They Can (Even Slightly) Modify Them – Introduction

Forecasts made by evidence-based algorithms are more accurate than forecasts made by humans. This empirical regularity, documented by decades of research, has been observed in many different domains, including forecasts of employee performance (see Highhouse, 2008), academic performance (Dawes, 1971; Dawes, 1979), prisoners’ likelihood of recidivism (Thompson, 1952; Wormith & Goldstone, 1984), medical diagnoses (Adams et al., 1986; Beck et al., 2011; Dawes, Faust, & Meehl, 1989; Grove et al., 2000), demand for products (Schweitzer & Cachon, 2000), and so on (see Dawes, Faust, & Meehl, 1989; Grove et al., 2000; Meehl, 1954). When choosing between the judgments of an evidence-based algorithm and a human, it is wise to opt for the algorithm.

Despite the preponderance of evidence demonstrating the superiority of algorithmic judgment, decision makers are often averse to using algorithms, opting instead for the less accurate judgments of humans. Fildes and Goodwin (2007) conducted a survey of 149 professional forecasters from a wide variety of domains (e.g., cosmetics, banking, and manufacturing) and found that many professionals either did not use algorithms in their forecasting process or failed to give them sufficient weight. Sanders and Manrodt (2003) surveyed 240 firms and found that many did not use algorithms for forecasting, and that firms that did use algorithms made fewer forecasting errors. Other studies show that people prefer to have humans integrate information (Diab, Pui, Yankelevich, & Highhouse, 2011; Eastwood, Snook, & Luther, 2012), and that they give more weight to forecasts made by experts than to forecasts made by algorithms (Önkal et al., 2009; Promberger & Baron, 2006). Algorithm aversion is especially pronounced when people have seen an algorithm err, even when they have seen that it errs less than humans do (Dietvorst, Simmons, & Massey, 2015).

Algorithm aversion represents a major challenge for any organization interested in making accurate forecasts and good decisions, and for organizations that would benefit from their customers using algorithms to make better choices. In this article, we offer an approach for overcoming algorithm aversion.

Many scholars have theorized about why decision makers are reluctant to use algorithms that outperform human forecasters. One common theme is an intolerance of error. Einhorn (1986) proposed that algorithm aversion arises because although people believe that algorithms will necessarily err, they believe that humans are capable of perfection (also see Highhouse, 2008). Moreover, Dietvorst et al. (2015) found that even when people expected both humans and algorithms to make mistakes, and thus were resigned to the inevitability of error, they were less tolerant of the algorithms’ (smaller) mistakes than of the humans’ (larger) mistakes. These findings do not invite optimism, as they suggest that people will avoid any algorithm that they recognize to be imperfect, even when it is less imperfect than its human counterpart.

Fortunately, people’s distaste for algorithms may be rooted not only in an intolerance of error but also in their beliefs about the relative qualities of human and algorithmic forecasts. Dietvorst et al. (2015) found that although people tend to think that algorithms are better than humans at avoiding obvious mistakes, appropriately weighing attributes, and consistently weighing information, they tend to think that humans are better than algorithms at learning from mistakes, getting better with practice, finding diamonds in the rough, and detecting exceptions to the rule. Indeed, people seem to believe that although algorithms are better than humans on average, their rigidity means they can badly misfire in any given instance.

Algorithm Aversion

See full PDF below.


The post above was drafted by the Hedge Fund Alpha team.
