Apparent Algorithmic Discrimination and Real-Time Algorithmic Learning
Subject
Marketing
Publishing details
Social Science Research Network
Authors / Editors
Lambrecht A; Tucker C E
Biographies
Publication Year
2020
Abstract
It is worrying to think that algorithms might discriminate against minority groups and reinforce existing inequality. Typically, the concern is that the algorithm's code reflects bias, or that the data feeding the algorithm leads it to produce uneven outcomes. In this paper, we highlight another reason why algorithms may appear biased against minority groups: the length of time algorithms need to learn. If an algorithm has access to less data for particular groups, or accesses that data at different speeds, it will produce differential outcomes, potentially disadvantaging minority groups. We revisit the context of a classic study documenting that Google searches for black names were more likely than searches for white names to return ads highlighting the need for a criminal background check. We show that at least a partial explanation for this finding is that when consumer demand for a piece of information is low, an algorithm accumulates information more slowly and therefore takes longer to learn about consumer responses to the ad. Because black names are less common, the algorithm learns about the quality of the underlying ad more slowly, and as a result an ad, including an undesirable ad, is more likely to persist next to searches for black names, even if the algorithm would eventually judge the ad to be of low quality. We replicate this result in the context of religious affiliations and present evidence that ads targeted at searches for religious groups persist longer for groups that are searched for less often. This suggests that the process of algorithmic learning can lead to differential outcomes between those whose characteristics are common in society and those whose characteristics are rarer.
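The mechanism described above, in which an algorithm cannot confidently judge an ad's quality until it has accumulated enough response data, so a low-quality ad persists longer for less-searched groups, can be sketched with a toy simulation. All parameter values here (daily search volumes, click-through rates, data thresholds) are illustrative assumptions, not figures from the paper.

```python
import random

def days_until_removed(daily_searches, true_ctr=0.01,
                       min_impressions=1000, ctr_threshold=0.05,
                       rng=None):
    """Toy learning rule (an assumption for illustration, not the paper's
    model): the platform keeps serving an ad until it has accumulated
    `min_impressions` of response data, then drops the ad if its empirical
    click-through rate falls below `ctr_threshold`.

    Returns the number of days the ad stayed live, or None if it was kept.
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    impressions, clicks, days = 0, 0, 0
    while True:
        days += 1
        impressions += daily_searches
        # Each search independently yields a click with probability true_ctr.
        clicks += sum(rng.random() < true_ctr for _ in range(daily_searches))
        if impressions >= min_impressions:
            # Enough data to judge the ad: a low-quality ad is removed here.
            return days if clicks / impressions < ctr_threshold else None

# The same low-quality ad, served against a heavily searched group and a
# rarely searched one. Data accumulates ten times faster for the former.
common = days_until_removed(daily_searches=500)  # e.g. a common name
rare = days_until_removed(daily_searches=50)     # e.g. a rarer name
print(common, rare)
```

Because both groups face the same ad and the same removal rule, the only difference is the rate of data accumulation, yet the ad persists ten times longer for the rarely searched group before the algorithm has enough evidence to remove it.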
Keywords
Algorithmic Bias; Advertising; Inequality; online advertising; algorithmic learning; digital discrimination
Series Number
3570076
Series
Social Science Research Network
Available on ECCH
No