Acts of technological racism might not always be so blatant, but they are largely unavoidable. Black defendants are more likely to be unfairly sentenced or labeled as future re-offenders, not just by judges, but also by a sentencing algorithm advertised in part as a remedy to human biases. Predictive models methodically deny ailing Black and Hispanic patients access to treatments that are regularly distributed to less sick white patients.
These sorts of systematic, inequality-perpetuating errors in predictive technologies are commonly known as “algorithmic bias.” They are, in short, the technological manifestations of America’s anti-Black zeitgeist. (Paragraphs 2-3)
Everyone knows that human opinions are subjective, but algorithms operate under the guise of computational objectivity, which obscures their existence and lends legitimacy to their use. (Paragraph 6)
Teens of color aren’t the only ones at risk of computer-generated racism. Living in a world controlled by discriminatory algorithms can further segregate white youths from their peers of color. TikTok’s content-filtering algorithm, for example, can drive adolescents toward echo chambers where everyone looks the same. This risks diminishing teens’ capacity for empathy and depriving them of opportunities to develop the skills and experiences necessary to thrive in a country that’s growing only more diverse. (Paragraph 7)
Algorithmic racism exists in a thriving ecosystem of online discrimination, and algorithms have been shown to amplify the voices of human racists. Black teens experience an average of five or more instances of racism daily, much of it happening online and therefore mediated by algorithms. (Paragraph 8)
Organizations such as NYU's AI Now Institute and the Algorithmic Justice League, founded at the MIT Media Lab, are developing guidelines for ethical artificial intelligence. But importantly, research on algorithmic bias typically fails to account for age as a dimension of inequity, which is a point my own research on the subject aims to address. (Paragraph 9)
Algorithms powerfully shape development; they are socializing an entire generation. And though United States governmental regulations currently fail to account for the age-based effects of algorithms, there is precedent for taking age into consideration when designing media policy: When the Federal Communications Commission began regulating commercials in kids’ TV in the 1990s, for example, the policies were based on well-documented cognitive and emotional differences between children and adults. Based on input from developmentalists, data scientists, and youth advocates, 21st-century policies around data privacy and algorithmic design could also be constructed with adolescents’ particular needs in mind. If we instead continue to downplay or ignore the ways that teens are vulnerable to algorithmic racism, the harms are likely to reverberate through generations to come. (Paragraph 11)
Epps-Darling, A. (2020, October 24). How the racism baked into technology hurts teens. The Atlantic. Retrieved December 17, 2020, from https://www.theatlantic.com/family/archive/2020/10/algorithmic-bias-especially-dangerous-teens/616793/