The Flagrant Algorithms Visit 19th Century Women

Andy Campbell

Opening Up

Issues of racial bias, cultural bias, employment bias, and copyright infringement are inherent in contemporary AI systems. But the role of algorithms in controlling the data generated by AI systems is not widely understood. In “Bias Amplification in Artificial Intelligence Systems,” Kirsten Lloyd observes that “The first line of defense against creating AI systems that inflict unfair treatment is to give more attention to how datasets are constructed before operationalizing them.”

The Flagrant Algorithms is an artist’s dataset that uses the words of 19th Century writers as data to demonstrate how unseen algorithms shape the output of AI systems. The data is always the same, but in response to the question “What was it like to be a woman in the 19th Century?” an unseen algorithm, selected by the system at random, determines how the data is accessed and so produces different results. For instance, an algorithm might generate words at random from all the data, generate only words written by women, or generate only words written from a male viewpoint. A public user, however, might not know which algorithm the system activated. Additionally, since the sources of data in AI apps should be cited, an algorithm outputs the sources of all the data.
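The mechanism described above can be sketched in a few lines of Python. This is an illustrative model only, not the project’s actual code: the dataset entries, the function names, and the three filtering algorithms are assumptions based on the description.

```python
import random

# Hypothetical miniature dataset; the real project draws on 19th Century texts.
DATASET = [
    {"author": "Charlotte Brontë", "viewpoint": "woman", "text": "I am no bird; and no net ensnares me."},
    {"author": "George Eliot", "viewpoint": "woman", "text": "It is never too late to be what you might have been."},
    {"author": "Anthony Trollope", "viewpoint": "man", "text": "A woman's weapon is her tongue."},
]

# Three possible algorithms, one of which is chosen at random per response.
def all_words(data):
    return data  # draw from the entire dataset

def women_written(data):
    return [d for d in data if d["viewpoint"] == "woman"]  # only women's words

def male_viewpoint(data):
    return [d for d in data if d["viewpoint"] == "man"]  # only male-viewpoint words

ALGORITHMS = [all_words, women_written, male_viewpoint]

def respond(data, rng=random):
    # The user never learns which algorithm was activated.
    algorithm = rng.choice(ALGORITHMS)
    subset = algorithm(data)
    # Sources of ALL the data are cited, regardless of which subset was used.
    sources = sorted({d["author"] for d in data})
    return subset, sources
```

The key design point the sketch illustrates: the same fixed data yields different outputs depending on a hidden selection, while the citation step always covers the whole dataset.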
To provide depth and decrease repetition, new data is added weekly. A working model is at https://www.narrabase.net/algorithms/index_fa.html, and notes about the process are at https://www.narrabase.net/algorithms/flagrant_algorithms_notes.pdf.