We need to talk about the history of Artificial Intelligence (AI).
The history of AI is built on institutions that privilege an unspoken understanding of normal and maintain a conveniently forgetful retelling of history. Ada Lovelace is a cultural icon in computing, thought to have written the first computer program in the 1800s. When Russell and Norvig mention Lovelace in Chapter 1 of Artificial Intelligence: A Modern Approach, why do they feel compelled to mention Lord Byron? When they mention Lord Byron, how is it they forget to mention that he abandoned her and her mother? When they mention Lord Byron, why do they forget to mention that Ada’s mother hated Byron so vehemently that she forced Ada into mathematics to spite him? Why do they forget that Ada had no substantial relationship with her father? It seems that no woman can enter the venerated halls of history without a dick to claim her.
It’s important to note that Lord Byron has no direct connection to artificial intelligence at all. But if the authors thought including his name was crucial to our understanding of AI history, why didn’t they think it important to mention that Gottlob Frege, the famous and AI-relevant logician, was anti-Semitic and a budding Nazi? That fact is certainly more relevant than the mere existence of a man who donated sperm toward the birth of Ada Lovelace.
You see, when Russell and Norvig started this section, they mentioned that there was too much to cover, that they would have to skip over material. The problem is how they decided what made the cut. Their choices are steeped in an unchecked understanding of normal, a normal where sexism falls within the important history but calling out the prejudice of white men falls outside of it.
This reading reminds me just how important feminism is in computer science research. In my research I will build off of the scholarship of women, of people of color, of disabled scholars, queer scholars, trans scholars, and post-colonial scholars. Who you cite and how you write about people matters.
S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice-Hall, 1995.
Capitalism is at odds with humanity. Following the logical conclusions one can draw from its definition, capitalism is designed in such a way that compassion, kindness, exploration, and community carry little weight in the system—especially when they’re not being bought or sold. At the core of this system is a strong need for productivity: one must have produced in order to buy and sell. This insidious, contagious concept—that there is no worth or weight outside of the desires of capitalism—is squirming through the veins of scholarship. Its presence is clear in each point of criticism leveled against the different methodologies Yvonne Rogers presents in HCI Theory: Classical, Modern, and Contemporary, Chapter 5, “Modern Theories.” It seems that in today’s world, theories need to prove their usefulness to the field, not so much intellectually, but in their application to the “design of technology and work.”
These incognito-capitalist criticisms range from a method not being easy to use to its not being quick to produce results. If scholarship is about exploring knowledge—what it means to exist, create, design, act, and be—why does it matter whether a method is easy to use or quick to produce? Isn’t some of this scholarly exploration about wrestling with what seems impossibly unknowable? I was never under the impression this was supposed to be a quick battle. At the heart of these criticisms is a push for productivity. The idea that methodologies like ethnography must yield a series of steps for other researchers to follow toward a more productive goal is disheartening. Is there no knowing outside of production? It seems that I cannot imagine my life outside the constraints of capitalism, just as the critics cannot imagine research outside the constraints of productivity. I worry about how that impacts my ability to add meaning to the world through scholarship, even in small ways. It seems the only meaning-making allowed is that which fits within the constraints of capitalist knowing.
Part of what was so disheartening about all the literature on Distributed Cognition we read this week was the focus on maximizing efficiency—particularly in the workplace. I believe there is so much more to life, technology, and humanity (even to the potential applications of Distributed Cognition); yet it all seems ungraspable given the constraints of this moment in history. Perhaps it’s a naive desire, but I’ve always hoped we, as scholars, could sprinkle a little magic fairy dust and see a bit outside the constraints of “reality.” Unfortunately, academia is a system like any other that exists within capitalism, which means it is a system that must play into the focus on efficiency and productivity in the name of profit.
Y. Rogers, HCI Theory: Classical, Modern, and Contemporary. San Rafael, CA: Morgan & Claypool, 2012.