AI Is Already Harming Humanity
On misogynoir, facial recognition, invisible labor, and corporate control.
Real quick up top: you can now register for my summer course on plastics! In this 4-credit Gen Ed course, I’ll teach you all about plastic pollution: how we got here, and what comes next. Read more here and consider joining the class, whether you’re a Five College student or an industry professional. Thanks!
*content warning for racism, misogyny, and misogynoir*
Among tech bros, much of the discussion about the “potential” harms of artificial intelligence (AI) centers on artificial general intelligence (AGI). In other words, rather than a specialized AI system (a tool designed for a single purpose, e.g. fixing your grammar, generating an image, or retrieving information quickly), a generalized AI would be more capable than a human at a wide variety of tasks. They fear such a system for a variety of heady, existential reasons, including that it could put millions of humans out of work and become impossible for humans to control.
Here’s the thing, though: regular AI is already causing massive amounts of harm to humanity. No generalized intelligence is needed to accelerate already-existing structural violence.
If you’ve been reading the news lately, you may have heard about Amazon Go’s grocery stores. The stores appeared to use AI-based facial recognition and grocery-detection technology that let customers walk in and walk out without stopping for checkout. While this AI system is real, it is still being trained on real-world data. So, currently, the system also relies heavily on over a thousand underpaid workers in India to review the footage, with about 70% of transactions needing human review. Most of the mainstream Western media coverage focuses on how “weird” it is that humans are watching you shop—if the headlines mention the foreign workers at all—but the real concern is that over a thousand underpaid, exploited overseas workers have replaced the jobs of a couple of teenagers standing at cash registers. Amazon has tried to downplay this fact in the past week, but the exploitation and surveillance of Amazon workers has been widely documented.
It’s very much the norm for AI systems to rely on humans for all sorts of things, from developing the models in the first place to verifying that they’re working correctly during rollout. ChatGPT, for example, was trained on a dataset consisting of huge chunks of the Internet, but before the tool could be rolled out to the public, human workers had to comb through massive amounts of data and tag various texts, images, etc. as inappropriate for the model. This was done by workers at an outsourcing firm called Sama in Nairobi, Kenya, who were paid around $2/hour to sit at a computer reviewing the most vile content known to humankind—racism, misogyny, child p*rn, that sort of thing—with minimal mental health support from the company (I wrote more about this last year). In other words, ChatGPT could never have been made without Black people in the Global South, mostly Black women, being deliberately exposed to the worst content imaginable.
Believe it or not, this is very much the norm in Silicon Valley. Facebook and Google have also outsourced data work to Sama. Amazon runs a service called MTurk, where anybody can sign on and earn pennies taking surveys and tagging data for machine learning models. Pretty much every AI company does this sort of thing—it’s incredibly normal. Even non-AI companies are completely free to sell their users’ data to AI companies for a quick buck.
AI has not made humans irrelevant, it has only invisibilized them.
For all the talk about how AI is going to “replace workers” and disrupt the economy, we need to remember that while some jobs may be erased, many more will simply become more invisible and more exploitable. Instead of a teenager getting an entry-level job bagging groceries, they’ll have to get a job staring at a computer screen, mindlessly entering data into spreadsheets just so a machine learning model can go from 94.04% accurate to 94.05% accurate, because every fraction of a percentage point is worth millions to Jeff Bezos. This is something the left and right should, ostensibly, agree is Bad: the left can get mad at a massive corporation exploiting thousands of brown people just to make American lives 0.01% easier, and the right can get mad at They Took Our Jobs. Unfortunately, AI as a threat is barely understood by the general public, since most of the conversation about AI’s harms revolves around broad-strokes, existential thought experiments like the “paperclip maximizer”, a hypothetical AGI that would destroy the planet in service of an arbitrary objective.
As usual, Black feminist thinkers help bring us back to reality. Dr. Joy Buolamwini is a poet of code and a daughter of art and science. She’s one of the leading researchers in AI ethics, specifically with regard to how facial recognition systems detect race and gender. Just this past week, I finished reading her new book, “Unmasking AI: My Mission to Protect What Is Human in a World of Machines”. It was a fantastic read, beautifully blending her personal journey of earning her PhD with the concurrent rollout of AI systems over the past decade. I highly recommend giving it a read if you want to know more about this branch of AI research!
Dr. Buolamwini reminds us that AI is already here and causing harm…
Companies that claim to fear existential risk from AI could show a genuine commitment to safeguarding humanity by not releasing the AI tools they claim could end humanity. I am not opposed to preventing the creation of fatal AI systems. Governments concerned with lethal use of AI systems can adopt the protections long championed by the Campaign to Stop Killer Robots to ban lethal autonomous systems and digital dehumanization. The campaign addresses potentially fatal uses of AI without making the hyperbolic jump that we are on a path to creating sentient systems that will destroy all humankind.
Though it is tempting to view physical violence as the ultimate harm, doing so makes it easy to forget pernicious ways our societies perpetuate structural violence. Johan Galtung coined this term to describe how institutions and social structures prevent people from meeting their fundamental needs and thus cause harm. Denial of access to healthcare, housing, and employment through the use of AI perpetuates individual harms and generational scars. AI systems can kill us slowly.
Dr. Buolamwini mainly focuses on facial recognition systems and how they systemically leave out darker-skinned people, women, and particularly Black women (yay intersectionality!). Her Gender Shades work has shown that facial recognition systems regularly fail to detect Black women at all, and when they do, they often misgender them.
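If you’re curious what that kind of audit looks like in practice, here’s a minimal sketch in Python with completely made-up numbers (the real Gender Shades benchmarks are far more rigorous): a single aggregate accuracy score can look respectable while nearly all of the failures land on one group.

```python
# A toy illustration (with made-up numbers) of the kind of
# disaggregated audit Gender Shades popularized: overall accuracy
# can look impressive while one subgroup bears most of the errors.

# Hypothetical results from a face-detection benchmark:
# (group, number of faces tested, number correctly detected)
results = [
    ("lighter-skinned men",   1000, 992),
    ("lighter-skinned women", 1000, 975),
    ("darker-skinned men",    1000, 940),
    ("darker-skinned women",  1000, 780),
]

total_tested = sum(n for _, n, _ in results)
total_correct = sum(c for _, _, c in results)
print(f"Overall accuracy: {total_correct / total_tested:.1%}")  # ~92%, sounds fine!

# The aggregate hides who the system actually fails:
for group, tested, correct in results:
    print(f"{group}: {1 - correct / tested:.1%} error rate")
# darker-skinned women: 22.0% error rate -- vastly worse than the average
```

In the actual Gender Shades study, commercial gender classifiers showed error rates of up to roughly 35% for darker-skinned women, versus under 1% for lighter-skinned men.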
This topic would be worth a whole other essay—or a book—but race and gender are deeply intertwined. Our ideas about womanhood, for example—that women are feminine, demure, asexual, etc.—are really only stereotypes for white women. Black women, on the other hand, are stereotyped as masculine, brutish, hypersexual, etc., stereotypes left over from America’s long history of enslaving African people (we needed to characterize Black people as “wild/savage” to justify “needing to control them” via torture and forced labor).
The harms of poorly designed facial recognition systems, and AI systems in general, are already very real and very tangible. Dr. Buolamwini uses the term “excoded” to describe people who are left behind (excluded) when AI systems don’t include everyone…
When I think of x-risk [existential risk], I also think of the risk and reality of being excoded. You can be excoded when a hospital uses AI for triage and leaves you without care, or uses a clinical algorithm that precludes you from receiving a life-saving organ transplant. You can be excoded when you are denied a loan based on algorithmic decision-making. You can be excoded when your résumé is automatically screened out and you are denied the opportunity to compete for the remaining jobs that are not replaced by AI systems. You can be excoded when a tenant screening algorithm denies you access to housing. All of these examples are real. No one is immune from being excoded, and those already marginalized are at greater risk.
To summarize, facial recognition and other AI tools are already being used to decide who receives medical care, who gets approved for a loan, whose résumé makes it in front of a hiring manager, and who gets to rent a home.
We deserve better. AI should be in service to humanity, not the other way around.
When I think of AI as a tool for good, I think of its potential to liberate us from menial, dangerous work. I think of AI as a tool to help marginalized people tell their stories. For example, Creole Exhibit Art uses AI to generate images of revolutionary figures from the Haitian Revolution. AI and data science more broadly can also be used to challenge power, rather than reinforce it. We must imagine better futures for ourselves, because the futures being imagined for us by corporations will only lead to the destruction of marginalized bodies and the planet!
For now, I’ll be opting out of facial recognition as much as possible. As inconvenient as it is, I don’t have FaceID activated on my phone, and I refuse to participate in viral trends involving AI-generated images trained on my face. You can opt out of facial recognition at the TSA, as well. Protect yourselves and your biometric data!
Action Items
Amidst countless campus protests for the liberation of Palestine, Asna Tabassum, a student at the University of Southern California (USC), is being denied the opportunity to give her valedictorian speech at graduation due to her pro-Palestine stance. You can write to the USC Admin and Board of Trustees to let her speak!
Currently Reading
I’ve been devouring books lately. I’m working on Stone Butch Blues right now in preparation for Bookends’ Hijab Butch Blues book club! (I feel like I have to read the original to read the new one…)
I also just finished Kai Cheng Thom’s “Falling Back in Love with Being Human”, a breath of fresh air in my non-fiction-filled life of political analysis. A must-read!
Baubo is one of my favorite Substack authors; check out her piece on Dune 2 and her long-running Grief Diaries series.
Artist Jean Grae also has a Substack now!
Watch History
A two-hour deep dive into fake video games.
An optimistic look into how unions are building sustainable futures.
A wholesome look into Indigenous food sovereignty.
An opinion piece about the policing of Black men’s masculinity.
Bops, Vibes, & Jams
girl in red’s new album is out now! Fav track so far: “I’m Back”
Khruangbin’s new album makes for fantastic background music. Fav track: “May Ninth”.
And now, your weekly Koko.
That’s all for now! See you next week with more sweet, sweet content.
In solidarity,
-Anna