Defining AI as a political project
posted on December 6, 2024 in: Notable Articles, ai, tech, politics and labor.
~321 words, about a 2 min read.
Series Listing
- Generative AI is Bad For Us:
  - December 6, 2024 - Defining AI as a political project
  - December 9, 2024 - The phony comforts of the AI industry's useful idiots
  - December 9, 2024 - The questions to ask when considering if AI content scraping should qualify as fair use
  - December 19, 2024 - Stop using generative AI as a search engine
I found this article by Ali Alkhatib to be tremendously useful. Re-contextualizing AI as not so much a technological project but a political one--with political features, an interest in how power works, and opinions about how power should work--makes a lot of sense to me. It also clarifies what AI companies actually mean when they talk about "AI", and why it is so important to them to frame it that way instead of making the more technically correct acknowledgement that AI is not real.
> I think we should shed the idea that AI is a technological artifact with political features and recognize it as a political artifact through and through. AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power. Projects that claim to “democratize” AI routinely conflate “democratization” with “commodification”. Even open-source AI projects often borrow from libertarian ideologies to help manufacture little fiefdoms.
>
> Defining AI along political and ideological language allows us to think about things we experience and recognize productively as AI, without needing the self-serving supervision of computer scientists to allow or direct our collective work. We can recognize, based on our own knowledge and experience as people who deal with these systems, what’s part of this overarching project of disempowerment by the way that it renders autonomy farther away from us, by the way that it alienates our authority on the subjects of our own expertise.
>
> This framework sensitizes us to “small” systems that cause tremendous harm because of the settings in which they’re placed and the authority people place upon them; and it inoculates us against fixations on things like regulating systems just because they happened to use 10^26 floating point calculations in training - an arbitrary threshold, denoting nothing in particular, beneath which actors could (and do) cause monumental harms already, today.
— Via Ali Alkhatib, Defining AI
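For a rough sense of scale on that threshold (my own back-of-envelope, not from Alkhatib's article): 10^26 operations is the compute-reporting trigger in the 2023 US executive order on AI. A minimal sketch, assuming an H100-class GPU at roughly 10^15 FLOP/s peak and 40% sustained utilization (both figures are my assumptions):

```python
# Back-of-envelope: how much GPU time is 10^26 floating point operations?
# Assumed numbers, not from the article: H100-class peak throughput and
# a rough sustained-utilization figure.

THRESHOLD_FLOPS = 1e26       # reporting threshold in the 2023 US executive order
PEAK_FLOPS_PER_SEC = 1e15    # assumed H100-class peak throughput
UTILIZATION = 0.40           # assumed sustained fraction of peak

SECONDS_PER_YEAR = 365 * 24 * 3600

effective_flops_per_sec = PEAK_FLOPS_PER_SEC * UTILIZATION
gpu_seconds = THRESHOLD_FLOPS / effective_flops_per_sec
gpu_years = gpu_seconds / SECONDS_PER_YEAR

print(f"~{gpu_years:,.0f} GPU-years under these assumptions")
# ~7,900 GPU-years: on the order of 10,000 GPUs running for most of a year.
```

Under these assumptions, only a handful of the largest training runs in the world clear that bar, which is exactly the quote's point: the systems doing consequential harm today mostly sit far below it.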