https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html

Really GREAT article just dropped on the implications of AI as it has manifested over the past year or so, in the societal/ethical/existential senses.

Bender is one of the co-authors of the paper over which Google fired the heads of its AI Ethics team a year or two ago. The article starts with her and her work and ends with Judith Butler, who, as she usually does, excellently hunts down the point of "let's spot where the fascism and white supremacy are hidden between the lines."

Bottom line: Bender argues in this article that the general public should be savvy consumers. That we should identify where these tech-makers are trying to gull us, and ask them–and ourselves–relevant questions.

And you know, I think people are generally good at spotting when somebody is trying to fool them. Look at the conversations happening on social media now, and the ones that have been happening in academic circles for a while, and you see that a lot of folks quickly coalesce around the salient problems.

It’s just that a lot of this whole issue is so veiled in prior assumptions and context. Like, why are we calling it ‘artificial intelligence’ to begin with? What does that phrase do in our heads when we think of it?

And besides, “intelligent” according to what definition? … Bender remains particularly fond of an alternative name for AI proposed by a former member of the Italian Parliament: “Systematic Approaches to Learning Algorithms and Machine Inferences.” Then people would be out here asking, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?” 

The question all these people want to ask is not "Should AI exist?" It's "HOW should AI exist?" AI isn't a naturally occurring phenomenon. People are designing this stuff. And currently they're designing it with the aim of ever-more-effectively making it difficult for people to tell whether they're interacting with a person or a machine.

And like: why? How many use cases do we actually need that for? Why is “can’t tell the difference between human and technology” the gold standard here?

“There’s a narcissism that reemerges in the AI dream that we are going to prove that everything we thought was distinctively human can actually be accomplished by machines and accomplished better,” Judith Butler, founding director of the critical-theory program at UC Berkeley, told me, helping parse the ideas at play. “Or that human potential — that’s the fascist idea — human potential is more fully actualized with AI than without it.”

The AI dream is “governed by the perfectibility thesis, and that’s where we see a fascist form of the human.” There’s a technological takeover, a fleeing from the body. “Some people say, ‘Yes! Isn’t that great!’ Or ‘Isn’t that interesting?!’ ‘Let’s get over our romantic ideas, our anthropocentric idealism,’ you know, da-da-da, debunking,” Butler added. “But the question of what’s living in my speech, what’s living in my emotion, in my love, in my language, gets eclipsed.”
