Thread
A lot of folks have written great threads on why the latest “AI sentience” debate is not just (or even *mainly*) about what it claims to be about… & how the ex-Googler who broached the issue was operating w/in a very narrow context & set of concerns. I’ve retweeted a lot of them
As part of the effort to combat the narrowness of context of this issue & offer some more tools to think through what’s brought us here, I’d like to mention some of the chapters collected in a book I was involved in publishing last year: Your Computer is On Fire🔥 from @mitpress
In my intro to the book I talk about man-made disasters… and there’s a reason ELIZA has a cameo in it: Joseph Weizenbaum, ELIZA’s creator (& later a noted AI critic) was aghast at how easily fooled, or how *willing to be fooled*, many people who interacted with his chatbot were.
Reflecting on what computing had done in his lifetime, Weizenbaum noted that: “superficially, it looks as if [things have] been revolutionized by computers. But only very superficially”
He later went on to say that much of the infrastructural work he had done in computing early on actually had a deeply conservative, if not regressive (and problematic), effect as social and economic infrastructure.
The computer “made possible the saving of institutions pretty much as they were, which otherwise might have had to be changed,” he said. Paradoxically, computerization extended rather than solved many fundamental, structural problems within institutions in industry & society.
And one of the things that led us there was that decisions about what to computerize & how were made by the most powerful people at the top of the most powerful institutions in government/industry—with little oversight or input—and by programmers with only a specific task in mind
You can see how this could lead to problems.
But seeing the connections to present day problems—or alternatives and solutions—isn’t necessarily as easy. So I’d like to highlight some chapters I think are interesting and useful for folks thinking about this:
Much of the (real) argument about AI right now is a cloaked argument about labor. About the stakes of automating things (poorly) that could or should be done in a non-automated way. This is something we’ve seen before in a slightly different form:
The idea of tech-as-savior to downtrodden workers doing “undesirable” or “easily automated” jobs has a long and problematic history. What we see again and again is that tech is offered or sold on that basis but often it winds up never delivering or even having the opposite effect
Yet it is an incredibly popular framework for powerful technologists in wealthy nations (and those aligned with them) because it is presented as an indisputable, and attainable, good.
There are several chapters in the book that address this but I think the one I’d like you to read the most is this one by Sreela Sarkar: “Skills will not set you free.” And I’d like you to think about it alongside what the narrower conversations about AI sentience focus on…
… and what they leave out. And where (figuratively and literally) they like to start computing history, or conversations about what computers do, ignoring all that came before, or is still going on. In other words, ignoring lessons learned.
Which brings me to the next chapter I’d love for you to read if you feel so inclined: “Coding is not empowerment” by Janet Abbate. Ever wonder why the pipeline problem persists? Or why that framework makes little sense from the perspective of being able to solve the problem?
And of course this framing is very important when we start thinking about automation—in other words, labor divorced from the concept of a worker. Because of course it’s never really divorced, is it?
And for this I’d really love for you to read this chapter by @safiyanoble, “Your Robot isn’t Neutral,” about how subtle (and not-so-subtle) reinscription of regressive ideas about how the world works—and who should be in charge of what (or whom)—
silently undergirds every single conversation we have about automation or “intelligent” technologies. And how we are allowed or encouraged to think about their potential implementation.
Because that implementation is always much narrower and more specific than it’s presented as. In fact, it is often presented as totalizing and universal not because it really is, but so that it might become so. And that is usually not a good thing.
For an important, wide-ranging, and complex example of this, I’d love for you to read this chapter on accent bias and whose language gets to be heard, as automated voice recognition systems spread: “Siri Disciplines” by @Halcyon_L
Lastly, so much of the conversation about AI is, as many others have pointed out, about trust—specifically, unearned trust in relatively untested systems. This is where many of the delusions of machine sentience are coming from.
It’s also where fantasies that code can stand in for other forms of power in a non-problematic way come from.
And unearned trust is an important component of many computing systems—without which they start to fall apart. As people are automated out of the system more and more, that trust can become either more solid (earned) or less so (this is usually what happens).
Cooperation by technical fiat doesn’t equal trust, nor trustworthiness. Which is why I’ll mention this chapter on just how hard it is to verify that even a simple piece of code does what it says, & how this history of trust needs a deep rethink: “Source code isn’t” by Ben Allen
[Image: photo of the first page of the essay “Source Code Isn’t” by Ben Allen, from the book Your Computer Is On Fire.]
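To make that concrete, here is a toy sketch of my own (not an example from Allen’s chapter): Python identifiers may contain Unicode look-alike characters, so code can read one way on screen and behave another way when run. All names here are hypothetical.

```python
# Toy illustration (mine, not from the book): the two "access"
# variables below are different identifiers, because the first one
# begins with CYRILLIC SMALL LETTER A (U+0430), not Latin "a".

def grant_access(user: str) -> bool:
    аccess = (user == "admin")  # look-alike name; computed, then ignored
    access = True               # the Latin-"a" name actually returned
    return access


print(grant_access("guest"))    # prints True: the "check" never mattered
```

On screen the two names are indistinguishable, so someone reviewing this code would see a permission check that the interpreter never consults: one small taste of why “reading the source” is not the same as verifying it.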
Anyway, there are lots more great essays in the book from a host of brilliant writers that I promise will provide relevant context for a lot of things you’re reading about in the news regarding high tech (or will be soon). And possibly for things you’re personally dealing with.
We wrote the book to help make sense of what can & should be done to fix, or head off, many of the problems that the tech industry has presented to us as a “cost of doing business.” So the book is a call to action—& talks about how to take action—in addition to offering context🔥