
the most stupid superintelligence possible

I’m reading Nick Bostrom’s Superintelligence as a kind of light reading tangentially related to my scholarly interests. If you’re unfamiliar, it’s basically a warning about the dangers of artificial intelligences spinning out of control. There are plenty of respectable folks out there who have echoed these worries—Hawking, Gates, Musk, etc. And Bostrom himself is certainly an accomplished, serious scholar. Superintelligence is not a hack job but rather an entertaining, popular-audience treatment of a philosophical problem. There are plenty of places to read counter-arguments pointing out the unlikely, even nonsensical, elements of the book.

For the sake of argument, let’s agree with the premise that IF an AI superintelligence developed, we mere humans wouldn’t understand how that intelligence worked. Our relationship to that AI would be like the relationship between a bug and a human. I think that’s a safe assumption, since we don’t understand how our own cognition functions (or even really a bug’s cognition, for that matter). We wouldn’t understand how the AI thought, what it thought, or what capacities for action might arise from that thinking.

To get to this, I need to mention what Bostrom terms the “wise-singleton sustainability threshold,” which he defines as follows: “A capability set exceeds the wise-singleton threshold if and only if a patient and existential risk-savvy system with that capability set would, if it faced no intelligent opposition or competition, be able to colonize and re-engineer a large part of the accessible universe.” He notes, “One could even argue that Homo sapiens passed the wise-singleton sustainability threshold soon after the species first evolved. Twenty thousand years ago, say, with equipment no fancier than stone axes, bone tools, atlatls, and fire, the human species was perhaps already in a position from which it had an excellent chance of surviving to the present era.” The human analogy is an interesting one. In part it suggests that a superintelligence has already emerged (i.e. us), and one can follow out Bostrom’s alarmism regarding AI to think about how the human superintelligence might destroy the planet before an AI one gets the chance.

Indeed, right now it makes more sense to be concerned about humans than AIs, which is not to say there isn’t time in the day to worry about both, I guess.

But let’s think about motivation—what would such an AI do/want to do? Bostrom admits to the difficulty of knowing this. It’s akin to a bug guessing at human motivations. Still, he argues that there are a few intermediary goals that we can suspect: self-preservation, goal-content integrity (i.e. preserving its final goals over time), cognitive enhancement, technological perfection, and resource acquisition. OK. Those all seem like rational/reasonable things to do. But if humans are the current, extant example of superintelligence, then we can hardly say these things explain our motives or actions, either collectively or individually. I mean, sometimes they do, but how would they explain my writing this blog post? It’s not self-preservation. It’s not the pursuit of a goal (except in a tautological sense of “I want to write a blog post so I’m writing one”). And I don’t see how it could be about any of the other three either. Why am I doing this? IDK. Ask my therapist. I want to. I enjoy it. I like the way thinking through a blog post feels.

Sure, an AI superintelligence would be far more intelligent than us, so maybe it would behave as Bostrom describes, but this strikes me more as being along the lines of what Bostrom thinks we would do if we were smarter. And I doubt that. Bostrom also writes of humans: “Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.” Presumably the same might be said of an AI superintelligence. It might seem omnipotent to us, but it wouldn’t be. It would just be the stupidest thing that could outsmart humans.

One of the curious aspects of Bostrom’s imaginings is that for the most part there is only one superintelligence. Even when he considers the notion of a “collective superintelligence,” its members are all pieces of a single entity. I’m not really sure how a notion of motivation or even intelligence works in the absence of some relation. Now if the premise is that a superintelligent AI would be completely inscrutable to us, then I suppose it might do anything. But you can’t really spin that out into a book manuscript, can you? And I don’t think it would be wholly inscrutable.

Instead, I think it’s necessary to conceive of cognition and agency (i.e. motivation, will, goals, etc.) as emerging in relations with others. A “single” superintelligent being would have little or no cause to think or act. It would end up being no smarter than an amoeba. And really that’s what Bostrom’s monstrous AIs sound like. They are beings driven by a single arbitrary goal (calculating pi or making paper clips) that end up destroying the world and sucking up all the resources in the accessible universe to achieve this end. So they grow tremendously in power and sophistication, adding all these new capacities, but never manage to escape from the inane task given to them at the outset by some lazy, unthinking human programmer.

Oddly enough, the problem Bostrom settles on—as near as I can tell—is the same one we have right now as humans. It’s not simply that we don’t know what core value should drive our actions but that language is ultimately insufficient for articulating such a value.

As you probably already know, it basically comes down to “try to do the right thing, even though I can’t tell you what the right thing will be and most of the time you won’t be certain either. Oh, and when you feel certain it’s probably because the action is fairly rote and inconsequential. If it isn’t rote and you’re certain, then you’ve probably missed something important.”

You don’t have to be superintelligent to know that, and if you fantasize that superintelligent beings will be more certain than us, well then we have very different notions of intelligence. On the other hand if you want to say that we shouldn’t build stupid but powerful killing machines capable of destroying the planet (that is, other than ourselves), then I agree.
