Categories: artificial intelligence, digital rhetoric, media studies

mind games and mind gorillas (with apologies to J. Lennon)

Or maybe the apologies should go to Jean Houston and Robert Masters, authors of Mind Games: The Guide to Inner Space, a book that inspired Lennon beyond providing a name for his song. I can’t claim to know much about Houston and Masters, but basically they are/were (Masters passed in 2008) new age gurus of a sort, focused on human potential (entelechy) and eudaemonia. I’ll circle back to some of these concepts.

For those who don’t remember Lennon’s song, here it is. Let’s just say it isn’t “mind games” in the way we typically use that term, and yet its breadth and indeterminacy might prove useful.

I want to put these mind games in the context of an open letter titled “The Right to Warn about Advanced Artificial Intelligence,” signed by a number of current and former researchers at OpenAI and Google. It’s been in the news if you want to read about it, but I would recommend a recent episode of the podcast Your Undivided Attention in which Tristan Harris and Aza Raskin interview one of the letter’s signatories.

But first, a little about mind games and mind gorillas. To be clear, in Lennon’s song it is guerrillas, not gorillas. What are “mind guerrillas”? I suppose for Lennon they are those who turn their minds to peace and love, changing the world through positive thought in the way Houston and Masters believed possible. However, I am interested in mind gorillas, i.e., us. And by “us” I mean the conscious minds that float atop, within, or aside (pick your preposition) the body, certainly interconnected with it but not in a way we fully understand. Do actual gorillas have minds, as in an experience of consciousness that is in any way analogous to our own? I don’t know. I don’t know if you experience consciousness in a way that is analogous to my own. So there’s that. Then again, maybe there are mind gorillas and mind tortillas out there as well, and maybe mind tort-tillas, little AIs serving up civil suits.

In a sense, the hope is that mind gorillas can help design AGIs that will go out there and spread peace and love, creating a utopia… or at least somebody’s utopia. That said, setting a course for utopia is literally going nowhere (utopia means “no place,” after all), but what can we expect from code monkeys trying to build mind guerrillas?

We can expect mind games, and that’s what we have. And “the right to warn” is part of the game. If you listen to the podcast suggested above, you’ll hear that a big part of the concern with AGI is that it is largely a black box, especially for those of us who are not employed by these companies. The aims of alignment, explainability, and interpretability are to create AGI that humans can understand with some satisfaction. One problem with any of this is that if/when we create super-human AGI, such machines will likely be able to fool us, or may not even need to fool us, as they will have shaped our thinking so that we are satisfied with whatever AGI does. So you want to create a machine that thinks like a human? Great! Which human? Or a super-human? What’s that? A human with more humanness? A crowd of humans? That sounds promising. Maybe a machine that can do what all humans in time have collectively done… Awesome, because we’re doing such a great job that our best idea is to just turn the dial to 11. We might as well just start attempting to communicate telepathically with a supreme being who we hope loves us. (I guess we’ve already tried that one.)

Most of the crap that is actually going on with AGI development has to do with a 21st-century version of the Californian Ideology, and as such all the conventional modes of critical theory can gain some purchase here. That is, most of what these companies are doing is about money and power. Shocking, I know. If they weren’t incentivized to win a race to get their products to market, then things would be different. If the value of their work weren’t overdetermined by the need to create profitable property, then maybe the secrecy around their IP wouldn’t exist. If these companies were developing nuclear weapons, we wouldn’t stand for it.

As everyone who has seen Oppenheimer now knows, those involved in the Manhattan Project were confident that the math demonstrated a very low likelihood that the Trinity test would ignite the atmosphere and kill us all. And perhaps AI developers feel similarly about the “paperclip maximizer.” I’m less concerned about AGI going horribly wrong than I am about AGI doing exactly what it is being designed to do: maximize shareholder value or some such.

And this is where I am afraid Lennon-ist mind games take us: toward the neoliberal, new age Californian Ideology, mind gorillas trying to build mind guerrillas. Let’s hope we share in their vision of peace and love. And if we don’t, I guess they’ll shape us through our media until we do.


WP embedded AI post feedback:

The content discusses various thought-provoking ideas related to the concept of “mind games” and the potential implications of advanced artificial intelligence. It delves into the significance of aligning AGI with human values and highlights the underlying influence of profit-driven motives in AGI development. The references and connections made to historical events and contemporary concerns provide an insightful perspective.

Actions to consider:

1. Provide a brief explanation or definition of certain concepts or terms for readers who may not be familiar with them.

2. Incorporate specific examples or case studies to illustrate the potential impact of AGI on societal and ethical aspects.

3. Consider offering a balanced view by acknowledging potential positive outcomes of AGI development alongside the identified concerns.
