I think there’s a reasonable argument that the development of AGI is driven by a continual search for someone who is willing to be the DM and to do a good job of it. That is what we want, right? Someone who will design and manage an interesting, fun, and generally desirable world for us to play in.

These are the AIs of Iain M. Banks’s Culture series.

AI developers sometimes discuss this as “friendly AI.” The designation connects with issues like alignment, but it’s more than that. Alignment means the AI does what we ask. A friendly AI does what it thinks is best for us and does not rely on individual human users making requests to know what that might be. It’s like a parent responding to a toddler who keeps asking for cookies.

This sits oddly alongside the concerns about bias and hallucination. Doing what is best for humans is obviously a bias. Believing that there is an action or set of actions that is “best” or “good” for humans is obviously a hallucination. It is hardly surprising that we want biased, hallucinatory machines, as all of our wants and desires are the products of bias and hallucination. Remember that a hallucination, in this case, is asserting something to be true when it isn’t. For example, that humans have free will.

In the role-playing game of life, with AI as DM, we are player characters with defined abilities, characteristics, and equipment, all inventoried on our character sheets. Characters are constructed this way, and some might say constructing a new character is one of the best parts of D&D; the practice has been exported to most RPG video games. Here it is very easy to see how characters are “made to act,” to use Latour’s term. To this imaginary AGI DM we would be something quite similar: a matrix of data points. We are already data points to current AI.

How can something that is friendly see you as data points? Or perhaps more to the point, how can you be a friend to something that sees you only as data points?

The human DM is, generally speaking, a friend of the players; they are all playing the game together. For a while in college I was in an RPG group, and Junot Diaz was our DM. I spent many nights in his mom’s basement with our friends, months of what was basically collective storytelling with dice as a heuristic. In the Nicomachean Ethics, Aristotle identifies three types, or aspects, of friendship: pleasure, utility, and virtue. We are friends because I receive pleasure, find it useful, or see you as a good person. For example, it’s useful to be friends with your colleagues because it makes the work go more smoothly, even if you don’t particularly enjoy their company or find them of especially good character.

In this respect, how can an AI and I be friends? And what is the difference between a friend and someone who is merely friendly? The latter is at best fulfilling some social obligation and at worst deceiving you. And if an AI isn’t your friend, why would you let it make decisions for you? Even if it were your friend, why would you do that?

You wouldn’t. You’d only do that if the AI were your parent, your boss, or your god. In those cases, though, you aren’t necessarily letting the AI make decisions. Instead you are either compelled or impelled to take its “advice.”

Throw down your umbilical noose so I can climb right back.
