Discussions of artificial intelligence, particularly the question of machine consciousness, tend to involve questions of qualia. Can AI experience experiences? This is a tangential question for my purposes here.
Aside from AI, what other common products do we call artificial? The examples I came up with, aside from artificial turf, are food products: sweeteners, colors, and flavors. Each of these products, including turf, is successful for its ability to produce qualia in humans that are satisfactory substitutes for our experience of their non-artificial referents. We might think of AI in the same way if we think of intelligence as qualitative and subjective. AI may be intelligent in the same way that artificial sugar is sweet, because we experience them as such. This is the basis of the Turing Test, after all.
As I wrote in the last post, we might also define understanding as experiential. This is why understanding is rhetorical, why it is negotiated. Arguably, we do not continually renegotiate our understandings because doing so would probably kill us as a species. Instead, it takes a lot for us to change our minds once we come to an understanding. That's more energy efficient. And we've built all these institutions to help us limit the need for creating new understandings.
But every now and then, things go pear-shaped because the existing understandings start malfunctioning… sorry, I got distracted for a moment because I started thinking about China Miéville's Embassytown. It's very apropos here. We are in a different situation, but one that also involves malfunctioning understandings. If you focus on it, you might start to taste it: how the milk of understanding has gone a little sour. And intelligence? Intelligence has gone right off. I mean, can you swallow what's being fed to you?
So there’s the phenomenology of qualia, and there’s the empiricism and materialism of the real. On the materialist side, we might say that intelligence exists in some virtual form that humans, computers, and others apprehend differently. Maybe it exists in the way gravity exists, as emergent in the relations among things. We experience intelligence in the way we experience gravity: because we are bound to the Earth. This makes it neither less real nor less qualitative.
I think few people would deny that. The question is whether, and how, the two should go together: a unified theory of intelligence, one might say.
Meanwhile, for the commercial developers of AI, what probably matters most is the user experience of intelligence, which begins with understanding (in the AI developer’s sense of the term). What matters is that the user feels understood, even if the program isn’t able to fully satisfy the user. After all, who is ever “fully” satisfied? And we want these folks coming back seeking more satisfaction… all day long.
Who could blame us if we started feeling a little bloated, a little understanding-intolerant?