It was I.A. Richards who famously wrote that rhetoric “should be a study of misunderstanding and its remedies.” Today, “understanding” has a very different cultural status and meaning as a term of art in AI research. As such, it is easy to misunderstand misunderstanding. Its remedies are (of course) samples of the pharmakon.

We discuss poststructuralism, especially deconstruction, as a fascination with why language cannot secure meaning, and there is that. But there is also the examination of how symbolic behavior seems to work despite this. And symbols appear to work for machines as well, as our software-becoming-AI culture demonstrates. The poststructural/postmodern/posthuman philosophical approach to the mechanisms of understanding is different from Richards’ semantic triangle, but I’d say they are asking similar questions. And that’s just one binary, one segment, of many possible ways to pose this question.

While this may seem an abstract, even esoteric, concern, and one that has been with us for a very long time, certainly going back to the Phaedrus, it is, if anything, more important than ever. That importance lies in its significance for how we understand intelligent machines: their current operation, speculations on their future operations, and the discourses in which these concepts participate.

For example, we want machines to understand our requests. Maybe we want them to understand our requests in the way waitstaff do at a restaurant. We don’t want them to respond the way a spouse or parent or Instagram health influencer might to our ordering a giant cheeseburger and fries with a shake at a fast food drive-thru. Those folks have different understandings of us, and different motivations than just putting some correct slop in a paper sack for us. Maybe, if we think about it, we might also be uncomfortable with the uncanny observations machines make about us when they feed our online fast food order into vast populations of behavioral matrices.

What does the machine understand about us from our clicks? Do we want it to respond like a close friend who might ask “are you stress-eating again?” Do we want it to start spamming us with pizza delivery ads a few hours later? Or whatever products it thinks come next, just as it composes sentences by deciding what word comes next? How can I expect the machine to understand me when I don’t understand myself? Maybe it knows me better than I know me.

As contemporary rhetoric scholars, I think we generally view language not as representing some a priori understanding that two or more people or machines share but instead as participating in the formation of a shared understanding, or at least an agreement that there is a shared understanding.

Depending partly on a semantic argument (i.e., how do we define understanding?), we can identify understandings that are ineffable (a word for saying there are no words). If you duck from a high, inside fastball, your reaction happens faster than words, but you did “understand” the situation, and later we can explain your understanding with words in a manner that satisfies most audiences. We commonly say our deeply held emotions exist beyond our capacity to express them, though if someone says they are deeply in love with someone else, a third party probably finds that a sufficient description of the emotional state.

I guess my point is that understanding is negotiated and conditioned. It is first negotiated in/through/with language, and we experience this negotiation, especially when we write. Part of that negotiation is with the cultural-material history and conditions of the language (why words suggest particular understandings of themselves that are ultimately specific to each singular rhetorical situation). It is also negotiated with our interlocutors and audiences. And the qualities of that negotiation can vary widely.

The capacity for understanding exhibited by intelligent machines (which is really to say our experience, our understanding, of their understanding) and how we define it will be powerful conditions for the path of AI development. Our understanding of understanding will be, is already, secured by institutional discourses: religious, legal, neuroscientific, psychological, etc. The exhibition of understanding will also be conditioned by market-economic contexts: what will get people willing to use, purchase, and invest in AI?
