I’ve been working around this idea for a while as part of my book project, but I’m starting to see similar observations made about AI output (i.e., what we encounter). Though we can perhaps find hints of this in earlier time-critical media, those primarily replayed already-inscribed media, putting them into operation; this is something new, in my view.

Here’s the basic version. All conscious human symbolic action must bear the ethics, the character, of the speaker. In this respect it is ethical. We might not hold the speaker’s character (or ethics) in high regard, but that’s the point. The speaker is accountable for the speech and can be made to account for it. (Keep in mind this is the basic version.) The key part is that the speaker’s utterance occurs in their conscious present experience, over time. We encounter it similarly, even if not contemporaneously.

This cannot be said of AI output. Regardless of all the a priori stagecraft, there is no practical account of AI output’s “generative resolution” (as I term it). Yes, technically there are logs, but how could any human account for them without employing an AI as an intermediary, which then introduces a new set of undecipherable actions?

We cannot wish away AI’s non-ethical outputs. Instead, we are faced with accepting their consequences. Could we create an AI with a capacity for ethical deliberation? I don’t know. Maybe. But it would not serve our chronopolitical demands. The whole point of AI is to precede and foreclose the necessity of ethics, not to practice ethics at speed.

In short, if you want a non-ethical solution to climate change, etc., then AI might provide one. On the other hand, if you want an ethical response to climate change, well, you’re just gonna have to wait.
