In an interview with New Scientist, robotics expert Noel Sharkey expresses his doubts about the possibilities of artificial intelligence. He says:
It is my contention that AI, and particularly robotics, exploits natural human zoomorphism. We want robots to appear like humans or animals, and this is assisted by cultural myths about AI and a willing suspension of disbelief. The old automata makers, going back as far as Hero of Alexandria, who made the first programmable robot in AD 60, saw their work as part of natural magic – the use of trick and illusion to make us believe their machines were alive. Modern robotics preserves this tradition with machines that can recognise emotion and manipulate silicone faces to show empathy. There are AI language programs that search databases to find conversationally appropriate sentences. If AI workers would accept the trickster role and be honest about it, we might progress a lot quicker.
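Sharkey's line about language programs that "search databases to find conversationally appropriate sentences" describes a genuinely old trick: retrieval, not understanding. Here's a minimal Python sketch of the idea (the keyword table and fallback line are my invention, not any particular program Sharkey has in mind):

```python
# A toy retrieval-based chat program: no comprehension, just a
# keyword lookup over canned sentences. Table contents are invented.
RESPONSES = {
    "hello": "Hello! How are you today?",
    "robot": "Robots are fascinating, aren't they?",
    "feel": "I understand how you feel.",
}

def reply(utterance: str) -> str:
    """Return the first canned sentence whose keyword appears in the input."""
    words = set(utterance.lower().replace("?", "").split())
    for keyword, sentence in RESPONSES.items():
        if keyword in words:
            return sentence
    # A vague dodge that sounds "conversationally appropriate" for anything.
    return "That's interesting. Tell me more."

print(reply("How do you feel about robots?"))  # -> "I understand how you feel."
```

The program "shows empathy" only in the sense that it found the word "feel"; the rest is our willing suspension of disbelief.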
So of course when someone mentions the use of trick and illusion, I immediately think of rhetoric. It's not about being intelligent; it's about convincing others that you are intelligent. In other words, AI is like an academic conference. OK, enough snarkiness.
One of the fundamental conceptual problems of AI is that we don't have a very good grasp on what "intelligence" is to begin with. As Katherine Hayles' work details, the history of this field is replete with hand-wringing over defining human intelligence in a way that would allow it to be artificially recreated with computers. Is intelligence really computational? Does intelligence equal sentience/consciousness? Obviously we don't know the answers to these questions.
What does seem more likely is that we might develop computers/robots that have the appearance of intelligence, driven by computation. As Sharkey points out, Deep Blue can win chess matches with brute-force computation. Perhaps in some sense this makes it smarter than humans, but its "thinking" isn't anything like our own. On the other hand, when I talk with someone, all I can discern is the appearance of intelligence.
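For the brute force point, the core of Deep Blue's play was exhaustive game-tree search: enumerate every line of play to some depth and pick the move with the best guaranteed score. A toy minimax sketch (the tiny tree and its leaf scores are made up; real chess engines add evaluation heuristics and pruning):

```python
# Minimax over a hand-built game tree: leaves are static scores,
# inner lists are choice points. The machine "thinks" by checking
# every branch, nothing like human intuition.
def minimax(node, maximizing: bool) -> int:
    if isinstance(node, int):  # leaf: the score of a final position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

game_tree = [[3, 12], [2, 8], [14, 1]]  # invented values
print(minimax(game_tree, maximizing=True))  # -> 3
```

Smart or not, what the machine does is exhaustive checking, not contemplation.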
From a digital humanist viewpoint, one of the interesting things about AI discourses and practices is the way they lead us to reconceptualize human intelligence. Does the computational model of the mind represent the material reality of how we think? I suppose that would fall into the "I guess anything is possible" category. Does it shed light on how equally flimsy and contingent other, more traditional models of the mind might be? I'd think so.
I just think it's interesting to turn this conversation around and ask of human intelligence the same question Sharkey asks of AI. Maybe human intelligence is little more than cultural myth and a willing suspension of disbelief. After all, there are plenty of humans who could use some additional programming in order to "recognize emotion," "show empathy," and "find conversationally appropriate sentences." (I guess I wasn't finished with the snarkiness.)