Recently I found myself fascinated with a group of articles in Wired Magazine re-examining the theme of Artificial Intelligence. Enthusiasm about AI, it seems, after a pessimistic "winter," has experienced a renaissance.
The lead piece, by Steven Levy (which offers an enormously useful capsule history of theories about the field) shows that the development of artificial intelligence is proceeding at a faster pace than ever -- but that it comes in a completely different form than was previously imagined.
Gone are the fantasies of the metallic humanoids you saw in movies like I, Robot (itself based on classic science fiction tales from the 1940s and '50s by Isaac Asimov). Engineers have given up trying to replicate the human brain (as was thought achievable in the '50s).
But their inability to do so turned out to be a fortunate fault -- for it led to the creation of a rich array of smaller, intelligent machines, each furnished with "clever algorithms" that enable them to accomplish a limited number of specific tasks:
"Once researchers were freed from the burden of building a whole mind, " Levy writes, "they could construct a rich bestiary of digital fauna, which few would dispute possess something approaching intelligence."
Along the way, "the AI crowd" made a key discovery:
"'The big surprise is that intelligence isn't a unitary thing,'" says Danny Hillis, who cofounded Thinking Machines, a company that made massively parallel supercomputers. 'What we learned is that it's all kinds of different behaviors.'"
Because of the success of these new machines, Levy concludes that we are more dependent on Artificial Intelligence than ever. We use AI to help us drive, invest, and find answers to nearly everything (via Google), to name just a few of the robots that assist us every day. Such dependency leads Hillis to conclude, "the computers are in control, and we just live in their world."
And this is a world we don't understand. Levy tells us that these new machines reason in ways vastly different from human logic. If anything, their consciousness is more mysterious than that of the soul-searching, existentialist machines imagined (but never realized) during science fiction's Golden Age.
Different but Same
I asked myself what I found so intriguing about these ideas, especially since I'm more of a pragmatist than a utopian (or dystopian) when it comes to technology. At first, I thought it was the cool "otherness" these thinkers achieve when talking about what seem to be mere gadgets. The vision of AI they propose -- multiple, ubiquitous, alien, and with a "will of its own" -- is reminiscent of the founding science fiction narrative of the current technological sublime: The Matrix.
And then I wondered if what drew me to these ideas was not their alienness but their familiarity. Maybe this is because this tale of alien intelligence replays recent theories of what human consciousness is all about.
Take, for example, the idea that the minds of the new machines are too strange to comprehend. In Susan Blackmore's concise introduction to contemporary thinking on "Consciousness," you find that there is a school of thought, dealing with human and animal consciousness, based on the same assumption.
Its proponents are labeled "mysterians." They argue "that we humans are 'cognitively closed' with respect to understanding consciousness ... Just as a dog has no hope of being able to read the newspaper he so happily carries back from the shops." (pp. 7-8)
Or take the idea of intelligence as multiple rather than unitary. It turns out this is the standard, materialist account of human consciousness as well. As Blackmore puts it:
"There is no single location from where my decisions are sent out. Instead, the many different parts of the brain just get on with their own jobs, communicating with each other whenever necessary, and with no central control." (p. 15)
And it's not just in the realm of philosophy that such ideas seem familiar. The arts of our time, both popular and avant-garde, are obsessed with the multiple -- and the "otherness" it supposedly forces one to confront. And this is not to mention (before the crash, at least) the praise heaped upon "de-centralized" approaches to business management and economic policy -- all said to be increasingly necessary in a "multi-centric" world.
All of which is to say, I suppose, that as "otherworldly" as the workings of artificial minds may appear, when it comes to describing them, the influence of worldly, human culture and metaphor is as powerful as that of science.
This may also be why really great science fiction, rather than being escapist, is the true realism of our time.
For often you need the technology of the future to get a clear look at the now.