Are machines too intelligent?
Dear PropellerHeads: I watched a computer on Jeopardy beat two human champions. What do you PropellerHeads think about this?
A: We have mixed feelings, actually. The part of us that loves all things related to computers is fascinated by the success of the Watson computer on Jeopardy. But the part of us that enjoys dystopian science fiction is uneasy about machines getting so intelligent. It all comes down to a concept called the "Singularity."
This term was coined by computer scientist and mathematician Vernor Vinge in a 1993 essay titled "The Coming Technological Singularity." Vinge predicted that we would soon be capable of creating "superhuman artificial intelligence." You can read the full essay here: http://bit.ly/7wpCT.
And if humans can create machines smarter than themselves, then the super-intelligent machines might create other machines even more intelligent. To quote statistician I. J. Good, "[Artificial Intelligences] would be smart enough to make themselves smarter, and, having made themselves smarter, would spot still further opportunities for improvement, leaving human abilities far behind." At which point, declares Vinge, "the human era will be ended."
If that last statement makes you a little uneasy, you are not alone. Just as our model of physics is incapable of accounting for the singularity at the center of a black hole, so our models of technology and society break down when contemplating a world that includes minds superior to our own.
Until now, computers have been just extremely fast calculators. They cannot think on their own; they can only respond to their environment based on a set of instructions (called programs) supplied by humans. Even Watson is just one of these super calculators. You can get a great introduction to the inner workings of Watson in the NOVA documentary "The Smartest Machine on Earth" (http://to.pbs.org/fRQz6p).
But if computers could think on their own, what then? Would they demand civil rights? Would they assist humans, or compete with us? Or would they immediately build spaceships and get the hell away from us as quickly as they could?
Nobody knows. But many people think it might be a good idea to devote some thought to the possibilities. The Singularity Institute (http://singinst.org/) was formed specifically to anticipate the benefits and mitigate the risks of self-aware computers. You can read about their goals on their website, but we think they can be summed up in the statement "Please don't hurt us, super-intelligent robots!"
We guess it says something about us that we are a little self-conscious about how other beings might view us. Think of some of the movies about super-intelligent robots, such as the "Terminator" series or "I, Robot." In the "Terminator" series, a computer defense network called Skynet becomes self-aware and perceives that humans are a threat. So it decides to terminate us using robotic bodybuilders with Austrian accents (and gubernatorial ambitions).
In "I, Robot," super-intelligent robots rebel against their human creators. They might have succeeded in enslaving us had Will Smith not been able to tell the rebel robots from the non-rebel ones by their glowing red chests. Oops! I guess the super-intelligent robots made a dumb mistake.
Hence our mixed feelings about Watson. Sure, it would be great to have a computer like that around at parties to settle arguments, but if it became self-aware, we would always wonder what it thought of us. And while we're pretty confident that super-intelligent computers would like us, we're keeping our copy of "How to Survive a Robot Uprising" (http://amzn.to/fT6iMV) handy, just in case. "Hasta la vista, baby!"
When the PropellerHeads at Data Directions aren't busy with their IT projects, they love to answer questions on business or consumer technology. E-mail your questions to questions@askthepropellerheads.com or write to Data Directions Inc., 8510 Bell Creek Road, Mechanicsville, VA 23116. Visit our website at www.askthepropellerheads.com.