Roger Penrose wrote a book titled “The Emperor’s New Mind.” It stirred up quite a bit of controversy, because he claimed to have proof that computers can’t think the way humans do.
To illustrate, he used the complex subject of mathematical proof making. Drawing on various fancy arguments tied to Gödel’s incompleteness theorem, he basically said this: computers can’t go beyond the logical system they’re already operating inside of, and math proofs must go beyond what’s already known; therefore, humans are doing something when making math proofs that computers can’t do.
There’s been a lot of argument about the points he raised. Some artificial intelligence proponents have pointed to automated theorem provers as evidence that computers can do things like mathematical proof making. The back-and-forth goes ’round in circles of complex logic and abstract math theory, with neither side gaining much traction in the debate.
Yet here we are, almost twenty years after the book was published, and we still don’t have self-aware or creative computers. Penrose was onto something, but I think he somewhat missed the boat on a much simpler argument that could have been wielded for his side: would computers really want to do math proofs in the first place?
No. Not unless a human programmed them to. That is the one and only circumstance in which a computer will do math proofs.
This is really where the argument lies. Computers have no inherent self-awareness, and hence no intrinsic motivation to accomplish any goal. The only “motivation” they ever have is that supplied by their human programmer.
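To make that concrete, here’s a minimal sketch (my own hypothetical illustration, not anything from Penrose’s book) of a computer “doing math”: a brute-force check that a propositional formula is a tautology. Notice that the goal, the method, and even the formula to check are all hard-coded by the human. The machine contributes no desire of its own:

```python
from itertools import product

def is_tautology(formula, variables):
    """Return True if `formula` is true under every possible assignment."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if not formula(env):
            return False  # found a counterexample, so not a tautology
    return True

# The "motivation" here -- verifying the law of the excluded middle --
# was supplied entirely by the programmer, not by the machine.
excluded_middle = lambda env: env["p"] or not env["p"]
print(is_tautology(excluded_middle, ["p"]))  # prints: True
```

Run it and it dutifully prints True. Turn it off and it never wonders what else might be worth proving.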
Now, a simplistic argument might go like this: ok, Ms. smarty pants, but our human motivations are simply programmed functions of our DNA and our environment growing up! Even though there is no hard proof for this would-be assertion by a would-be hard-core artificial intelligence practitioner, it is nonetheless likely to be brandished much like a shield by a knight going into battle.
The counter-proof to this not-really-proof is simple: humans often go directly against our programming. For the new book I’m writing, I’ve been doing a bit of research on the Wright brothers. You know, those guys who invented powered flying machines that actually worked. They went against their programming in many ways. They gave up their bike shop, which provided them with economic well-being, to free up the time to pursue flying, and they suffered financially for it. They endured multiple crashes, resulting in broken bones and brushes with death, but they kept flying. After their first successful flights (albeit short ones), they faced scorn and outright hostility, with many newspapers calling them liars. They were going firmly against their programming of self-preservation, social belonging, and economic well-being to pursue a crazy idea that nobody had any evidence could actually work.
No computer can do that, or will do that – at least not as presently constructed. A series of binary switches, no matter how complex, is still just a responsive mechanism. There is no place for motivation in there. There are just inputs and outputs. That’s all. Those inputs and outputs may do some incredible things, and do them very fast. But at no point is a computer suddenly going to become magically “aware” just because it gets faster.
A lot of people think that when computers reach the computing capacity of our brains, they will automatically become aware. Ummm, no. That’s about as logical as saying that putting a bigger engine in your car will help it drive itself. A bigger engine will make the car go faster, but the car still needs a driver.
Computers are ultimately just machines, much like that car. It’s a fine and noble mission to keep giving them ever “bigger engines” so they can do more things, faster. But to assume that, at some point, this will automatically lead to their being self-aware and self-motivated is much like assuming your car is going to suddenly start driving itself. It is just a fantasy.