Tag: artificial intelligence

  • Why Artificial Intelligence Won’t Take Over the World

    Any “free agent” that operates in our world must have an objective, or it is not going to get very far.

    This is true of people. If you ever observe people who “go in circles” or are “stuck” in life – it’s because they have no clear vision/objective. Many people give up on their dreams and objectives in their first few decades of life, and wander aimlessly after that.

    However, you’ll also notice that such people are often relatively “benign” – i.e. they may contribute to CO2 release by driving around to aimless jobs and activities, and they may overconsume orange-food-colored chips and lite beer… but without an objective, they’re not actively trying to hurt anyone. If hurting someone does happen, it’s almost always an occasional reaction to something they don’t like, and it’s usually focused on just a few people close to that person. This is not the stuff of “taking over the world.”

    No. To understand the idea of AI taking over the world, we have to look at the movers and shakers, the people with a plan and objective.

    These are the people who have a true impact – be it positive or negative. That’s because they are clear on where they are headed, and they marshal resources to get there.

    For AI to have any chance of “taking over” as it’s been portrayed in many blockbuster movies, there would have to be a clear plan and objective “to take over.” It is certainly possible that an advanced AI could have this as an objective, and marshal resources towards that end. It may even make some progress in taking over parts of the world. However, the idea that it would take over the world misses the boat.

    If the technology to build one such AI exists, then more can (and likely will) be developed soon thereafter. There is no reason that distinct AIs developed by different people (or by other AIs) in distinct contexts would all have the same objective. It’s just like people. Some people have attempted to take over the world with nefarious objectives (such as Hitler) – but fortunately, there were other people with other objectives who pushed back and stopped them. It is silly to think that ALL AI would share one unified goal of “taking over the world.” Certainly, any responsible AI developer will bake into their AI cake clear objectives that are for the good of humanity, not to its detriment.

    In one popular movie series (which I enjoyed very much), a military defense network takes over and launches nukes to get rid of humanity. Yet a “network” is actually a collection of hundreds or thousands of machines, each with different goals, programming, and firewalls. The idea that these would all join together in one unified objective of destroying humanity, before anyone – or any other AI – could stop them, is far-fetched.

    Think of it like a friends network. Even if you hatch some kind of evil scheme to take over the world, will all your friends automatically agree and join you? It’s unlikely. This is exactly why I’m not a big conspiracy theorist: any sufficiently powerful conspiracy would have to make sure that everyone agrees and doesn’t sabotage its goals. In the real world, that kind of consensus is extraordinarily hard to achieve – especially in the Internet age.

    While nothing is truly impossible, the scenario in which AI actually takes over the world is very remote.

    Now, there are a few hidden lessons in this for any person wanting to live a better life. They are:

    1. Operating without a clear goal or objective in life is a recipe for “failure.” Many people give up on their big plans and ambitions at some point in life, and just start living reactively – day to day – with no real purpose. We all must have purpose in order to thrive or to have any impact.
    2. Many people get frustrated that there are other people in the world who hold different opinions and perspectives. This post was written during an election season in the USA, when many people are decrying the “other side” and how awful they are. Yet it is exactly this diversity of opinion that prevents the world from being taken over by men like Hitler. All that diversity makes the world a truly glorious place. If you truly embrace the diversity of thought and opinion, life becomes a lot more pleasant than continually fighting the “other side” to “prove” who’s right and who’s wrong. It’s okay if you’re into that kind of thing, but it sure does waste a lot of time and energy – and it never really improves the world. The people who improve the world are those who follow point #1 above and accomplish great things.

    Is it your time to thrive? Get a clear purpose and embrace the diversity of opinion that exists – even those who might seem to stand in your way.

  • Would computers really want to do math proofs?

    Roger Penrose wrote a book titled “The Emperor’s New Mind.” The book roused quite a bit of controversy, because he claimed he had proof that computers can’t think in the same way that humans do.

    To illustrate, he used the complex subject of mathematical proof-making. Using various fancy arguments tied to Gödel’s incompleteness theorem, he basically said this: computers can’t go beyond the logical system they’re already inside of, and math proofs must go beyond what’s already known; therefore, humans are doing something when making math proofs that computers can’t do.

    There’s been a lot of argument about the points he raised. Some artificial intelligence proponents have shown ways in which computers can do things like mathematical proof-making. The argument goes ’round and ’round in circles of complex logic and abstract math theory, with neither side really gaining much traction in the debate.

    Yet it’s been almost twenty years since the book was published, and we still don’t have self-aware or creative computers. Penrose was onto something, but I think he somewhat missed the boat on a much simpler argument that could have been wielded for his side: would computers really want to do math proofs to begin with?

    No. Not unless a human programmed them to do math proofs. That is the one and only instance in which a computer will do math proofs.

    This is really where the argument lies. Computers have no inherent self-awareness, and hence no intrinsic motivation to accomplish any goal. The only “motivation” they ever have is that supplied by their human programmer.

    Now, a simplistic argument might go like this: “Ok, Ms. Smarty Pants, but our human motivations are simply programmed functions of our DNA and the environment we grew up in!” Although there is no hard proof for this would-be assertion by the would-be hard-core artificial-intelligence practitioner, it is nonetheless likely to be wielded much like a knight’s shield going into battle.

    The counter-proof to this not-really-proof is simple: humans often go directly against their programming. For the new book I’m writing, I’ve been doing a bit of research on the Wright brothers. You know, those guys who invented powered flying machines that actually worked. They went against their programming in many ways. They gave up their bike shop, which provided their economic well-being, to have the time to pursue flying – and they suffered financially. They endured multiple crashes resulting in broken bones and the real possibility of death, but they kept flying. After their first successful flights (albeit short ones), they faced scorn and outright hostility, with many newspapers calling them liars. They were going firmly against their programming of self-preservation, social belonging, and economic well-being to pursue a crazy idea that nobody had any evidence could actually work.

    No computer can do that, or will do that – at least not as presently constructed. A series of binary switches, no matter how complex, is still just a responsive mechanism. There is no place for motivation in there. There are just inputs and outputs. That’s all. Those inputs and outputs may do some incredible things, and do them very fast. But a computer is not suddenly going to become magically “aware” just because it gets faster.

    A lot of people think that when computers reach the computing capacity of our brains, they will automatically become aware. Ummm, no. That’s about as logical as saying that putting a bigger engine in your car will help it drive itself. The bigger engine will help the car go faster, but the car still needs a driver.

    Computers are ultimately just machines, much like that car. It’s a fine and noble mission to give them ever “bigger engines” so that they can do more stuff, faster. But to assume that this will, at some point, automatically lead to them being self-aware and self-motivated is much like assuming that your car is going to suddenly start driving itself. It is just a fantasy.