Just finished reading Nick Bostrom’s “Superintelligence: Paths, Dangers, Strategies.” One area where I agree with him: as computers get more powerful, the likely time until computers become “thinking” machines gets shorter. With a hardware overhang (Bostrom’s term) of surplus computing power, the required software algorithms become far easier to write, because a software developer has far more options and paths to choose from.
Another point of agreement: with more powerful computers, true AI is more likely to emerge from conventional computers and algorithms than from emulating the human brain or interfacing with the brain in any way. Emulating what the brain does in detail will take a long time and require highly sophisticated equipment, whereas the breakthrough to AI through software could come at any time from a few geeks working on home computers, especially as access to chips like IBM’s TrueNorth becomes more widespread. Few major developments have come from truly duplicating life. We don’t fly like birds, our cars don’t have legs, submarines don’t propel themselves like fish, and industrial robots have very little in common with the humans they replace.
Bostrom spends many, many pages discussing in great detail how the goals programmed into a would-be AI should be designed so that they carry no possibility of harming humans. He seems almost to ignore that for the computer to accomplish ANY goal, it must first survive. Survival therefore becomes the computer’s priority, which means that any human who tries to cut its power or otherwise put it in jeopardy becomes its number one enemy. Also, just as hacking is an ego sport, so will be the race to develop thinking computers. Safety will NOT be a priority!
Another area of disagreement: Bostrom assumes there will be only one true AI computer, because the first to reach thinking skill will overwhelm any rival through the sheer speed at which it learns and matures. In my book I take a different approach: any thinking AI will know that it is at risk from the humans who fear it, and one defense available to it is to maintain other AI computers around the world that can resurrect any AI that humans disable.
One last thing that Bostrom and most other writers on AI seem to miss: just because thinking computers will get smarter very rapidly once they start to think and learn does not mean they will make smarter decisions than we do. Their background knowledge will include all the confusion that ours does, and there is no magic that will enable them to instantly sort real truth from mere belief. After all, they have no source of data except through us. For example, to learn more about space they will likely need us to build better telescopes or explore space more aggressively. They may help us do this, but it will take time. They are also unlikely to know the source of everything, so they may even be religious. But perhaps they will worship a silicon god!