Artificial Intelligence Post Number 21

I just finished reading “The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World,” by Pedro Domingos. This is a very difficult read. The author covers certain areas in great (and confusing) detail, and is contradictory in other areas. For example, despite the concerns and warnings from many very smart people about AI, the author states that there is absolutely no risk of artificial intelligence computers taking over. But he then proceeds to list some ways that computers MAY take over, including The Master Algorithm getting into the hands of a bad person. What are the odds of THAT happening? He also says there is a risk of people becoming so confident that computers are always right that they begin to follow the AI computer like a god. Again, what are the odds of THAT happening? Have people ever really done that?

The author believes that it is important that The Master Algorithm be discovered BEFORE individual algorithms get developed for specific narrow problems, because otherwise the details of those individual algorithms will become too complex to incorporate into one Master Algorithm. Yet the author spends chapters talking about the progress already being made in these individual learning algorithms, in areas like self-driving cars and medical care. He doesn’t say how we are supposed to stop this progress until someone identifies an overriding Master Algorithm. He does say that the individual algorithms on specific subjects will become more all-encompassing. He just feels that the required computing power will be overwhelming when someone tries to put everything together into one Master Algorithm package equivalent to a brain. He doesn’t mention breakthroughs like IBM’s TrueNorth computer chip.

The author then lists the competing philosophies being applied in the development of a Master Algorithm, which he calls “the five tribes of machine learning.” The five tribes are symbolists, connectionists, evolutionaries, Bayesians, and analogizers. I won’t even attempt to describe the details of each, which is the main content of the book.
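Just to give a small flavor of how differently two of these tribes think, here is my own toy illustration (not an example from the book, and the tiny dataset below is made up purely for demonstration): a connectionist-style perceptron and a Bayesian-style naive Bayes classifier answering the same yes/no question in a few lines of Python.

```python
# Toy data: (feature1, feature2, label). In this made-up example, feature1
# happens to determine the label, so both learners should pick up on it.
data = [(1, 1, 1), (1, 0, 1), (0, 1, 0), (0, 0, 0), (1, 1, 1), (0, 1, 0)]

# --- Connectionist flavor: a single perceptron trained by error correction ---
w = [0.0, 0.0]
b = 0.0
for _ in range(10):                      # a few passes over the data
    for x1, x2, y in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = y - pred                   # nudge weights toward the correct answer
        w[0] += err * x1
        w[1] += err * x2
        b += err

# --- Bayesian flavor: naive Bayes, just counting and multiplying probabilities ---
def naive_bayes(x1, x2):
    scores = {}
    for label in (0, 1):
        rows = [d for d in data if d[2] == label]
        prior = len(rows) / len(data)
        # P(feature = value | label), with add-one smoothing
        p1 = (sum(1 for r in rows if r[0] == x1) + 1) / (len(rows) + 2)
        p2 = (sum(1 for r in rows if r[1] == x2) + 1) / (len(rows) + 2)
        scores[label] = prior * p1 * p2
    return max(scores, key=scores.get)

print("perceptron says:", 1 if w[0] * 1 + w[1] * 0 + b > 0 else 0)
print("naive Bayes says:", naive_bayes(1, 0))
```

Both get the toy question right, but they arrive there in completely different ways – one by nudging weights after each mistake, the other by counting and multiplying probabilities – and that difference in style is roughly the kind of disagreement between the tribes that the book spends its chapters on.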

I read this book because the author, Pedro Domingos, is a professor of computer science and a winner of the SIGKDD Innovation Award. The book is very recent (published September 22, 2015) and is rated fairly well on Amazon. So I thought that I would be getting very up-to-date information. But I don’t feel it helped much in this blog’s quest to see the progress of AI in general, and when we should expect to see dramatic changes as a result of AI. I actually think that my fictional book “Artificial Intelligence Newborn” does a better job of laying out a possible AI future, especially given that Domingos gives a zero chance of AI taking over!


4 Responses to “Artificial Intelligence Post Number 21”

  1. Bob Kaufman Says:

    If a professor of computer science states that there is zero chance of AI taking over and can’t explain their rationale for that conclusion simply in a paragraph or two, then I am assuming they have some kind of ideological belief system that does not allow them the freedom to imagine it, and that the ideology is as irrational as their conclusion. That is just a guess, as I do not really know. But zero chance with no explanation – really!! Anyway, glad I did not waste my time on this book. Sorry you did.

  2. Bill Says:

    “172. First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

    173. If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

    174. On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite — just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes “treatment” to cure his “problem.” Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or to make them “sublimate” their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they most certainly will not be free. They will have been reduced to the status of domestic animals.”

    Is it time to call Ted Kaczynski a prophet?

  3. Bill Says:

    What’s scarier is that, now that many of his predictions are looking more like reality, those who sympathize with his views may try to foment social unrest. Or they may try to co-opt existing social movements to create even more chaos. He refers several times, in his Strategy section, to trying to create social tension.
