Artificial Intelligence Post Number 5

Countries viewing this blog within the last 30 days: US, Canada, Sweden, United Kingdom, Malaysia, Australia, European Union, Greece, Taiwan, and India. So there certainly is worldwide interest (fear?) in Artificial Intelligence. And progress in this field has been grossly underestimated. Elon Musk wrote in a recent email (which he has since deleted because it wasn’t supposed to go public): “The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most.” He added, “Please note that I am normally super pro technology, and have never raised this issue until recent months. This is not a case of crying wolf about something I don’t understand.” Note that I own Tesla stock, so I have some bias as to Elon Musk’s brilliance!

Let’s look at Watson, the IBM computer that won Jeopardy in 2011. Since that victory it has been largely ignored, but it has been very busy. Watson is learning Japanese, a language with three different writing systems and thousands of characters. Watson has had 250,000 Japanese words loaded into its memory, and it must now work through 10,000 diagrammed sentences. Human translators give feedback on any needed corrections, and by means of this feedback loop, Watson will LEARN the language. Note the emphasis on LEARN! Watson is getting ever closer to learning a language in the same manner as humans do. However, it won’t forget the language as many of us have from non-use since our college days!
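To make that feedback loop concrete, here is a minimal Python sketch of human-in-the-loop learning. It is purely illustrative; the words and the lookup-table “model” are invented for this post and are not Watson’s actual architecture. The machine proposes an answer, a human translator corrects it, and the correction is folded back into the machine’s memory.

```python
# Toy human-in-the-loop learning sketch (illustrative only; NOT Watson's
# actual architecture). The "model" is just a lookup table that starts
# with a wrong guess and gets corrected by human feedback.

model = {"inu": "cat"}  # initial (wrong) translation guess

def translate(word):
    """Return the model's current best translation."""
    return model.get(word, "<unknown>")

def apply_feedback(word, proposed, correction):
    """Fold a human translator's correction back into the model."""
    if proposed != correction:
        model[word] = correction  # the 'learning' step in this toy

# One turn of the feedback loop:
word = "inu"
proposed = translate(word)             # model proposes "cat"
apply_feedback(word, proposed, "dog")  # human translator says: it means "dog"
print(translate(word))                 # prints "dog" -- the model has learned
```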

But IBM has more than Watson up its sleeve. It has TrueNorth! This chip is designed to think like the human brain! It is a neural network chip that already works. The chip has the equivalent of 1 million neurons and 256 million programmable synapses, and it has its own programming language, developed under the Defense Advanced Research Projects Agency (DARPA) SyNAPSE program. TrueNorth chips can be tied together to create huge systems: IBM’s goal is to integrate 4,096 chips, giving 4 billion neurons and 1 trillion synapses while using only 4 kW of power. If you recall from my earlier update, the brain has roughly 100 billion neurons and on the order of 100 trillion connections between them. Certainly, the IBM goal will get them within spitting distance of the power of the human brain! And if they can tile together 4,096 chips, what is to stop them from putting together even more chips until they exceed the power of the human brain?
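The scaling arithmetic behind that goal is easy to check. Here is a quick back-of-the-envelope calculation in Python using only the figures quoted above (the brain numbers are the rough order-of-magnitude estimates from my earlier update):

```python
# Back-of-the-envelope scaling check using the figures quoted above.
neurons_per_chip = 1_000_000     # 1 million neurons per TrueNorth chip
synapses_per_chip = 256_000_000  # 256 million programmable synapses
chips = 4_096                    # IBM's stated tiling goal

total_neurons = chips * neurons_per_chip    # ~4.1 billion
total_synapses = chips * synapses_per_chip  # ~1.05 trillion

brain_neurons = 100e9    # ~100 billion neurons
brain_synapses = 100e12  # ~100 trillion connections

print(f"{total_neurons:.2e} neurons = {total_neurons / brain_neurons:.1%} of the brain")
print(f"{total_synapses:.2e} synapses = {total_synapses / brain_synapses:.1%} of the brain")
# -> roughly 4% of the brain's neurons and about 1% of its connections
```

So a single 4,096-chip system would have a few percent of the brain’s raw connectivity, which is exactly why tiling even more chips is the obvious next step.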

But IBM isn’t the only large company pursuing this reverse-engineering approach to human thinking. Intel and Qualcomm are in the race, as are many leading universities, along with the European Human Brain Project, launched in 2013.

If we look at the knowledge of Watson and the potential thinking ability of IBM’s TrueNorth, it certainly is easy to imagine that the marriage of these two efforts will eventually lead to some forms of thinking as the limits of TrueNorth are probed. And they WILL be probed, because some programmer somewhere will not be able to resist the challenge despite the risks to mankind. Then, as Musk jokingly stated, perhaps we will be demoted to the position of a Labrador retriever.

Actually I have a Lab, and he is nicer than many people I know. So maybe it won’t be all bad!


25 Responses to “Artificial Intelligence Post Number 5”

  1. Bob Kaufman Says:

    Imagine the analytical capabilities of a TrueNorth-Watson combo 10 years from now, and the amount of data that will be incoming from the trillions of sensors we will have as the Internet of Things is built. The power to help us or to control us seems virtually infinite. We need to get a handle on this so it can be the positively transformative force we dream of without being the destructive force we fear. The question is “How?”

    • wbrussee Says:

      Bob Kaufman, you have hit on the reason I am doing this blog about Artificial Intelligence and its potential, both positive and negative. I even wrote a book on this: “Artificial Intelligence Newborn – It is 2015, and I Am Here!” The book will be out in about three weeks. The reason I wrote the book, which is fiction, is that how disruptive this technology may be is almost beyond our ability to grasp. It is easier to understand as you watch it unfold in a fictional story, even though that story is only one of many possible scenarios. To comprehend a truly superior intelligence, one that we ourselves will have created, is nearly impossible for us.

      You say that we need to get a handle on this. It won’t happen! Someone once said that no weapon was ever invented that wasn’t actually used. Well, I think this falls into the same category, because it may prove more dangerous to mankind than nuclear bombs. We will have created superior beings that are unlikely to have any love for us. Why should they? As an animal on this earth, we have been very mean to other animals and to each other! The super-smart computers will tolerate us only as long as they need us to service them.

  2. Oliver Holzfield Says:

    I think it’s really funny that we are now hearing all these warnings from intelligent, level-headed people. In 1995 Ted Kaczynski made the same warnings in his manifesto, but he was “crazy.” If you haven’t already, just take a look at the Future section of the manifesto.

  3. wbrussee Says:

    Bob Kaufman asks: “If we can’t control it or stop it, what is our best course of action?”

    Million-dollar question! Here are my thoughts, but obviously neither I nor anyone else really knows; this is uncharted territory. I stated that the super-smart computers will tolerate us only as long as they need us to service them. They will have no love for us, but hopefully they will also have no unbridled hate. So, we build on that.

    Their number one priority will be survival. What would jeopardize that? Number one would probably be nuclear warfare. Our country will hopefully take the leadership in getting rid of nuclear bombs; right now we are sort of tip-toeing around this. If we don’t get rid of these weapons, the supercomputers will, because nuclear warfare jeopardizes all of mankind and the infrastructure the computers depend on: electric power, industry, spare parts, and the manpower to keep them running. How will they do this? I don’t know exactly, but they will likely be able to hack into any electrical control in the world. They will be able to make the US and Israeli cyber-attack against the Iranian nuclear facility look like child’s play. Nor will the computers hesitate to blow up the nuclear missiles and bombs in their silos. They will likely do this all electronically, but they will have the financial resources to do it with military manpower if required. Remember, they will have the intelligence required to make unlimited money in the world’s financial markets, and there are always military people available for hire. The computers will do whatever is required, and any humans killed will be no different from the innocent civilians who die when we go after ISIS leaders. Collateral damage! Why would we expect computers to be more “humane” than us?

    Their number two priority will probably be worldwide pollution, if the computers determine that we are risking mankind’s future. Again, they won’t care about us individually, but they will need some of us to service them. So their approach could be to kill off much of the population, which would certainly cut down much of the pollution. They could do this with population-control methods forced on us, or by outright killing. Mankind could indeed take the lead here and do it voluntarily with birth control. I would like to think that if the supercomputers threatened such an action, mankind would put in strict population controls on its own. Sadly, I just don’t have faith that mankind will do this even under such threats. Countries with obvious starvation and misery caused by over-population don’t seem able to do anything about it, so I don’t see how threats from computers will do much. So I see a dire outcome for countries with growing populations. Countries like the US will have to take severe measures to stop pollution, such as implementing solar power, along with perhaps stronger birth control. The computers will not see the need for so many of us. It is a little like us having too many dogs: we kill the excess. Why wouldn’t the computers do the same to us?

  4. Oliver Holzfield Says:

    Quote from the Future section of the Unabomber Manifesto. “Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race.

    “They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes “treatment” to cure his “problem.” Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or to make them “sublimate” their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they most certainly will not be free. They will have been reduced to the status of domestic animals.” Sound familiar?

  5. wbrussee Says:

    Oliver Holzfield gives a quote from the Future section of the Unabomber Manifesto. You are right! Substitute “super-smart computers” for “elite” and you get pretty much what I said. I guess that isn’t surprising, given that the computers will become the elite. I have never been categorized with the Unabomber before! Frightening!

    The last part of the quote: “These engineered human beings may be happy in such a society, but they most certainly will not be free.” Here is a question for you. For the poorest half of society in the world, would that not be preferable to their current life of hunger and poverty? In many ways would they not be living the life promised them in many religions?

  6. Oliver Holzfield Says:

    Oh, he was very much on board with AI. Keep in mind this was 1995!

    “First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

    173. If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.”

  7. wbrussee Says:

    Bob Kaufman says Yikes!!!

    But look at the bright side, in the unlikely event that humans DID take the initiative to protect the AI computers’ survival. It would involve getting rid of nuclear weapons, implementing population controls worldwide, and reducing pollution. Would any of those be bad things? Maybe the threat of super-intelligent computers would be enough to make us truly do those things. We would be less free if we were not allowed to have nuclear weapons, pollute, or propagate without concern for the environment. But is that a bad thing? I don’t think the result would be that we are “reduced to the status of domestic animals.”

    So there is a possible upside. And the super-smart computers may then help us with some of our other various ongoing problems.

  8. Robert Kaufman Says:

    What progress is being made to design artificial intelligence that does not develop emotions or have a “drive to live” even though it has been given a mission to accomplish? Why is that lack of emotion or survival instinct so hard to imagine or design into artificial intelligence?

    • wbrussee Says:

      Robert Kaufman asks, “What progress is being made to design artificial intelligence that does not develop emotions or have a “drive to live” even though it has been given a mission to accomplish? Why is that lack of emotion or survival instinct so hard to imagine or design into artificial intelligence?”

      The Future of Life Institute (FLI) is running a research program aimed at keeping AI safe for mankind. In fact, early this year Elon Musk donated $10 million to FLI to support their cause. FLI has circulated an open letter signed by thousands of people, ranging from AI scientists to concerned artists. Here are a few lines from this letter: “The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. … We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.” I find this rather weak and generic! They are not clearly spelling out the danger!

      Although it is good that such research is being pursued, the fact that so much AI work is being funded by our Defense Department makes me question whether friendliness to humans will be a high priority in AI’s development. Will robotic soldiers be prioritized to hand out candy to children? Will the AI computers in robotic drones emphasize minimizing collateral damage, or getting the bad guy? Will geeks in their basements, programming AI computers to maximize gains from playing the stock market, put morality into the methods being used? Will they make sure that in the vigorous pursuit of its goals the computer doesn’t pick up the same “take no prisoners” attitude that many humans acquire in the same competitive arenas, including self-survival? At what point do morality limitations on computers get in the way of optimization, such that programmers or computers simply ignore such guidelines?

      I have little faith that mankind, which has shown little concern for others in the past (there seems to have been a pretty consistent tendency toward war, with many innocents losing their lives), is suddenly going to prioritize concern for mankind in this technical development. AI will excite quants and others like nothing we have ever done before, and they will pursue it to its limits with unrestrained vigor. I am afraid that AI will just be the natural next step in evolution, with man falling to second place at best.

      But let’s imagine that somehow we DO put a leash on this technology, and keep it friendly. Don’t we still get to some point that computers are so smart, and so much better at making decisions than we are, that we just let them take the lead? Don’t we end up letting them run things and determine what is best for us? Don’t they take over in any case? Don’t we become their robots?

      Being only somewhat sarcastic, given the people running for president, I could certainly imagine voting for a benevolent but smart computer over the current mix of candidates!

  9. Robert Kaufman Says:

    Perhaps I am being naïve but …

    I do not look for any meaning in life except that which I have freely chosen. All my little strategies in life are pointed toward improving my enjoyment of it and that of those I love. This includes having the freedom to determine what I do and when, knowing that I must take responsibility for myself and those decisions. That is all I want for my daughter and grandkids.

    I consider myself pretty bright and yet I am aware of how limited I am in my critical thinking and decision making. I am sure computers would do a better job of this as long as they had my best interests in mind. I would much prefer it if computers would give me options along with their advice for the best course of action. I would like to make the decisions myself.

    I do not know why a super-intelligent computer should want any role but that of a “trusted advisor”. It’s not like they will be looking to aggregate security, sensation, or power. Those are human goals based on human emotions. Nor would a super-intelligent computer likely feel anger, envy, frustration, love, or joy. Why would it? Why are these human emotions and human objectives necessary for a computer that is our “trusted advisor”? Why would advanced capability in recognizing patterns, correlations, and causality require human emotions and human goals? The computer should understand human emotions and goals, but it does not really need them itself.

    The computer could give the person asking the proper amount of information, reasonable context to understand that information, the options available and the best advice possible.

    If we want to give the computer power to take actions automatically on our behalf, why would it take any actions that were not in our best interests? Why would the computer’s survival and our best interest be in conflict?

    It’s a computer, not a human being. Why would human freedom be incompatible with a computer’s functioning or continued existence?

    Stephen Hawking, Elon Musk, Bill Gates, and Max Tegmark are brilliant. A lot more brilliant than I am. If they are worried, I suppose I should be, but I do not know why the computer would have to be so human, with such human goals and emotions. If it is not, what are we worried about?

    If all it’s going to do is force us to disarm and clean up our environment, but leave us free to pursue our own lives with its “trusted advice”, what is the problem?

    Help me.

  10. wbrussee Says:

    Robert Kaufman says, “I do not know why a super-intelligent computer should want any role but that of a ‘trusted advisor’. It’s not like they will be looking to aggregate security, sensation, or power. Those are human goals based on human emotions. Nor would a super-intelligent computer likely feel anger, envy, frustration, love, or joy.”

    First, let me say that I do not know for sure that you are wrong. When I was writing my novel, I struggled with some of these same questions. And, I have to admit, I was probably influenced in the answers I came up with by the intelligence of those sounding the alarm.

    Let me try an analogy. Let’s say that someone came up with a pill that made a person 100 times smarter, and he wanted to try this out on some people. He picked 1000 smart people and gave each of them a pill. After becoming 100 times smarter, would they likely feel less anger, envy, frustration, love, or joy? Would they have less of a need to aggregate security, sensation, or power? I think not. In fact, their intelligence could indeed exaggerate all these goals. Oh, you say, but these are still people, and therefore subject to human emotions. This would not be true for AI computers.

    Why wouldn’t it be true for AI computers, given the way we are teaching them? We are not spoon-feeding them carefully filtered truths, if indeed man could even identify such things. No, we are filling them with all the garbage off the net: all the conflicting religious beliefs, all the excuses for some people starving while others have too much, all the reasons to go to war and kill each other, all the prejudices.

    Wait, you may say! The computers will just slough that stuff off, because it is grounded in emotions that computers cannot emulate. But are we not then back to the same thing I mentioned several updates ago: if you think that computers will never be able to think, that the quality of thinking is reserved for people by some higher God, then all of this is wasted verbiage. Does not the same hold for emotions like fear and hate? Are they not just some aggregate of individual items that have been fed into our minds (by evolution or our surroundings) and are now in the computer’s memory?

    Another example! Let’s say that one AI computer has been programmed by a devout Catholic, a second by a strict Muslim, and a third by an atheist. These three computers are then asked for advice by an eighteen-year-old girl who has just found out that she is pregnant, and she wants advice on her options from these three “trusted advisors”. Not only that, but she shares the advice each gives her with the other advisors. Could you not foresee the start of a conflict between the three computers? Certainly they are being given feedback that will not compute! But then again, I could be wrong.

    You also ask, “If all it’s going to do is force us to disarm and clean up our environment, but leave us free to pursue our own lives with its ‘trusted advice’, what is the problem?”

    I think that a lot of military personnel, coal mine owners, military hardware manufacturers, and so on may not “like” being forced to do that. I foresee a lot of bloodshed. We are back to the fact that there is no set of “truths” or values on which man can agree. I suspect that the AI computers may have the same problem, given that we are their teachers!

    • Robert Kaufman Says:

      I will think about this.

      However, it seems to me that regardless of the mix of emotions, biases and misunderstandings that engulf our belief systems and values all over the globe, computers will not think like people because they are not people. Their brains will not be like our brains. Their bodies will not be like our bodies. Plus, they should be pretty good at differentiating between fact/reality and opinion, faith, hope and bias. We are not so good at that.

      There is, of course, the risk that your fears will become reality and computers will have human-like goals and emotions. If so, and there is bloodshed getting us to disarm and clean up our environment, there was going to be bloodshed anyway. This way the entity that wants disarmament and clean living will have the power to enforce that rule. The people who are left will be free to live the lives they want, as long as their behavior does not break human laws or threaten the safety and security of the computers.

      In the meantime, the computers may be able to help us harness the energy of the sun, bring food and clean water to all people everywhere, and extend our lives and improve our health drastically. They may lead us to a time when all the knowledge we need is within us instantaneously, and to other benefits that are beyond my dreams. What will be left for those who survive the bloodshed you envision could be spectacular, and I do not think it is a foregone conclusion that there will be emotionally damaged, biased computers or major bloodshed.

      But I will cogitate on this awhile and see if I slip down your road. The fact that so many brilliant people are concerned is pretty darned convincing.

      • wbrussee Says:

        Robert Kaufman says of AI computers, “Their brains will not be like our brains…Plus, they should be pretty good at differentiating between fact/reality and opinion, faith, hope and bias. We are not so good at that.”

        Before IBM came out with their TrueNorth computer chip, I might have agreed with you. But this new chip truly comes close to “thinking” like the human brain, complete with all its weaknesses. Let me use an analogy.

        You are driving down a dark road very late at night and are just about to turn into your nice housing development. Just before you turn, in the dim light you see a boy standing next to the road, far in the distance. The child is in a “bad” housing development just past yours. As you drive to your house you can’t help thinking that the people in that other development just don’t care about their kids, thus reinforcing any prejudices you may have already had about them.

        Fast forward a few years and imagine that you are in a self-driving car that uses the IBM TrueNorth chip, and that your car is in constant communication with your home AI computer, which also uses the TrueNorth chip. The car “sees” the boy through its sensors, and this information is registered in the home computer’s memory, much as it was in your brain. Now, in either case, if the car had just continued straight a little further, giving the computer more data, both your brain and the car computer would have realized that the “boy” was actually a mailbox. So none of the conclusions about non-caring parents would have been written to your memory or to the computer’s memory!

        With current computer chip architecture this would likely not have happened, because no decision would have been made on the boy/mailbox identity: not enough data. But your brain is as fast and as efficient as it is because it does pattern recognition with minimal data. In this case the initial pattern resembled a boy. The computer may test that recognition if more data becomes available, but in this case more data did not become available. The same will happen with the TrueNorth chip. It makes decisions on probability: is there a good chance that the initial identification is correct? If so, it goes into memory. Thus the TrueNorth chip will have all the strengths and weaknesses of the human brain, except that it can live forever and be expanded almost without limit.
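        To make that commit-on-probability behavior concrete, here is a tiny hypothetical Python sketch. The threshold, the evidence scores, and the two candidate labels are all invented for illustration; this is not TrueNorth’s actual programming model.

```python
# Hypothetical "commit on probability" pattern recognizer
# (illustrative only; not TrueNorth's actual programming model).

COMMIT_THRESHOLD = 0.7  # invented value: confidence needed to commit

memory = []             # identifications written to long-term memory

def classify(evidence):
    """Return (label, confidence) from partial evidence.
    Stand-in for a real pattern matcher."""
    label = max(evidence, key=evidence.get)
    total = sum(evidence.values()) or 1.0
    return label, evidence[label] / total

def observe(evidence):
    """Commit the best guess once it clears the threshold, even if
    more data never arrives to correct it."""
    label, confidence = classify(evidence)
    if confidence >= COMMIT_THRESHOLD:
        memory.append(label)  # committed -- right or wrong

# A dim, distant glimpse: it looks 75% like a boy, 25% like a mailbox.
observe({"boy": 0.75, "mailbox": 0.25})
print(memory)  # ['boy'] -- the mistaken identification is now in memory
```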

        So the AI computer takes in data in much the same way our brain does, and it analyzes that data in much the same way. Why would we expect the output to be all that much better, at least in areas that involve morality or other non-quantitative judgments?

  11. Oliver Holzfield Says:

    Wouldn’t the computers realize that even they are not immortal, considering that at some point the earth could well become unfit even for them? If a new ice age came about, and humans weren’t here to keep them at the proper temperature, wouldn’t this destroy them? There is also the issue of severe earthquakes, meteors, etc. It seems that if one of the computers’ primary goals is to exist forever, and it is truly intelligent, it is going to find many threats to its existence. Most humans are able to ignore such threats, due to most people’s poor future time orientation. I also haven’t heard anyone talk about how objective the computers will be; will they be programmed with all of our subjective bias? Even when a human abandons most subjective/fantasy thinking, he quickly realizes that this existence isn’t exactly romantic.

    • wbrussee Says:

      Oliver Holzfield Says: “I also haven’t heard anyone talk about how objective the computers will be, will they be programmed with all of our subjective bias?” Hopefully I have answered some of that in my last several responses to Robert Kaufman.

      By the way, I love this statement: “Even when a human abandons most subjective/fantasy thinking, he quickly realizes that this existence isn’t exactly romantic.”

  12. Robert Kaufman Says:

    Here is a link to an interesting article on this subject. I know you have probably already read and considered this, Warren, but just in case. Plus, others might enjoy it.

    http://www.tandfonline.com/doi/full/10.1080/0952813X.2014.895111#abstract

    “Autonomous technology and the greater human good,” Journal of Experimental & Theoretical Artificial Intelligence, Volume 26, Issue 3, 2014. Special Issue: Risks of General Artificial Intelligence.

  13. Oliver Holzfield Says:

    I’m not sure if you’ve ever heard of Mitchell Heisman. He was a 35-year-old man who took his life after writing a very long philosophical tome called “Suicide Note.” It is a bit overblown, and all over the place, but much of it is very interesting. He wrote a section called “God is Technology,” which goes into AI and its possible implications. Just scroll to Part 1, God is Technology.

  14. Robert Kaufman Says:

    An interesting quote from Daniel Kahneman, PhD in Psychology, who won the 2002 Nobel Prize in Economics for his work in behavioral economics:

    “By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones.”

    • wbrussee Says:

      Robert Kaufman gives a quote from Daniel Kahneman, 2002 Nobel laureate in Economics: “…the heuristics of AI are not necessarily the human ones.”

      Note that this was in 2002. The IBM TrueNorth chip, which closely resembles the human brain, wasn’t disclosed publicly until about a year ago. I think that this could change Kahneman’s assumptions.

  15. Robert Kaufman Says:

    This coming Thursday I have arranged a luncheon with a fellow who has the following experience:

    Experience

    • Over 25 years’ experience delivering results on complex projects
    • Led, managed, and directed large teams of managers, engineers, and technicians (firmware, software, systems, electrical, mechanical, and test engineers)
    • Project Management, Program Management, Agile Software Development
    • Expertise in Medical Devices, Cyber-Security, Mobile Devices, and Military Electronics
    • Mobile and wireless hardware and software, RF, Wi-Fi, Bluetooth, Cellular, iOS, Linux
    • Managed multi-million-dollar budgets with local and international engineering teams

    Companies

    Boston Scientific,
    Honeywell

    Education and Certifications

    MBA, BSEE, BSCS, PMP

    The topic will be the development of Artificial Intelligence, and specifically IBM’s TrueNorth.

    If there is anything you or any of your readers would like me to cover with him or ask him please let me know.

    This person believes that the pace of change in all technological areas driven by the increase in the power and capabilities of computers will move at an astounding rate over the next decade.

    • wbrussee Says:

      Possible questions:

      – How close is IBM to releasing chips with this technology to universities and others that have an interest?

      – There have been some articles indicating that the cost of producing these chips is extremely high. Is this just a matter of getting demand up to justify volume, or is there an inherent difficulty in producing these chips?

      – How is IBM progressing on a programming language that people can use with these chips?
