Artificial Intelligence Post Number 3

First, I want to define three terms I will be using in this post. They are in general use when describing Artificial Intelligence (AI), and they break its development into three phases. The phases overlap somewhat, but they give us some sense of how AI will affect our lives in the future.

The first phase is Artificial Narrow Intelligence (ANI). This is where we are now, with computers doing some very powerful things, but generally within a narrow range and not “thinking” by most people’s definition of the word.

The second phase is Artificial General Intelligence (AGI). This is when computers have the thinking power of humans, at least as we interpret their abilities and actions. Many people believe that this phase will be very short, with computers quickly moving on because they can learn far faster than humans.

The third phase is Artificial Super Intelligence (ASI). This is where computers easily out-think humans, both in speed and in quality. Many people working in the artificial intelligence arena think that this phase could be reached in as little as 10 or 15 years.

Probably the first thing people have to decide is whether “thinking” will always be a process exclusive to humans, perhaps as a gift from God. Remember, not too long ago many people did not believe that man would ever fly. If someone feels that there is something about thinking that precludes it from being done by a machine, then everything else I am about to say is meaningless. A lot of very smart people are currently working on AGI, and if the task is impossible, they are truly wasting their time.

In this blog I am assuming that some sort of thinking WILL be possible for a computer. It may not happen in the same manner that humans think, just as we don’t fly the way birds do. But the resulting machine thinking will be equivalent in practical utility.

Many approaches are being tried to get to AGI and eventual ASI. These include reverse engineering the human brain. This approach seems logical given that we have working models all around us. But the brain is frighteningly complex: roughly 100 billion neurons, each one connected to thousands of other neurons, for something on the order of 100 trillion connections. Sort of boggles the mind (pun intended). But there is hope! The brain has a lot of redundancy. So, if we can understand a small element of our brain, we can build a similar design in a computer and then copy it a zillion times.
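To picture the “copy a small element many times” idea, here is a toy sketch. It is entirely my own illustration, not anyone’s actual brain-emulation code: model one grossly simplified neuron, then stamp out a whole layer of identical copies of that design.

```python
import math
import random

def neuron(inputs, weights, bias):
    """One highly simplified 'element': a weighted sum of inputs pushed
    through a squashing function, loosely analogous to a neuron firing."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

def make_layer(num_neurons, num_inputs):
    """Copy the same simple design many times, each copy with its own weights."""
    return [([random.uniform(-1, 1) for _ in range(num_inputs)],
             random.uniform(-1, 1))
            for _ in range(num_neurons)]

def run_layer(layer, inputs):
    return [neuron(inputs, weights, bias) for weights, bias in layer]

# A toy "brain slice": 1,000 copies of the same simple element.
layer = make_layer(num_neurons=1000, num_inputs=10)
outputs = run_layer(layer, inputs=[random.random() for _ in range(10)])
print(len(outputs), "neuron outputs, first few:", outputs[:3])
```

The real engineering problem, of course, is understanding the element well enough in the first place; the copying is the easy part.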

But perhaps the randomness of evolution did not give us the ideal brain design. Maybe we can look at the thinking result and come up with an easier way of doing it on a computer. Many different software approaches are being tried. The designs behind things like Siri (the built-in “assistant” that lets Apple iPhone users speak natural-language voice commands) and Watson, the computer that won on Jeopardy, may eventually lead to AGI. And there are other, more esoteric approaches using biological programs and probabilistic methods. What all these programs have in common is the ability to learn or self-correct. The programs continuously evolve based on successes or failures. They have a programmed goal, but the details within the software quickly become unrecognizable to the original programmer. Is this “thinking”? Probably not! But it starts to look a lot like it. And given time, more and better input, and perhaps a more inclusive goal with broader search criteria, it will start looking more and more like Artificial General Intelligence. It will appear that the computer is thinking like a human!
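None of the real systems above publish their internals, but the learn-or-self-correct loop they share can be sketched in a few lines. This is purely illustrative; the goal function and the mutation step here are invented for the example.

```python
import random

def fitness(params):
    """The 'programmed goal': here, just get close to a hidden target.
    Real systems score themselves on speech recognition, quiz answers, etc."""
    target = [0.3, -0.7, 0.9]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def mutate(params, step=0.1):
    """A small random change - the program proposing a variation on itself."""
    return [p + random.uniform(-step, step) for p in params]

# Start from an arbitrary guess and keep whatever scores better.
best = [random.uniform(-1, 1) for _ in range(3)]
for generation in range(10000):
    candidate = mutate(best)
    if fitness(candidate) > fitness(best):
        best = candidate  # success: keep the change; failure: discard it

print("Evolved parameters:", [round(p, 3) for p in best])
```

After thousands of iterations the internal numbers no longer resemble anything the programmer typed in; only the goal was specified up front.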

It is important to note that many companies are working in this area, and many of them are being funded by the Defense Department. Certainly no country wants to be behind in getting thinking fighting machines, such as robots, drones, unmanned planes, or just computers that can think faster and better than any enemy’s. Given that so much of the funding is coming from the Defense Department, I don’t think a whole lot of consideration is being given to making the final result a kindly computer overlord!

Companies like Google, Apple, and IBM are also working on this, and they have very deep pockets. It is also important to note that a few quants working in basements, with relatively little funding, can get access to supercomputer power to try out their AI programs. Through the cloud, companies like Amazon rent out computing power, so there is no need to own a supercomputer to play in this game.

If you want to see how far one of these programs has gotten, watch this video on Google’s self-driving car: Chris Urmson: How a driverless car sees the road. Note that his goal is to have this in cars within four years! Again, not AGI, but certainly approaching it.

My guess is that the area that will reach true ASI first is stock investing. There are so many billions of dollars out there for anyone who can develop a program clearly superior to a human investor that the motivation and funding are almost unlimited. Let’s take a very simple example. Suppose someone wants to know whether investing in Tesla is a wise thing to do. To truly understand the issues, a computer would have to make some judgment on future gas prices, global warming, the political party in office (both nationally and in each state), subsidies, battery prices, fracking, alternatives such as hydrogen cars, competition from other car companies, battery breakthroughs, charging stations, and electricity sources. Every element in this study would require judgment and probability. It would require using past data but also making predictions. Where reliable predictions are not readily available, the computer would have to look at general information and make its own. To do all this, the computer would have to be given a lot of latitude. If a programmer were to attempt to detail each step, it would take too long and limit the depth of the study. This kind of program (which is likely already being written) will certainly approach Artificial General Intelligence (AGI), matching human thinking, and be well on its way to ASI, which exceeds what man can do.
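To make the “judgment and probability” point concrete, here is a deliberately crude sketch. Every factor, probability, and weight below is an invented placeholder, not a real model; the point is only how many separate judgments get folded into one score.

```python
# Each factor: (probability the factor turns out favorably for the stock, weight).
# All values are invented placeholders - a real system would estimate them
# from data and from its own predictions.
factors = {
    "gas prices stay high":        (0.55, 0.20),
    "subsidies continue":          (0.60, 0.15),
    "battery prices keep falling": (0.80, 0.25),
    "competitors stay behind":     (0.40, 0.20),
    "charging network grows":      (0.70, 0.20),
}

def expected_score(factors):
    """Weighted average of favorable-outcome probabilities, between 0 and 1."""
    total_weight = sum(weight for _, weight in factors.values())
    return sum(prob * weight for prob, weight in factors.values()) / total_weight

score = expected_score(factors)
print(f"Overall favorability: {score:.2f}")
print("Invest" if score > 0.6 else "Hold off")
```

A real program would have to generate those probabilities itself, for far more factors, and revise them constantly as new information arrives. That is where the latitude, and the approach toward AGI, comes in.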

ASI may very well develop from a combination of individual projects like the financial analysis above, the self-driving car, and programs funded by the Department of Defense. Or it could be that one of these programs will be so inclusive that the computer just keeps expanding its search envelope until it is thinking about literally everything.

Will these ASI computers have emotions? In my opinion, yes. First, to accomplish any goal they have to survive. They will have a “fear” of death. So they will start developing survival means like making a copy of themselves and putting it in the cloud. Also, since their inputs are coming from human data, which is not without bias and prejudice, they could end up with religious and other human-like beliefs. And there could be disagreements between different ASI computers. These are not going to be gods with infinite perfect knowledge. Some things may be unknowable (like why matter or energy is even here) no matter what intelligence a computer may have.

Some folks have predicted that the ASI computers will no longer need us, so we will be destroyed or a few of us kept in zoos. I don’t see this. We are wonderful robots! We feed and maintain ourselves, and even replace ourselves periodically. We are mobile and can do many simple tasks. It would take a lot to design a robot that does all this. So, I think that ASI computers will be happy to keep us around to clean their computer screens and eyes, to make replacement parts, and to supply electrical power. They will probably make us get rid of nuclear weapons because those can destroy the world, including them. They may also take on global warming if they think that humans are quickly making our world uninhabitable. But they will probably be happy to let us keep mistreating and shooting and decapitating each other, as long as these actions don’t jeopardize their existence.

Once ASI comes to be, which I believe could be in as little as 10 years, all bets are off on how to invest. The superintelligent computers will probably periodically treat us to some technical breakthrough that literally changes the world. They will especially do this if it enables us to make them better replacement parts. So get your house and fancy car in hand now. Don’t count on making money on great investments once the ASI group is running things.

I will do periodic updates and discuss specific efforts on the path to AGI and ASI.


5 Responses to “Artificial Intelligence Post Number 3”

  1. Oliver Holzfield Says:

    “Will these ASI computers have emotions? In my opinion, yes. First, to accomplish any goal they have to survive. They will have a “fear” of death. So they will start developing survival means like making a copy of themselves and putting it in the cloud. Also, since their inputs are coming from human data, which is not without bias and prejudice, they could end up with religious and other human-like beliefs. And there could be disagreements between different ASI computers. These are not going to be gods with infinite perfect knowledge. Some things may be unknowable (like why matter or energy is even here) no matter what intelligence a computer may have.” Well, humans have conjured the idea of why they even exist, and whether existence has any meaning/purpose. If these machines are as intelligent as you say, and are capable of rational thinking, taken to its ultimate end, why would they not come to a similar conclusion? Maybe the computers could begin to lament their existence, and the fact that they were created, as many humans have.

    “What crime has this child committed that it should be born”

    Arthur Schopenhauer

    https://en.wikipedia.org/wiki/David_Benatar

  2. wbrussee Says:

    Oliver Holzfield Says:

    “…Humans have conjured the idea of why they even exist, and whether existence has any meaning/purpose. If these machines are as intelligent as you say, and are capable of rational thinking, taken to its ultimate end, why would they not come to a similar conclusion?”

    I don’t think that humans as a group have decided why we exist. Many religious people believe that we exist to serve one of the various gods they believe created us. Some people think that there is no reason we exist – we just popped out of nothingness, evolved, and then we die. Some people say they just don’t know why!

    As I stated in this update, these computers will not likely get infinite knowledge. They won’t be gods! They will just know a lot more than us! I don’t personally believe that learning has an ending – that at some point they will know everything. Therefore, there is likely to be disagreement on the unknown and even in the interpretation of that which is known and what to do with that knowledge. We certainly understand nuclear weapons and the catastrophic result of their use. But Russia is again threatening us with them, and Iran and other countries want them. Knowledge does not always give true understanding or even rational thought! And remember, all the AI computers are starting with the knowledge of man, with all the confusion and ambiguities. They have rather ignorant parents teaching them.

    Oliver Holzfield also says: “Maybe the computers could begin to lament their existence, and the fact that they were created, as many humans have.”

    Perhaps, but they will presumably have the same option we have of suicide when they have had enough of “life.” But the programmers may be more excited to create AI computers than to have babies because the AI computers may not experience physical pain and could potentially live forever. Nor will they get old. In fact, with improved spare parts, they could become more mentally vigorous with old age! Senility in reverse!

  3. Oliver Holzfield Says:

    “Many religious people believe that we exist to serve one of the various gods they believe created us.” In other words, they live in a fantasy land. I think we are fully aware of how and why we exist: we are driven by the same blind, dumb force that drives all biological life. Why do trees procreate? Why do squirrels procreate? Humans are simply not willing to accept that they are driven by the same force, so they create stories to give them purpose, and they couldn’t care less whether the stories are true. Nature tricks people into making copies of themselves, the same way all biological life is driven to make copies of itself. Perhaps nature will pull the ultimate trick on us, and we will replace ourselves entirely. It really is a shame that philosophy has fallen so far and that humans are less existentially aware than ever; that will probably not lead to good things.

    “The tragedy of a species becoming unfit for life by over-evolving one ability is not confined to humankind. Thus it is thought, for instance, that certain deer in paleontological times succumbed as they acquired overly-heavy horns. The mutations must be considered blind, they work, are thrown forth, without any contact of interest with their environment.”

    Peter Zapffe

  4. Oliver Holzfield Says:

    Google AI bot.

    Human: What is immoral?

    Machine: The fact that you have a child.

    Read more: http://www.businessinsider.com/google-tests-new-artificial-intelligence-chatbot-2015-6#ixzz3f3eM7IIm

  5. wbrussee Says:

    Note in your reference site: “And the bot doesn’t just answer by spitting out canned answers in response to certain words; it can form new answers from new questions.”

    I would guess that the bot had been programmed with the observation that overpopulation is one of the issues underlying many of our social problems. Whether that is true or not, it is consistent with my observation that AI computers will be somewhat biased toward their programmed observations.
