Archive for July, 2015

Artificial Intelligence Post Number 9

July 30, 2015

What is IBM up to?

First there was IBM’s Deep Blue, the computer that beat the world chess champion, and then Watson, the computer that went on to win on Jeopardy. Then there was IBM’s TrueNorth, the computer chip that emulates the parallel processing of the brain. Now we have IBM’s newly announced chip that reportedly quadruples logic density versus the current best! IBM is aggressively pursuing many of the areas that will eventually lead to AI.

Let’s look at Watson. IBM is backing Watson’s expansion into industry with a $1 billion investment because it expects to make many billions in profits! I already mentioned that Watson is learning Japanese, but it is doing much more. Watson is very active in the medical field. Trained by doctors at Memorial Sloan Kettering Cancer Center, IBM Watson for Oncology suggests tailored treatment options using case histories and the doctors’ expertise. Anthem (formerly WellPoint) claims that doctors miss early-stage lung cancer half the time, whereas Watson catches it 90% of the time. And the size of Watson for these applications has shrunk from room size to pizza-box size.

But there is more. If you go to the Watson Developer Cloud, you can see 14 areas they are working on: Language Translation, Speech to Text, Text to Speech, Tradeoff Analytics, Personality Insights, Natural Language Classifier, Concept Insights, Concept Expansion, Message Resonance, Question and Answer, Tone Analyzer, Relationship Extraction, Visual Recognition, and Visualization Rendering. Many of these areas are very subjective and involve much learning and interpretation. Sounds like AI!
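
To make that list a bit more concrete, here is a rough sketch of what calling one of these cloud services looks like from a developer's point of view. The endpoint URL, credentials, and JSON fields below are placeholders of my own invention, not IBM's documented API; the Watson Developer Cloud documentation has the real service URLs and request formats.

import requests

# Placeholder values -- substitute the real service URL and credentials
# from the Watson Developer Cloud documentation.
SERVICE_URL = "https://example-watson-service/v1/analyze"   # hypothetical endpoint
USERNAME = "your-service-username"                          # hypothetical credential
PASSWORD = "your-service-password"                          # hypothetical credential

def analyze_text(text):
    """Send a snippet of text to the (hypothetical) service and return its JSON reply."""
    response = requests.post(
        SERVICE_URL,
        auth=(USERNAME, PASSWORD),
        json={"text": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(analyze_text("I am thrilled about what Watson can do."))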

These are not just academic studies; many companies are starting to use Watson. “LifeLearn’s Sofie” helps veterinarians identify treatment options for cats and dogs. “Engeo” uses Watson to assist with environmental issues, especially in emergencies when timely action is required. “Welltok” uses Watson for personalized self-healthcare assistance. “Talkspace” enables users to talk with a licensed therapist confidentially. “Decibel Music Systems” uses Watson to collect and organize qualitative data on musical influence.

IBM has been rather quiet about its TrueNorth chip, other than saying it is working on languages to program the chip, much as Fortran and BASIC were developed in the early days of the current computer architecture. But given the effort being put into Watson, and considering the speed, energy efficiency, and parallel processing of the TrueNorth chip, there seems little doubt that IBM’s goal is nothing less than to disrupt the whole computer industry and lead the way to AI. The pizza-box-sized Watson will shrink to laptop size, and the thinking will shift toward more brain-like reasoning, with generalized background knowledge coupled with limited data input enabling quicker and broader-based thinking.

Artificial Intelligence Post Number 8

July 26, 2015

Artificial Narrow Intelligence (ANI)

Even though I think that computers will get to human level Artificial Intelligence within 10 or 15 years, the effects of Artificial Narrow Intelligence (ANI), which are already well underway, will be extremely disruptive even without major breakthroughs in computer chips or programming algorithms. IBM’s Watson and Apple’s Siri are current examples of ANI. Let’s just extrapolate those technologies perhaps seven years forward and look at their potential.

I do a Read-to-the-Dog program at two libraries with my 125 lb. Newfoundland dog. This is a national program proven to improve a child’s reading fluency dramatically. But here is what I see. I have seven-year-olds coming in who can read chapter books, so they are perhaps reading at a fourth-grade level. I also have seven-year-olds coming in who can barely read the simplest of books. Even the parents are often unaware of what their children can read, because the children often come in with books completely wrong for their reading abilities. How can any teacher effectively handle a class with such disparity in reading skills? They can’t! Either some kids will be overwhelmed or some will be bored. So we begin losing these kids intellectually even at this young age. It doesn’t matter whether a child with weak reading skills has an inherent inability or environmental issues, and a bright child left unchallenged is equally bad. It just is not working. We all know this!

Let’s fast-forward seven years. For reading class, each child goes into their own cubicle with their own computer display. A Siri-like voice greets them, asks their name, and then proceeds to work with them on reading. This computer knows exactly where the child is in reading level and challenges the child with books at just the right level. The computer listens to the child read and gently corrects when necessary. The computer also asks content questions to make sure the child understands what is being read and uses proper grammar when answering. Again, everything is exactly at the child’s learning level, with the computer changing the reading difficulty as needed. And the computer will be sure to compliment the child on progress, or even just effort. The room with the cubicles will still need a human monitor to make sure the children are where they should be, but this person would not require teaching skills.
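
To sketch how the “just the right level” part might work, here is a toy example in Python. It is my own illustration, not any real tutoring product: the tutor tracks a single reading-level number, nudges it up or down after each session, and picks the closest-matching book.

class ReadingTutor:
    def __init__(self, level=1.0):
        self.level = level  # current reading level, e.g. a rough grade equivalent

    def record_session(self, accuracy, comprehension):
        """Adjust the level from word-accuracy and comprehension scores (0.0 to 1.0)."""
        score = 0.6 * accuracy + 0.4 * comprehension
        if score > 0.9:                      # child is cruising: raise the challenge
            self.level += 0.2
        elif score < 0.7:                    # child is struggling: ease off
            self.level = max(0.5, self.level - 0.2)
        return self.level

    def pick_book(self, library):
        """Choose the book whose difficulty is closest to the child's current level."""
        return min(library, key=lambda book: abs(book["difficulty"] - self.level))

tutor = ReadingTutor(level=2.0)
tutor.record_session(accuracy=0.95, comprehension=0.9)
books = [{"title": "Frog and Toad", "difficulty": 2.0},
         {"title": "Charlotte's Web", "difficulty": 4.0}]
print(tutor.pick_book(books)["title"])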

You can obviously do the same thing for math and most other subjects. And these classes could also be done at home for home-schooled children. This whole thing can be done with a Watson-level computer and a Siri-like voice. And the schools won’t even have to buy the computers; they can rent computer time from the cloud! There is no reason the same programs won’t work in every school, so the cost to implement per school should not be prohibitive. In fact, it is likely to be a cost savings for the school!

Another example! I also take my therapy dog to hospitals three or four times per week. In the heart hospital, in their intensive and critical care units, the patients are wired up to a lot of sensors. In the hall outside each room is a computer showing the outputs from those sensors. The nurses are constantly referring to these monitors, and there are alarms that go off when a measured value goes outside preset limits. The nurses are also looking for trends or changes. But the nurses also have to do patient care, so those hall computer displays are not being watched constantly.

But not to worry! In a separate room down the hall are duplicate displays monitored by technicians who do NOT have to do patient care. Each technician watches three screens, and if they see something unusual they call the appropriate nurse or the supervisor on their cell phone and explain what they are seeing. This room has 10 technicians and is staffed 24 hours per day, so there are 40 technicians plus backups. These technicians cost the hospital close to $3,000,000 per year, and I assume that something similar exists in every critical care unit across the US. Certainly what the technicians are doing can be taught to a Watson-level computer. No breakthroughs are needed.
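
What the technicians do is essentially limit checking plus trend watching, which is easy to sketch in code. The snippet below is a simplified illustration of mine; the limits and the trend rule are made up for the example, not clinical values.

from collections import deque

LIMITS = {"heart_rate": (50, 120), "spo2": (92, 100)}  # illustrative limits only

class VitalsMonitor:
    def __init__(self, window=10):
        self.history = {name: deque(maxlen=window) for name in LIMITS}

    def update(self, name, value):
        """Record one sensor reading and return a list of alert messages, if any."""
        alerts = []
        low, high = LIMITS[name]
        self.history[name].append(value)
        if not low <= value <= high:
            alerts.append(f"{name} out of range: {value}")
        readings = list(self.history[name])
        if len(readings) >= 5:
            recent = readings[-5:]
            # flag a steady climb or fall over the last five readings
            if all(b > a for a, b in zip(recent, recent[1:])):
                alerts.append(f"{name} trending upward: {recent}")
            elif all(b < a for a, b in zip(recent, recent[1:])):
                alerts.append(f"{name} trending downward: {recent}")
        return alerts

monitor = VitalsMonitor()
for hr in [88, 92, 97, 103, 110, 118]:
    for alert in monitor.update("heart_rate", hr):
        print("ALERT:", alert)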

These two examples are just from my own daily experiences. I am sure that each reader can come up with similar examples of their own. The point I am making is that AI will be very disruptive to our country and economy even without computers capable of thought. Think of how many teachers and monitoring technicians may lose their jobs! Consider how many universities that offer teaching degrees will lose students. What will happen to teachers’ wages and the teachers’ unions?

Artificial Intelligence Post Number 7

July 25, 2015

In recent blog updates, I emphasized the potential of IBM’s TrueNorth computer chip, which comes close to emulating the brain in its architecture. But Elon Musk and Mark Zuckerberg invested in Vicarious FPC, a company that is writing algorithms that work with traditional computer architecture rather than the more brain-like architecture of TrueNorth. Let’s see if we can understand why they chose to invest in Vicarious FPC.

The current thinking is that the brain works in a novel way. Over time, it acquires generalized knowledge. This knowledge is very abstract, but it can be applied to a wide range of specific data inputs. When the brain gets a specific input about something, it queries its generalized knowledge base to see if there is a possible identification match: is the input compatible with what it already knows? For example, is an out-of-focus shape seen in the distance more consistent with its generalized knowledge of animals or of rocks? The specific input could be based on as little as one sample. This enables the brain to quickly zero in on a probable identification rather than accumulating massive amounts of data on every possibility before making a determination. This probabilistic approach is one of the primary reasons the human brain is so efficient.
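
A simple way to picture this is Bayes’ rule: a prior drawn from the generalized knowledge base, updated by a single observation. The numbers below are invented purely for illustration.

# Back-of-the-envelope illustration of "generalized knowledge plus one observation".
priors = {"animal": 0.3, "rock": 0.7}             # generalized knowledge about the scene
# probability of seeing "the blurry shape moved" under each hypothesis
likelihood_of_motion = {"animal": 0.8, "rock": 0.05}

evidence = sum(priors[h] * likelihood_of_motion[h] for h in priors)
posteriors = {h: priors[h] * likelihood_of_motion[h] / evidence for h in priors}

for hypothesis, p in posteriors.items():
    print(f"P({hypothesis} | shape moved) = {p:.2f}")
# One observation shifts the odds from 0.3/0.7 to roughly 0.87/0.13 in favor of "animal".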

To emulate this in software requires innovative algorithms, but not new computer architecture. So the existing zillions of computers could conceivably use some variation of this approach and become much faster and more efficient. It would not require new computers and new computer languages, as TrueNorth does.

So we basically have three approaches to AI. First is the Watson approach, which requires huge amounts of input data and very powerful computers with traditional architecture. This approach is well on its way and has the advantage of not requiring any new innovations. The second is the Vicarious FPC approach, which mimics the brain’s use of generalized knowledge coupled with limited input requirements. And the third is the IBM TrueNorth computer chip, which emulates the brain’s architecture. Watson has already made its mark in Artificial Narrow Intelligence (ANI). Vicarious will probably take us closer to human-level Artificial General Intelligence (AGI). But the marriage of all three approaches will probably be needed to reach true human-level-thinking AGI, followed quickly by Artificial Super Intelligence (ASI).

It is worthwhile to take a closer look at the advantages and risks of the Vicarious FPC approach. In humans, we are obviously affected by the quality of the generalized knowledge that we accumulate in our brains. For example, if we are exposed to very strong religious beliefs when we are young, we may disregard any input that is not compatible with our generalized knowledge in this area. We will not be able to make a truly independent judgment about our religious beliefs. The same would be true for any generalized knowledge we may accumulate about race, morality, and so on. So whoever programs the generalized knowledge areas for Vicarious FPC will have to be very careful not to build in prejudices, either directly or inadvertently through the way the software is written. Otherwise we will be duplicating one of the human weaknesses we presumably would not like to replicate.

In a much earlier update, I indicated that I have a novel coming out that puts a face on the AI issue. Although it is fiction, it is a possible scenario that could occur as AI develops. I will give you more detail once the book is available.

Artificial Intelligence Post Number 6

July 17, 2015

Timeframe for AI Implementation

Elements of AI are already in place. For example, trades generated by computers now represent about 70% of all trades, up from only about 30% ten years ago. And these trades are not just specific orders; the computers are actually calculating many of the factors that knowledgeable investors use to buy and sell stocks. They just do it almost instantly. That is why I believe an individual cannot succeed chasing short-term price swings. Your only chance is buying a stock that you feel has longer-range growth potential.

As mentioned previously, many driver assists, like lane tracking, are already available, or soon will be, in many cars. Completely self-driving cars are probably three to five years away. Google is making good progress on this. Google stock surged today on its financial results, but there could be additional long-term gains as it starts selling its self-driving-car technologies to others. Mobileye is another stock to consider for self-driving-car technology. Tesla is one of many Mobileye customers, and Mobileye expects to have completely automated driving technology in three years. Note that I own neither Google nor Mobileye stock. I do own Tesla stock.

For a longer-term AI investment, I would consider IBM because of its TrueNorth chip, which comes close to emulating how the brain works. This chip not only has the potential to enable true thinking, it is also very energy efficient. Because it requires totally different programming skills and abilities, it will be a while before consumer products use this technology. There are other companies and countries pursuing similar chip technologies and even biological approaches. But IBM seems to have a big lead, especially when you consider that it can couple this with its Watson success and get the best of both worlds!

Artificial Intelligence Post Number 5

July 12, 2015

Countries viewing this blog within the last 30 days: US, Canada, Sweden, United Kingdom, Malaysia, Australia, European Union, Greece, Taiwan, and India. So there certainly is worldwide interest (fear?) in Artificial Intelligence. And the progress in this field has been grossly underestimated. Elon Musk wrote in a recent email (which he has since deleted because it wasn’t supposed to go public): “The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most.” He added: “Please note that I am normally super pro technology, and have never raised this issue until recent months. This is not a case of crying wolf about something I don’t understand.” Note that I own Tesla stock, so I have some bias as to Elon Musk’s brilliance!

Let’s look at Watson, the IBM computer that won Jeopardy in 2011. Since that victory he has been largely ignored, but he has been very busy. Watson is trying to learn Japanese. Japanese has three different alphabets and thousands of characters. Watson has had 250,000 Japanese words loaded into its memory. Now it must work through 10,000 diagrammed sentences. Human translators are giving feedback on any needed corrections, and by means of this feedback loop, Watson will LEARN the language. Note the emphasis on LEARN! Watson is getting ever closer to learning a language in the same manner as humans. However, he won’t forget the language as many of us have from non-use since our college days!

But IBM has more than Watson up its sleeve. It has TrueNorth! This chip is designed to think like the human brain. It is a neural network chip that already works. The chip has the equivalent of 1,000,000 neurons and 256 million programmable synapses. It has its own programming language, which came out of the Defense Advanced Research Projects Agency (DARPA) SyNAPSE program. TrueNorth chips can be tied together to create huge systems: IBM has a goal of integrating 4,096 chips, giving 4 billion neurons and 1 trillion synapses while using only 4 kW of power. If you recall from my earlier update, the brain has roughly 100 billion neurons and on the order of 100 trillion synaptic connections, with each neuron connected to thousands of others. Certainly, the IBM goal will get them within spitting distance of the power of the human brain! And if they can tile together 4,096 chips, what is to stop them from putting together even more chips until they exceed the power of the human brain?
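
For what it’s worth, the arithmetic behind that goal is easy to check from the per-chip numbers quoted above (a rough back-of-the-envelope; the biological figure is the usual order-of-magnitude estimate):

NEURONS_PER_CHIP = 1_000_000
SYNAPSES_PER_CHIP = 256_000_000

chips = 4096
print(f"{chips:,} chips -> {chips * NEURONS_PER_CHIP:,} neurons, "
      f"{chips * SYNAPSES_PER_CHIP:,} synapses")
# 4,096 chips -> 4,096,000,000 neurons, 1,048,576,000,000 synapses

# Rough number of chips needed to match the brain's ~100 billion neurons.
BRAIN_NEURONS = 100_000_000_000
print(f"~{BRAIN_NEURONS // NEURONS_PER_CHIP:,} chips to match the brain's neuron count")
# ~100,000 chips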

But IBM isn’t the only large company pursuing this reverse-engineering approach to human thinking. Intel and Qualcomm are in the race, as are many leading universities, along with the European Human Brain Project, launched in 2013.

If we look at the knowledge of Watson and the potential thinking ability of IBM’s TrueNorth, it certainly is easy to imagine that the marriage of these two efforts will eventually lead to some forms of thinking as the limits of TrueNorth are probed. And they WILL be probed, because some programmer somewhere will not be able to resist the challenge despite the risks to mankind. Then, as Musk jokingly stated, perhaps we will be demoted to the position of a Labrador retriever.

Actually I have a Lab, and he is nicer than many people I know. So maybe it won’t be all bad!

Artificial Intelligence Post Number 4

July 7, 2015

Vicarious FPC

While warning of the dangers of Artificial Intelligence, Elon Musk of Tesla, Mark Zuckerberg of Facebook, and Ashton Kutcher (who portrayed Steve Jobs in the movie Jobs) made a $40 million investment in Vicarious FPC, a secretive artificial-intelligence company. Vicarious’ goal is to replicate the thinking done in the neocortex, the part of the brain that does much of the processing of visual input and the thinking and actions that follow from it.

Of the 20 technical people the company lists on its website, 14 have PhDs, and 11 have backgrounds in image recognition or related graphical interpretation. This is consistent with the company’s emphasis that the thinking we do from visual input rests on our ability to generate patterns from very little data, test those patterns against reality, and draw “thinking” conclusions from those internal representations.

One of the traditional tests of whether a computer is truly “thinking” is the Turing test, which measures a machine’s ability to exhibit intelligent behavior equivalent to a human’s. Back in October 2013, Vicarious claimed to have passed a form of this test by solving CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart). Vicarious co-founder D. Scott Phoenix said: “this is the first time this distinctively human act of perception has been achieved, and it uses relatively minuscule amounts of data and computing power. The Vicarious algorithms achieve a level of effectiveness and efficiency much closer to actual human brains.”

I am posting this as an indication of what progress companies are making on the road to AI. Or, at least the progress they are advertising. In this case they were able to convince Musk, Zuckerberg, Kutcher, and other investors.

Artificial Intelligence Post Number 3

July 1, 2015

First, I want to define three terms I will be using in this post. These terms are in general use when describing Artificial Intelligence (AI), and they break artificial intelligence into three phases. Note that these phases overlap somewhat, but they do give us some sense of how AI will affect our lives in the future.

The first phase is Artificial Narrow Intelligence (ANI). This is where we are now, with computers doing some very powerful things, but generally within a narrow range and not “thinking” in most people’s definition of the word.

The second phase is Artificial General Intelligence (AGI). This is when computers have the thinking power of humans, at least as we interpret their abilities and actions. Many people believe that this phase will be very short, with computers quickly moving past it because they can learn so much faster than humans.

The third stage is Artificial Super Intelligence (ASI). This is where computers easily out-think humans, both in speed and quality. Many people working in the artificial intelligence arena think that this stage could be reached in as little as 10 or 15 years.

Probably the first thing people have to decide is whether “thinking” will always be a process exclusive to humans, perhaps as a gift from God. Remember not too long ago many people did not believe that man would ever fly. If someone feels that there is something about thinking that precludes it from being done by a machine, then everything else I am about to say is meaningless. A lot of very smart people are currently working on AGI, and if the task is impossible, they are truly wasting their time.

In this blog I am assuming that some sort of thinking WILL be able to be done by a computer. It may not be in the same manner that humans think, just like we don’t fly like birds. But the resultant machine thinking will appear to be the same as to utility.

Many approaches are being tried to get to AGI and eventual ASI. These include reverse engineering the human brain. This approach seems logical given that we have working models all around us. But the brain is frighteningly complex: roughly 100 billion neurons and on the order of 100 trillion synaptic connections, with each neuron connected to thousands of others. Sort of boggles the mind (pun intended). But there is hope! The brain has a lot of redundancy. So, if we can understand a small element of our brain, we can do a similar design in a computer and then copy it a zillion times.

But perhaps the randomness of evolution did not give us the ideal brain design. Maybe we can look at the result of thinking and come up with an easier way of doing it on a computer. Many different software approaches are being tried. Software designs used in things like Siri (the built-in “assistant” that lets users of the Apple iPhone speak natural-language voice commands) and in Watson, the computer that won on Jeopardy, may eventually lead to AGI. And there are other, more esoteric approaches using biological programs and probabilistic methods. What all these programs have in common is the ability to learn or self-correct. The programs continuously evolve based on successes or failures. They have a programmed goal, but the details within the software quickly become unrecognizable to the original programmer. Is this “thinking?” Probably not! But it starts to look a lot like it. And given time, more and better input, and perhaps a more inclusive goal with broader search criteria, it will start looking more and more like Artificial General Intelligence. It will appear that the computer is thinking like a human!
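
As a tiny, concrete example of that “learn by self-correction” loop, here is a classic perceptron in Python. It is nowhere near thinking, and it is my illustration rather than anything Siri or Watson actually uses, but it shows the pattern the paragraph describes: a fixed goal, feedback from mistakes, and internal weights that end up being nothing the programmer wrote by hand.

def train_perceptron(examples, epochs=10, learning_rate=0.1):
    """Learn weights for examples of the form ([x1, x2, ...], label) with label 0 or 1."""
    weights = [0.0] * (len(examples[0][0]) + 1)    # index 0 is the bias term
    for _ in range(epochs):
        for inputs, label in examples:
            activation = weights[0] + sum(w * x for w, x in zip(weights[1:], inputs))
            prediction = 1 if activation > 0 else 0
            error = label - prediction              # feedback: did we meet the goal?
            weights[0] += learning_rate * error     # self-correction, driven only by errors
            weights[1:] = [w + learning_rate * error * x
                           for w, x in zip(weights[1:], inputs)]
    return weights

# Learn the logical OR function purely from labeled examples and feedback.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
print(train_perceptron(data))   # weights the programmer never wrote by hand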

It is important to note that many companies are working in this area. Many of these companies are being funded by the Defense Department. Certainly no country wants to be behind in getting thinking fighting machines, such as robots, drones, unmanned planes, or just computers that can think faster and better than any enemy. Given that so much of the funding is coming from the Defense Department, I don’t think that there is a whole lot of consideration that the final result be a kindly computer overlord!

Companies like Google, Apple, and IBM are also working on this, and they have very deep pockets. It is also important to note that a few quants working in basements, with a relatively small amount of funding, can get access to super computer power to try out their AI programs. Using the cloud, companies like Amazon rent out computer power such that there is no need for someone to have their own super computer to play in this game.

If you want to see how far one of these programs has gotten, watch Chris Urmson’s talk about the Google self-driving car, “How a driverless car sees the road.” Note that his goal is to have this in cars within four years! Again, not AGI, but certainly approaching it.

My guess is that the area that will get to true ASI first is stock investing. There are so many billions of dollars available to anyone who can develop a program clearly superior to a human investor that the motivation and funding are almost unlimited. Let’s take a very simple example. Suppose someone wants to know whether investing in Tesla is a wise thing to do. To truly understand the issues, a computer would have to make judgments about future gas prices, global warming, the political party in office (both nationally and in each state), subsidies, battery prices, fracking, alternatives such as hydrogen cars, competition from other car companies, battery breakthroughs, charging stations, and electricity sources. Every element in this study would require judgment and probability. It would require using past data but also making predictions. Where reliable predictions are not readily available, the computer would have to look at general information and make its own. To do all this, the computer would have to be given a lot of latitude. If a programmer were to attempt to detail each step, it would take too long and limit the depth of the study. This kind of program (which is likely already being written) will certainly approach Artificial General Intelligence (AGI), matching human thinking, and be well on its way to ASI, which exceeds what man can do.
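
Here is a deliberately crude sketch of that kind of judgment-plus-probability weighing. Every factor, probability, and weight below is invented for illustration; the hard part, which the paragraph is really about, is getting a computer to estimate these numbers on its own.

factors = {
    # factor: (probability the factor breaks in Tesla's favor, weight)
    "gas prices stay high":        (0.5, 0.15),
    "subsidies continue":          (0.6, 0.10),
    "battery costs keep falling":  (0.8, 0.25),
    "charging network grows":      (0.9, 0.15),
    "competitors stay behind":     (0.4, 0.20),
    "no disruptive alternative":   (0.7, 0.15),
}

score = sum(prob * weight for prob, weight in factors.values())
total_weight = sum(weight for _, weight in factors.values())
print(f"Weighted outlook: {score / total_weight:.2f} (0 = bearish, 1 = bullish)")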

ASI may very well develop from a combination of individual projects like the financial analysis above, the self-driving car, and programs funded by the Department of Defense. Or it could be that one of these programs will be so inclusive that the computer will just keep expanding its search envelope until it is thinking about literally everything.

Will these ASI computers have emotions? In my opinion, yes. First, to accomplish any goal they have to survive. They will have a “fear” of death. So they will start developing survival means like making a copy of themselves and putting it in the cloud. Also, since their inputs are coming from human data, which is not without bias and prejudice, they could end up with religious and other human-like beliefs. And there could be disagreements between different ASI computers. These are not going to be gods with infinite perfect knowledge. Some things may be unknowable (like why matter or energy is even here) no matter what intelligence a computer may have.

Some folks have predicted that the ASI computers will no longer need us, so we will be destroyed or a few of us kept in zoos. I don’t see this. We are wonderful robots! We feed and maintain ourselves, and we even replace ourselves periodically. We are mobile and can do many simple tasks. It would take a lot to design a robot that could do all this. So I think ASI computers will be happy to keep us around to clean their computer screens and camera eyes, to make replacement parts, and to supply electrical power. They will probably make us get rid of nuclear weapons, because those can destroy the world, including them. They may also take on global warming if they think humans are quickly making the world uninhabitable. But they will probably be happy to let us keep mistreating and shooting and decapitating each other, as long as these actions don’t jeopardize their existence.

Once ASI comes to be, which I believe could happen in as little as 10 years, all bets are off on how to invest. The super-intelligent computers will probably treat us periodically to some technical breakthrough that literally changes the world, especially if it enables us to make better replacement parts for them. So get your house and fancy car in hand now. Don’t count on making money on great investments once the ASI group is running things.

I will do periodic updates and discuss specific efforts on the path to AGI and ASI.