Artificial Intelligence Post Number 14

September 4, 2015

When I started writing about Artificial Intelligence, I was convinced that AI would be more disruptive to our economy and to the stock market than most people were assuming. The more I read and research the subject, the more I believe this to be true. And it remains true even if there is some secret sauce required for computers to truly think, and we never discover that sauce. I don’t believe there is such a magic sauce, but no one knows for sure.

But let’s look at the effect of Artificial Narrow Intelligence (ANI), which is already being applied in many areas like robotics and driving assists. Many professional tasks are highly repetitive and require no mechanical assistance from advanced robots or other devices; software alone can do them. Teaching, accounting, law, and medical diagnostics are a few such fields. In the US, there are roughly 3.5 million teachers, 1.3 million accountants, 1.2 million lawyers, and 1.0 million doctors, a total of 7 million people in these four professions. If AI reduces the number of people required in these four fields by 50%, it would force 3.5 million people into other professions, or put them out of work. That would be disruptive! Recent Bureau of Labor Statistics data put the number of unemployed in the US at 8 million. That would jump to 11.5 million with the additional people out of work, even though there would be some trickle-down effect on who ends up actually being unemployed. And the total would be higher still once other, indirectly affected jobs are included.
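The arithmetic behind that claim is easy to check. Here is a quick sketch; the employment figures are just the rough estimates quoted above:

```python
# Rough US employment estimates quoted above, in millions.
professions = {"teachers": 3.5, "accountants": 1.3, "lawyers": 1.2, "doctors": 1.0}

total = sum(professions.values())      # 7.0 million across the four fields
displaced = total * 0.50               # a 50% reduction displaces 3.5 million
unemployed_now = 8.0                   # recent BLS unemployment figure, millions
unemployed_after = unemployed_now + displaced

print(f"Displaced: {displaced:.1f}M, unemployed: {unemployed_after:.1f}M")
# → Displaced: 3.5M, unemployed: 11.5M
```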

Can ANI really reduce needed employment in these fields by 50%? Let’s look at the teaching field in more detail, using a second-grade student learning to read as an example. There is a lot of diversity in reading skills; more, in fact, than teachers can effectively handle in a normal-sized classroom. But let’s assume each child can go into his or her own cubicle with its own computer. The child verbally signs on, and the computer asks what teacher he or she would like: a superhero, a cartoon character, a male or female teacher, and so on. The chosen teacher will appear on the screen and talk to the student in the appropriate voice, addressing the child personally by name. The computer will have in its database a complete history of the child’s background and reading level, including any issues like difficulty in understanding reading material, a speech impediment, attention deficit, and so on. In the computer’s memory would be appropriate actions for all these issues, drawn from reading experts and then further refined by actual past experience with this specific student.

The child would start to read an appropriate book on something like a Kindle rather than on the computer screen. The student can ask for help with any word, and the computer will monitor reading speed and periodically interrupt to “discuss” the book with the child to ensure that the child understands what he or she is reading. At any point, the computer could choose to increase the difficulty of the book being read, or take it down a notch. During the discussions, the computer can also monitor speech impediments or other things that might have to be flagged for a real teacher.
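The step-up-or-down decision described above is simple to express in code. This is only a minimal sketch of that logic; the function name, accuracy thresholds, and level bounds are my own illustrative assumptions, not a real tutoring product’s parameters:

```python
# Minimal sketch of the adaptive-difficulty logic described above.
# Thresholds and level bounds are illustrative assumptions only.

def adjust_level(level: int, comprehension_score: float,
                 min_level: int = 1, max_level: int = 12) -> int:
    """Raise or lower the reading level based on the child's last 'discussion'."""
    if comprehension_score >= 0.9:      # child clearly understands: step up
        return min(level + 1, max_level)
    if comprehension_score < 0.6:       # child is struggling: take it down a notch
        return max(level - 1, min_level)
    return level                        # right in the sweet spot: stay put

level = 4
for score in [0.95, 0.95, 0.5, 0.7]:    # simulated comprehension checks
    level = adjust_level(level, score)
print(level)  # 4 -> 5 -> 6 -> 5, then unchanged -> prints 5
```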

Note that current computers, sensors, speech capability, and algorithms are capable of doing all of the above, and work in these areas is already ongoing. It hasn’t reached the level described here, but that is just a matter of time; probably just a few years. It will then take some time to implement. But since these systems will be better than any traditional classroom, they will first be financed through private schools and elite school districts. So even though this technology will perhaps eventually be most valuable in poorer school districts, its development will be financed by the wealthy. Once the computers and required software are fully developed, the required investment will be manageable by ALL school districts.

A similar story can be told for the fields of accounting, law and medicine. People will still be needed, but they will be used far more effectively on the exceptions.

Disruptive AI is coming!


Artificial Intelligence Post Number 14

August 30, 2015

Here is a book trailer for my new book:

The book is also available on Amazon (at least the Kindle version – the hard copy should be there in a few days).

Artificial Intelligence Post Number 13

August 28, 2015

I have mentioned that semi-autonomous or fully autonomous cars will be one of the first applications of low-level artificial intelligence. The interest in this area just increased with recent highway death and injury data. Per the National Safety Council, the costs of deaths, injuries, and property damage are up 24% versus a year ago, to $152 billion for the first 6 months. To put this in perspective, the projected 2015 US deficit is $486 billion. If the year continues like the first six months, automobile accidents will cost the equivalent of two-thirds of our annual US deficit. This trend of increasing highway deaths is unlikely to reverse, given that “experts” believe the causes are cheaper gas (which encourages people to drive more) along with people using phones and texting while driving, which we do not seem able (or willing) to stop. Note that this upward trend is happening while cars are getting measurably safer! The main problem is the driver, not the car!
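The “two-thirds of the deficit” comparison works out as follows (a quick check using the figures quoted above):

```python
# Checking the "two-thirds of the deficit" comparison made above.
half_year_cost_b = 152                 # NSC estimate, first 6 months of 2015, $ billions
annual_cost_b = half_year_cost_b * 2   # assume the second half looks like the first
deficit_b = 486                        # projected 2015 US federal deficit, $ billions

print(annual_cost_b, round(annual_cost_b / deficit_b, 2))
# → 304 0.63  (roughly two-thirds)
```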

Most car companies are developing various levels of computer and sensor driver aids, and even non-automotive companies like Google and Apple are investing heavily. But the company that is perhaps leading in this area is the Israeli company Mobileye, which expects to have completely automated car technology within three years. Per their website, their EyeQ chip and algorithms are already in approximately 5 million vehicles. Although the EyeQ chip is not as unique as the IBM TrueNorth chip, it does enable what Mobileye calls parallel routing: up to four paths between four master and four slave ports. Autonomous driving is planned for launch in 2016. This will enable hands-free-capable driving at highway speeds and in congested traffic. By 2018 they plan to add country-road and city-traffic capability. They say this addition will be enabled by major algorithm changes they are already working on. Note that they believe the hardware, with relatively minor and identified changes, is already capable.

The reason we are interested in this for AI is that to have a truly automated car, the level of decision making required of the onboard computer is extensive. Maybe not truly “thinking,” but it will appear that way if the car can really handle all the various scenarios it will have to deal with. To make the computer’s decision making as quick and energy efficient as possible, novel computers like those built with the IBM TrueNorth chip will certainly be considered. And the algorithms developed for automated cars will certainly speed up applications in other areas that don’t have the potential $300 billion annual benefit of self-driving cars.

This work will not go unnoticed by those working towards truly thinking AI, because many of the improvements in computer architecture and advanced algorithms will be applicable in other areas, including projects whose goal is to emulate the way the human brain thinks, or its non-biological equivalent.

I have published a novel that puts a possible face on the AI issue. The book is fiction because I don’t have a crystal ball to predict the future. But it does present a story around a possible scenario. The name of the book is Artificial Intelligence Newborn – It is 2025, and I am Here! It is published by and can be purchased from Kellan Publishing. For those who prefer reading on a Kindle, go to Amazon; Amazon will have the hard copy available within a few days.

Artificial Intelligence Post Number 12

August 21, 2015

Per Cade Metz of Wired, 8/17/2015, IBM is for the first time sharing their TrueNorth computers with the outside world. They are running a three-week class for academics and government researchers at an IBM R&D lab. The students are exploring the chip’s architecture and beginning to build software. At the end of the training session, the students, who represent 30 institutions on five continents, will each take their own TrueNorth computer back to their labs. Let the games begin!

In the meantime, President Obama has signed the National Strategic Computing Initiative, an executive order that sets the goal of producing an American supercomputer with an exaflop of processing power. This computer would be 20 times faster than the current fastest supercomputer, which is owned by China. The new computer will use parallel processing, but it will not be as unusual as the IBM TrueNorth chip design and will not require new computer code. Nvidia is creating the supercomputer system, which is initially targeted at the medical field.

As we are seeing, the efforts on supercomputing and on learning to use chips capable for AI are accelerating.

Artificial Intelligence Post Number 11

August 14, 2015

In my recent posts I have emphasized the changes most likely to happen with the simplest form of AI. There seems little doubt these will happen because we can already see signs of them in the marketplace in advanced robotic manufacturing and the progress towards self-driving cars.

But what about computers that truly think? This is the level of AI (AGI) that is frightening many of our brightest people. Is this just a “sky is falling” thing? After all, you can go back thirty years and find articles, books, and even movies that predicted that by 2001 thinking computers would already be here and causing havoc! Is anything that much different now? Sure, computers are bigger and faster. But all the computers in use still use the von Neumann linear processing architecture developed in 1945. They are all just really fast calculators!

But there IS now a real difference! I have already mentioned IBM’s TrueNorth chip, which does parallel processing and attempts to emulate what the human brain does. And a company called HRL is developing a chip that comes even closer to emulating the brain in that its internal connections adjust to new data – it learns from experience, much like a child! But none of these approaches is truly IDENTICAL to the biological brain. Are they close enough in design and application to actually become thinking entities? I don’t think (pun intended) that anyone truly knows, because we don’t really know how to even define “thinking.” If these chips are loaded up with data and given goals, will they independently find unique paths to reach those goals? Of course, if they do, that also could be a problem. If you ask a thinking AGI computer how to solve the global warming issues identified as being caused by humans, its advice to kill all of mankind may be valid but not welcome! Or even if its advice is less terrifying – say, shutting down all coal-fired power plants – is that politically viable, even if it may be theoretically possible? And to eliminate this kind of impractical advice, do AGI computers require some morality judgments based on human values? We cannot even agree on what those are within the human race. We quickly get into religious and philosophical issues as we get closer to the possibility of AGI.

I will continue to monitor the progress of AI, gleaning as much as I can from published articles. Hopefully blog readers will help with this. Even if we never get to the very frightening level of AGI, or to thinking computers smarter than us, we need to be monitoring the advances that are likely to explode with the introduction of chips like the IBM TrueNorth.

Artificial Intelligence Post Number 10

August 9, 2015

In my last update, I showed how IBM is investing billions of dollars, which all but guarantees that a high level of artificial intelligence is going to happen, and it is going to happen soon. The IBM TrueNorth computer chip will give much faster computer speeds at much lower energy use, and it accomplishes this by mimicking much of the parallel processing of the human brain. And while the programming of this chip is evolving, downsized clones of Watson (the computer that won on Jeopardy) are being expanded into many fields and filled with broad knowledge bases. Once TrueNorth is fully operational, it will be relatively simple for IBM to marry the systems together.

To see the eventual effect of all this on the US economy, and on potential investments, we have to look at its broader effect. I already mentioned that fields like law will change dramatically, since so much legal history will be easily obtainable once that information is in a system like Watson. The need for lawyers will be greatly reduced. Doctors and teachers will focus on the relatively few patients and students far outside the norm, because the majority will be well served by programs customized to their own needs, drawing from huge databases. The interaction with the computer will be much different from most current computer programs, because the computer voice will not be discernibly different than if the user were interacting with a live human. This indeed is the Turing Test criterion: that someone should not be able to tell whether they are talking to a computer or a person! This will eliminate the need for many of the call centers often located in India!

This machine intelligence explosion will affect many products. Already mentioned are robots that are far easier to train and can interact more easily with multiple sensors. But perhaps less obvious is its effect on automobiles. Look at any major car manufacturer and how they are stressing their new driver-assist systems. Cars are becoming increasingly smart. Tesla has indicated that they will likely be making their own maps based on their vehicles communicating with each other and back to Tesla. This will keep maps of roads, road conditions, and traffic issues close to real time.

Apple is hiring people who have automotive backgrounds, and they may use some of their massive financial strength to enter the automotive business now that it is becoming so software- and computer-driven. Electric vehicles seem destined to become a bigger part of the industry, since they are so easily controlled by electronics and are “greener” than their internal-combustion competitors. Certainly there is a lot of speculation about Apple and Tesla somehow combining. Perhaps Tesla will supply the vehicle drive, batteries, and base structure to Apple, and Apple will design its own body and electronics for its own car. This would let Apple get into the business more easily, give Tesla some needed cash, and better utilize and expand the charging infrastructure that Tesla has already started. Again, the growth of intelligent cars is making this whole thing relevant. And what will this do to manufacturers like GM, Chrysler, and Ford that are largely still designing vehicles the way they did ten years ago? I don’t think I would like to own their stock!

Already mentioned in earlier updates was China’s economy slowing as robotic manufacturing enables lower-cost production of volume goods in the US. Recent economic data indicate that this is already happening. Sadly, as China slows they may become more combative, which also seems to be the case. Artificial intelligence gains will definitely hurt countries whose economies are driven by volume products produced by cheap labor working in poor conditions. The US economy is likely to continue to grow at its slow but consistent pace. And the stock market will reflect that growth, especially in companies like robotic manufacturers and those making smart systems for teaching, medical care, and the legal profession. Most computer products are likely to gain from the greater efficiencies and downsizing that AI will enable. For example, a computer watch will only be limited in how data can be displayed efficiently, not in its computing power or ability to communicate.

Artificial Intelligence Post Number 9

July 30, 2015

What is IBM up to?

First there was Deep Blue, the IBM computer that beat the world chess champion; then came Watson, which went on to win on Jeopardy. Then there was IBM’s TrueNorth, the computer chip that emulates the parallel processing of the brain. Now we have the newly announced IBM chip that quadruples logic density versus the current best! IBM is aggressively pursuing many of the areas that will eventually lead to AI.

Let’s look at Watson. IBM is backing Watson’s expansion into industry with a $1 billion investment because they expect to make many billions in profits! I already mentioned that Watson is learning Japanese. But it is doing much more. IBM’s Watson is very active in the medical field. Trained by doctors at Memorial Sloan Kettering Cancer Center, IBM Watson for Oncology suggests tailored treatment options by using case histories and the doctors’ expertise. Anthem WellPoint claims that doctors miss early stage lung cancer half the time, whereas Watson catches it 90% of the time. And the size of Watson for these applications has shrunk from room size to pizza-box size.

But there is more. If you go to the Watson Developer Cloud, you can see 14 areas they are working on: Language Translation, Speech to Text, Text to Speech, Tradeoff Analytics, Personality Insights, Natural Language Classifier, Concept Insights, Concept Expansion, Message Resonance, Question and Answer, Tone Analyzer, Relationship Extraction, Visual Recognition, and Visualization Rendering. Many of these areas are very subjective and involve much learning and interpretation. Sounds like AI!

These are not just academic studies; many companies are starting to use Watson. “LifeLearn’s Sofie” helps veterinarians identify treatment options for cats and dogs. “Engeo” uses Watson to assist on environmental issues, especially in emergencies when timely actions are required. “Welltok” uses Watson for personalized self-healthcare assists. “Talkspace” enables users to talk with a licensed therapist confidentially. “Decibel Music Systems” uses Watson to collect and organize qualitative data on musical influence.

IBM has been rather quiet about its TrueNorth chip, other than saying it is working on languages to program the chip, much as Fortran and Basic were developed early in the life of the current computer architecture. But given the efforts being put into Watson, and the speed, energy efficiency, and parallel processing of the TrueNorth chip, there seems little doubt that IBM has no less a goal than to disrupt the whole computer industry and lead the way to AI. The pizza-box-sized Watson will shrink to laptop size, and its thinking will transition to more brain-like reasoning, with generalized background knowledge coupled with limited data input enabling quicker and broader-based thinking.

Artificial Intelligence Post Number 8

July 26, 2015

Artificial Narrow Intelligence (ANI)

Even though I think computers will reach human-level Artificial Intelligence within 10 or 15 years, the effects of Artificial Narrow Intelligence (ANI), which are already well underway, will be extremely disruptive even without major breakthroughs in computer chips or programming algorithms. IBM’s Watson and Apple’s Siri are current examples of ANI. Let’s just extrapolate those technologies perhaps seven years forward and look at their potential.

I do a Read-to-the-Dog program at two libraries with my 125 lb. Newfoundland dog. This is a national program proven to improve a child’s reading fluency dramatically. But here is what I see. I have seven-year-olds coming in who can read chapter books, so they are perhaps reading at a fourth-grade level. I also have seven-year-olds coming in who can barely read the simplest of books. Even the parents are often unaware of what books their children can read, because the children often come in with books completely wrong for their reading abilities. How can any teacher effectively handle a class with such disparity in reading skills? They can’t! Either some kids will be overwhelmed or some kids will be bored. So we begin losing these kids intellectually even at this young age. It doesn’t matter whether a child with weak reading skills has an inherent inability or environmental issues. And a bright child left unchallenged is equally bad. It just is not working. We all know this!

Let’s fast-forward seven years. For reading class, each child goes into his or her own cubicle with its own computer display. A Siri-like voice greets them, asks their name, and then proceeds to work with them on reading. This computer knows exactly where the child is in reading level and challenges the child with just the right level of books. The computer listens to the child read and gently corrects when necessary. The computer also asks content questions to make sure the child understands what is being read, and that the child uses proper grammar when answering. Again, everything is exactly at the child’s learning level, with the computer changing reading difficulty as needed. And the computer will be sure to compliment the child on progress, or even just effort. The room with the cubicles will still need a human monitor to make sure the children are where they should be, but this person would not require teaching skills.

You can obviously do the same thing for math and most other subjects. And these classes could also be done at home for home-schooled children. This whole thing can be done with a Watson-level computer with a Siri-like voice. And the schools won’t even have to buy the computers; they can rent computer hours from the cloud! There is no reason the same programs won’t work in every school, so the cost to implement per school should not be prohibitive. In fact, it is likely to be a cost savings for the school!

Another example! I also take my therapy dog to hospitals three or four times per week. In the heart hospital, in their intensive and critical care units, the patients are wired up to a lot of sensors. In the hall outside each room is a computer showing the outputs from those sensors. The nurses are constantly referring to these monitors, and there are alarms that go off when a measured value goes outside preset limits. The nurses are also looking for trends or changes. But the nurses also have to do patient care, so those hall computer displays are not being watched constantly.

But not to worry! In a separate room down the hall are duplicate displays that are monitored by technicians who do NOT have to do patient care. Each technician watches three screens, and if they see something weird they call the appropriate nurse (or the supervisor) on their cell and explain what they are seeing. This room has 10 technicians and is staffed 24 hours per day, so covering all shifts takes about 40 technicians plus backups. These technicians are costing the hospital close to $3,000,000 per year, and I assume that something similar exists in every critical care unit across the US. Certainly what the technicians are doing can be taught to a Watson-level computer. No breakthroughs are needed.
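What those technicians do, watching for out-of-limits values and for worrying trends, is exactly the kind of rule a Watson-level system could run continuously. Here is a toy sketch; the sensor names, alarm limits, and trend window are all made up for illustration, not taken from any real hospital system:

```python
# Toy sketch of the limit-and-trend monitoring the technicians perform.
# Sensor names, limits, and the trend window are illustrative assumptions.

LIMITS = {"heart_rate": (50, 120), "spo2": (92, 100)}  # preset alarm bounds
TREND_WINDOW = 5                                        # readings to examine

def check(sensor: str, readings: list) -> list:
    alerts = []
    low, high = LIMITS[sensor]
    latest = readings[-1]
    if not (low <= latest <= high):                     # out-of-limits alarm
        alerts.append(f"{sensor}: {latest} outside [{low}, {high}]")
    window = readings[-TREND_WINDOW:]
    if len(window) == TREND_WINDOW and all(
            a > b for a, b in zip(window[1:], window)): # steadily rising values
        alerts.append(f"{sensor}: rising trend {window}")
    return alerts

print(check("heart_rate", [88, 92, 97, 103, 110]))
# → ['heart_rate: rising trend [88, 92, 97, 103, 110]']
```

In-limits but steadily rising readings would get flagged before any alarm limit is crossed, which is precisely the judgment the human monitors add today.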

These two examples are just from my own daily experiences. I am sure that each reader can give similar examples. The point I am making is that AI will be very disruptive to our country and economy even without computers capable of thought. Think of how many teachers and monitoring technicians may lose their jobs! Consider how many universities that offer teaching degrees will lose students. What will happen to teachers’ wages and the teachers’ unions?

Artificial Intelligence Post Number 7

July 25, 2015

In recent blog updates, I emphasized the potential of IBM’s TrueNorth computer chip, which comes close to emulating the brain in its architecture. But Elon Musk and Mark Zuckerberg invested in Vicarious FPC, a company that is writing algorithms that work with traditional computer architecture rather than the more brain-like architecture of TrueNorth. Let’s see if we can understand why they chose to invest in Vicarious FPC.

The current thinking is that the brain works in a novel way. Over time, it acquires generalized knowledge. This knowledge is very abstract, but it can be applied to a wide range of specific data inputs. When the brain gets a specific input about something, it queries its generalized knowledge base to see if there is a possible identification match. Is there a match that is compatible with both the generalized knowledge and the input data? For example, is an out-of-focus image seen in the distance compatible with its generalized knowledge of animals, or of rocks? The specific input could be based on as little as one sample. This enables the brain to quickly zero in on a probable identification rather than accumulating massive amounts of data on every possibility and only then making an identification. This probability approach is one of the primary reasons the human brain is so efficient.
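One way to picture this prior-knowledge-plus-one-sample matching is a tiny Bayesian update. The priors and likelihoods below are invented numbers for illustration, not anything from Vicarious FPC’s actual algorithms:

```python
# Tiny Bayesian illustration of matching one noisy observation against
# generalized prior knowledge. All numbers are invented for illustration.

priors = {"animal": 0.3, "rock": 0.7}        # generalized knowledge: rocks are common here

# How likely is a blurry, *moving* blob under each hypothesis?
likelihood_moving = {"animal": 0.8, "rock": 0.05}

# Posterior ∝ prior × likelihood, from a single observation.
unnorm = {h: priors[h] * likelihood_moving[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 2))  # → animal 0.87
```

Even though the prior favored “rock,” one observation of motion is enough to flip the identification, which is the kind of fast, single-sample inference described above.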

To emulate this in software requires innovative algorithms, but not new computer architecture. So the existing zillions of computers could conceivably use some variation of this approach, and become much faster and more efficient. It would not require new computers and new computer languages as TrueNorth does.

So we basically have three approaches to AI. First, the Watson approach, which requires huge amounts of input data and very powerful computers with traditional architecture. This approach is well on its way and has the advantage of not requiring any new innovations. The second is the Vicarious FPC approach, which mimics the brain’s use of generalized knowledge coupled with limited input requirements. And the third is the IBM TrueNorth computer chip, which emulates the brain’s architecture. Watson has already made its mark in Artificial Narrow Intelligence (ANI). Vicarious will probably take us closer to human-level Artificial General Intelligence (AGI). But a marriage of all three approaches will probably be needed to get true human-level thinking AGI, followed quickly by Artificial Super Intelligence (ASI).

It is worthwhile to take a closer look at the advantages and risks of the Vicarious FPC approach. In humans, we are obviously affected by the quality of the generalized knowledge that we accumulate in our brains. For example, if when we are young we are exposed to very strong religious beliefs, we may disregard any input that is not compatible with our generalized knowledge in this area. We will not be able to truly make an independent judgment on our religious beliefs. The same would be true for any generalized knowledge we may accumulate about race, morality, and so on. So whoever programs the generalized knowledge areas for Vicarious FPC will have to be very careful not to build in prejudices, either directly or inadvertently through the way the software is written. Otherwise we will be duplicating one of the human weaknesses we presumably would not like to replicate.

In a much earlier update, I indicated that I had a novel coming out that puts a face on the AI issue. Although it is fiction, it presents a possible scenario that could occur as AI develops. I will give you more detail once the book is available.

Artificial Intelligence Post Number 6

July 17, 2015


Timeframe for AI Implementation


Elements of AI are already in place.  For example, trades generated by computers now represent 70% of all trades (up from only 30% ten years ago).  And these trades are not just specific orders.  The computers are actually calculating many of the factors that knowledgeable investors use to buy or sell stocks.  They just do it almost instantly.  That is why I believe an individual cannot succeed chasing short-term price swings.  Your only chance is buying a stock that you feel has longer-range growth potential.


As mentioned previously, many driver assists, like lane tracking, are already available, or will be shortly, in many cars.  Completely automated cars are probably three to five years away.  Google is making good progress on this.  Google stock just surged today on their financial results, but there could be some additional long-term gains as they start selling their self-driving car technologies to others.  Mobileye is another stock to consider for automated driving technology.  Tesla is one of many customers of Mobileye.  Mobileye expects to have completely automated car technology in three years.  Note that I own neither Google nor Mobileye stock.  I do own Tesla stock.


For a longer term AI investment, I would consider IBM because of their TrueNorth chip that comes close to emulating how the brain works.  This chip not only has the potential to enable true thinking, it also is very energy efficient.  Because it requires totally different programming skills and abilities, it will be a while before consumer products use this technology.  There are other companies and countries pursuing similar chip technologies and even biological approaches.  But IBM seems to have a big lead, especially when you consider that they can couple this with their Watson success and get the best of both worlds!