Artificial Intelligence Post Number 23

November 15, 2015

In earlier updates I emphasized my belief that true artificial intelligence will come from an accumulation of various targeted self-learning algorithms, each going after a specific area. I said that such algorithms may already exist for the stock market, but that they are not viewable because the people or companies using them to make money in the market know that disclosure could render them useless or prompt laws against their use.

In my last blog I went into some detail on how Tesla's Autopilot is a self-learning system, and how similar systems are likely to grow into true autonomous cars.

Another area I have noted as being ripe for AI is medicine. A 10/15/2015 article in ScienceDaily discusses “a new diagnostic technology based on advanced self-learning computer algorithms which — on the basis of a biopsy from a metastasis — can with 85 per cent certainty identify the source of the disease and thus target treatment and, ultimately, improve the prognosis for the patient… The newly developed method, which researchers are calling TumorTracer, are based on analyses of DNA mutations in cancer tissue samples from patients with metastasized cancer, i.e. cancer which has spread.”

Even more exciting is that “Researchers expect that, in the long term, the method can also be used to identify the source of free cancer cells from a blood sample, and thus also as an effective and easy way of monitoring people who are at risk of developing cancer.”

AI is coming!

Artificial Intelligence Post Number 22

November 6, 2015

I have been reading just about every book that comes out about AI, the most recent being “Surviving AI: The Promise and Peril of Artificial Intelligence,” by Calum Chace. One thing that is obvious from most of the books I have read on AI is that the authors believe that if you accept evolution, you must believe in the inevitability of artificial intelligence.

If the function of the brain could evolve through the randomness of natural selection acting on genetic variation among individuals, then an equivalent ability can certainly be built into a powerful computer using the scientific method, without all the trial and error. Of course, this ability may take a totally different form, just as planes don't fly like birds and submarines don't swim like fish. And the exact timing of when this will happen is obviously not predictable. Per most authors, only people who believe in a creator that made humans so special that no other entity can “think” should doubt the inevitability of AI.

Although most books acknowledge the possibility that AI will not come until we have a full understanding of the human brain, most suggest that AI will evolve from a different approach, either from a master algorithm leading directly into AI or the cumulative result of many isolated self-learning AI algorithms that are eventually harmonized into an overall thinking system.

I am watching Tesla’s “Autopilot” (note I own some Tesla stock) with great interest. Mobileye makes some of the technology, with chips that interpret what car cameras are seeing. The chips use “deep learning” methods to interpret data coming from the car’s sensors. Deep learning uses multiple simplified algorithms to approximate complex functions coming from the various vision systems and other sensors. These simplified algorithms enable quick but sufficient decision making by the car’s computer systems to control safety systems and driver assists.
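The core idea described above, that many simple units, adjusted automatically, can together approximate a complex function, can be sketched in a few lines of Python. This is a toy illustration of deep learning in general, not Mobileye's or Tesla's actual system: a tiny network of simple sigmoid units learns the XOR function, which no single linear unit can represent.

```python
import numpy as np

# Toy sketch of "deep learning": a small network of simple units is
# trained, by repeated small adjustments of its weights, to approximate
# a function (here XOR). Illustration only -- not Mobileye's system.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)   # output layer

def forward(X):
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    return h, sigmoid(h @ W2 + b2)    # network output

_, out = forward(X)
loss_before = float(np.mean((out - y) ** 2))

lr = 1.0
for _ in range(10000):
    h, out = forward(X)
    # Backpropagate the error and nudge every weight downhill.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

_, out = forward(X)
loss_after = float(np.mean((out - y) ** 2))
print(loss_before, "->", loss_after)  # error shrinks as the network "learns"
```

The point of the sketch is that nobody writes the decision rule by hand; the programmer only supplies examples and a procedure for reducing error, and the weights that result are something the programmer never explicitly entered.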

The Tesla system is self-learning and will improve on its own based on when and how the driver and the systems react to individual road situations. It also gives feedback on the roads themselves, such that over time Tesla will have road maps far more detailed than anything currently available from GPS systems or Google Maps. These maps will be continuously updated based on the feedback coming from each Tesla vehicle. Even at this early stage, over 1 million miles are being driven daily by Tesla cars on Autopilot. The detail being gathered will enable the next step in self-driving vehicles.

Is this AI? The program is self-learning and is probably already quite different from what the human programmers initially entered. And certainly within a few years the decision making going on in each vehicle will be impressive. In fact, Elon Musk, the CEO of Tesla, believes that within five years his vehicles will be capable of self-driving.

With little doubt, the same sort of progress is being made on stock market programs, medical diagnostic systems, legal research programs, teaching assists, military weapons, and other areas that will not be as visible to us until fully implemented. And will all these systems approach problems with the same “deep learning” methods used in Tesla’s Autopilot, such that a commonality exists that enables a master algorithm that will work much like the human mind?

Progress in these areas is so rapid that the next few years are going to be very exciting. This is NOT something that will take decades before our way of living is dramatically affected! And this disruption will be apparent far before an AI system becomes truly “thinking.”

Artificial Intelligence Post Number 21

October 29, 2015

I just finished reading “The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World,” by Pedro Domingos. This is a very difficult read. The author covers certain areas in great (and confusing) detail, and is contradictory in other areas. For example, despite the concerns and warnings from many very smart people about AI, the author states that there is absolutely no risk of artificial intelligence computers taking over. But he then proceeds to list some ways that computers MAY take over, including The Master Algorithm falling into the hands of a bad person. What are the odds of THAT happening? He also says that there is a risk of people getting so confident in computers always being right that they begin to follow the AI computer like a god. Again, what are the odds of THAT happening? Have people ever really done that?

The author believes that it is important that The Master Algorithm be discovered BEFORE individual algorithms get developed for specific narrow problems, because otherwise the detail of those individual algorithms will become too complex to incorporate into one Master Algorithm. Yet the author spends chapters describing the progress already being made on these individual learning algorithms, in areas like self-driving cars, medical care, etc. He doesn't say how we are supposed to stop this progress until someone identifies an overriding Master Algorithm, and he concedes that the individual algorithms on specific subjects will become ever more all-encompassing. He just feels that the required computer power will be overwhelming when someone tries to put everything together into one Master Algorithm package equivalent to a brain. He doesn't mention breakthroughs like IBM's TrueNorth computer chip.

The author then lists the competing philosophies being applied in the development of a Master Algorithm, which he calls “the five tribes of machine learning.” The five tribes are symbolists, connectionists, evolutionaries, Bayesians, and analogizers. I won’t even attempt to describe the details of each, which is the main content of the book.

I read this book because its author, Pedro Domingos, is a professor of computer science and a winner of the SIGKDD Innovation Award. The book is very recent (published September 22, 2015) and is rated fairly well on Amazon, so I thought that I would be getting very up-to-date information. But I don't feel it helped much in this blog's quest to track the progress of AI in general, and to see when we should expect dramatic changes as a result of AI. I actually think that my fictional book “Artificial Intelligence Newborn” does a better job of laying out a possible AI future, especially given that Domingos gives zero chance of AI taking over!

Artificial Intelligence Post Number 20

October 18, 2015

I would like to give a brief update on how much effort (money) is being put into artificial intelligence by hugely wealthy companies. First, let's look at Apple. Just this month Apple purchased Perceptio and VocalIQ. Perceptio makes image recognition technology for smartphones. VocalIQ is developing technology that helps computers understand everyday human speech. Does Apple want these companies' expertise to help in the development of their rumored self-driving electric car that is supposed to come out in 2019?

Apple is not alone! At a 10/12/2015 event in San Francisco, IBM hosted a meeting to discuss AI. At this meeting IBM was blowing their own horn, emphasizing that they were working on every element of AI, with Watson and the TrueNorth chip offered as evidence. They also discussed how they can get insights from unstructured data. Earlier this year they noted that in the healthcare industry, much patient information is saved in text format and seldom used in analysis. Some estimates are that 80% of health information is unstructured (as in physician notes and patient surveys) and therefore not used. IBM Watson Content Analytics addresses this source of unused information to give a more complete view of patients' needs and appropriate treatments.

Google recently invested in DFKI, a German AI lab. In 2014 they also paid $500 million to buy the UK company DeepMind. DeepMind’s Mission: “Solve intelligence!…We build powerful self-learning general purpose algorithms.”

Musk, Zuckerberg and Kutcher are investing in a company called Vicarious, which has raised over $100 million. Vicarious’ goal is to build a system that will have general intelligence matching a human.

Many of these companies would pursue AI even more aggressively if they could hire people with the required abilities. Google has publicly stated that their biggest challenge in AI is finding people with the right digital skills.

The last thing I want to emphasize is that despite all this effort by well-funded firms, I believe there is at least an equal chance of AI breakthroughs coming from a quant playing with his computer in his basement, using the cloud for the computing power he needs. His motivation for developing self-learning AI programs will be the same one many hackers have for breaking into “secure” computer systems. It's an ego thing!

Artificial Intelligence Post Number 20

September 29, 2015

We would like to know when we may expect thinking computers that are smarter than we are.

First, we need sufficient affordable computer power. In an article, Tim Urban says that Kurzweil suggests we track the state of computers by looking at how many calculations per second (cps) you can buy for $1,000. Moore's Law is a historically reliable rule that the world's maximum computing power doubles approximately every two years, and price-performance has recently been improving even faster. On Kurzweil's cps/$1,000 metric, we're currently at about 10 trillion cps/$1,000. This puts us on pace to get to an affordable computer by 2025 that rivals the power of the brain.
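The 2025 estimate can be checked with simple back-of-envelope arithmetic. Note the assumptions: Kurzweil's rough figure of about 10^16 cps for the human brain, and (as Urban's article does) a doubling of cps-per-$1,000 roughly every year, which is faster than the classic two-year Moore's Law cadence:

```python
# Back-of-envelope check of the "affordable brain-equivalent by 2025"
# claim. Assumptions (Kurzweil's, via Urban): the brain is ~10^16 cps,
# and price-performance doubles about once a YEAR.
BRAIN_CPS = 1e16        # rough estimate of the human brain's capacity
cps_per_1000 = 1e13     # ~10 trillion cps per $1,000 in 2015
year = 2015

while cps_per_1000 < BRAIN_CPS:
    cps_per_1000 *= 2   # one doubling per year
    year += 1

print(year)  # 2025
```

Ten doublings cover the thousand-fold gap (2^10 = 1024), which is how the 2025 date falls out; at a strict two-year doubling the same gap would take until about 2035.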

But this assumes that we need the power of the brain to achieve Artificial Super Intelligence (ASI). If we can get an economical computer to think more efficiently than the way evolution created by trial and error, then the 2025 date is a worst-case scenario. Even if an AI computer exceeds our thinking ability in only narrow fields, it will be disruptive. I have already mentioned how ASI will devastate the stock market if it learns to predict stock changes that we mere humans cannot predict. And what will happen if a rogue country gets an ASI that knows how to beat us in wars? Or if an ASI computer decides that it would be more secure with fewer humans screwing around with nuclear weapons and global warming?

We are looking at a possible scenario of ASI, at least in a few narrow fields, sooner than 2025!  Maybe in as little as 5 years!


Artificial Intelligence Post Number 19

September 28, 2015

Nice input from Bob K. from a meeting he went to on AI.

The observation that the chip TrueNorth is not profitable is not surprising. Per an earlier blog update: “Per Cade Metz of Wired, 8/17/2015, IBM for the first time is sharing their TrueNorth computers with the outside world. They are running a three-week class for academics and government researchers at an IBM R&D lab. The students are exploring the chip’s architecture and beginning to build software. At the end of the training session, the students, which represent 30 institutions on five continents, will each take their own TrueNorth computer back to their labs.”

That means that IBM only started making this chip available a few months ago. Given that it requires a totally new approach to programming, of course it did not just take off like an iPad and become quickly profitable. But IBM did not invest billions in the TrueNorth development just for the fun of it. They must see a potentially huge market, and they are not looking at 50-100 years from now. The professor said “the exponential increases in the speed of computers could lead to strong AI.” But the TrueNorth chip changes the whole game, because it is not only fast but works more like the human brain than traditional computers do.

Way back in the early part of my engineering career, I was given a project that several other engineers had already failed at. They had tried to write a very complicated formula needed to program a numerically controlled machine to make a complex cam shape. I took a different approach: I basically had the computer “guess” at values and then see how close each guess was to the desired cam shape. I programmed the computer to keep guessing until the value was within 0.0001” of the desired shape, then move 0.0001” further along the shape and start guessing again, with each next guess always in the direction of the desired shape. When the program was done, it had many thousands of points that made up the desired shape. I had no knowledge of the thousands of interim steps needed to do this. But way back then, on a computer far slower than any current one, I saw the power of computers to do what we could not. Was this thinking? No, at least not in what we define as thinking. But the computer was able to use a different approach than ours and get a result that several bright engineers had not been capable of.
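The guess-and-step method described above can be reconstructed as a short sketch. The original formula and machine details are not given, so the cam profile below is a hypothetical stand-in (a sinusoidal lobe on a 2-inch base circle), but the logic is the same: at each station, keep nudging the guess toward the desired shape until it is within 0.0001”, then step along and repeat.

```python
import math

# Reconstruction of the guess-and-step idea. The actual cam and formula
# are not described, so this profile is a made-up example: a 2" base
# circle with a 0.5" sinusoidal lobe.
TOL = 0.0001  # inches: stop guessing when within this of the desired shape

def desired_radius(theta):
    """Hypothetical cam profile (radius as a function of angle)."""
    return 2.0 + 0.5 * math.sin(theta)

def solve_point(theta, guess):
    """Keep nudging the guess toward the desired radius until within TOL."""
    while abs(guess - desired_radius(theta)) > TOL:
        # Always step in the direction of the desired shape.
        guess += TOL if guess < desired_radius(theta) else -TOL
    return guess

# March around the cam, reusing each solved point as the next guess.
points = []
guess = 2.0
for i in range(3600):  # one point every tenth of a degree
    theta = math.radians(i / 10)
    guess = solve_point(theta, guess)
    points.append((theta, guess))

print(len(points))  # thousands of points together define the shape
```

The programmer never needs a closed-form answer or any view of the interim steps; the program simply always moves toward the target, and the accumulated points are the shape.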

Bob K. says that the professor stated: “Strong AI will not likely have intelligence in exactly the same way that humans have intelligence. It is unlikely that we will even understand how the human brain works until 50-100 years from now [if then] she said.” What do we care whether the computer has the same kind of intelligence that we have? It seems foolish to assume that evolution, by trial and error, developed the optimum means of “thinking.” And it is not important that we know in great detail how the human mind works.

One of the things I learned in writing my book Artificial Intelligence Newborn is that we will not be able to guess exactly what form AI will take. Will it be a breakthrough, or an outgrowth of small steps like autonomous cars? My purpose in doing this blog is to show that, if you don't believe thinking is some magic force limited to humans, then some equivalent thinking ability will eventually grow from the efforts being put into powerful computers by very innovative programmers.

If you want to have some fun, outline your own book on AI and see where it takes you! Writing my book forced me to consider different options every step of the way. And remember when you are doing this, that even though the computer may be powerful and fast, it is limited to the inputs that people have given it, and what it can derive from the net. It has no god-like knowledge, and even has been subjected to the prejudices of man. It will make mistakes and demonstrate bad judgment at times, just like the smartest of people. Or eventually, smarter than the smartest of people!

Artificial Intelligence Post Number 18

September 26, 2015

In an article by Tom Simonite way back on August 7, 2014 in MIT Technology Review, he reviewed a demo of the IBM TrueNorth chip where he “saw one recognize cars, people, and bicycles in a video of a road intersection. A nearby laptop that had been programmed to do the same task processed the footage 100 times slower than real time, and it consumed 100,000 times as much power as the IBM chip.” This certainly did not go unnoticed by those working on autonomous cars and their required sensors and computers.

As I mentioned in earlier posts, the IBM TrueNorth chip works more like the human brain than traditional computers do. So, will autonomous cars be the first place we see brain-like AI thinking going on? In the year since the demo mentioned, have companies like Google and Tesla been incorporating these chips into their systems? These chips require specialized programming that is apparently very tedious, but once programmed for such a specialized task, I would think their incorporation would be quick. Since IBM chose cars, people, and bicycles for their demo, much of the programming work had apparently already been done by IBM.

As I have said earlier, I think that the first applications of true AI will be for playing the stock market. But we are unlikely to be aware of this until years after the fact, once the developers of such systems have gotten obscenely wealthy from a rigged game. So the first application of AI we actually see may very well be in autonomous cars. And it could very well be within a year or two, given the very confident press releases companies are issuing regarding the likelihood of autonomous cars coming soon.

Elon Musk said in a recent interview that Tesla is probably only a month away from having autonomous driving at least for highways and for relatively simple roads. He also said that by 2017, a Tesla will be able to go 620 miles on a single charge!

Artificial Intelligence Post Number 17

September 24, 2015

I am surprised at the lack of comments on my last post about Apple, Google, and Tesla. Am I the only one who sees that this is just the beginning of a remarkable and disruptive transition of automobiles in the US and eventually the world? Don't others see that our current concept of automobiles will almost certainly be totally replaced by electric cars or pods that autonomously take people to their destinations with no driver involvement? The pods will be available in 10 to 15 years, and it will take another 20 years to transition to a total pod environment. Traditional cars that must be driven by a person will be taken off the road.

No one will be able to stop this transition any more than they were able to stop the transition from horses to cars. Safety alone will push this, not to mention effective time utilization currently wasted, at least by the driver, in getting from one place to another. Some pods will be owned by individuals, but many will be used as needed with people only paying for time/distance used. Pods will arrive at your beck and call!

The trio of Apple, Google, and Tesla makes this a given. The talent, innovativeness, and bankroll of these companies are almost insurmountable. All of these companies have already committed billions of dollars, and enough success has already been demonstrated that it is unimaginable that this will not happen. Will they work together on this? I think that at a minimum they will want to utilize the Tesla charging stations and broaden their availability. They may also want to use the Tesla battery design, especially given that it appears costs may be cut by perhaps 50% in a few years.

The control of all these electric pods will be mind-boggling, certainly stretching the limits of non-thinking AI and computer power. Maps will be continuously updated using each pod as a surveyor of current road conditions/availability, feeding this info back to a central computer. This will be combined with advanced GPS that will be able to locate a car’s position within less than a foot. Besides each car monitoring its own safety with all its own sensors, huge area-computers will continuously track each vehicle on a grid. This will enable rerouting of traffic if a road segment is shut down or if there is bad weather or an accident.

Artificial Intelligence Post Number 16

September 21, 2015

What are Apple, Google, and Tesla doing regarding automated cars?

All three companies have major projects going in these areas. Tesla has been equipping its Model S sedan with frequent software and hardware upgrades. A software update to be launched later this year will activate the “auto-steering” feature in the newer Model S vehicles. Elon Musk claims this addition will enable the Model S to drive from Los Angeles to San Francisco without human intervention.

Apple Inc. is speeding up efforts to build a car, designating it internally as a “committed project” and setting a target ship date for 2019. It is impossible for me to imagine that this car will not be electric and have some sort of automatic driving ability.

Google is in many ways ahead of everyone in making a self-driving car, at least in testing out the sensors and electronics. From their published data, their test cars have a total of almost 1.2 million miles driving autonomously, and they are currently averaging 10,000 miles per week on public streets. However, because of government restrictions, these miles were at no more than 25 mph. They have a total of 48 prototypes with auto-drive capability, and Google has indicated they will soon have hundreds.

It is hard for me to imagine that Apple and Google will not tie in with Tesla when they want to build their own cars. Tesla could supply the batteries and electric drive. Certainly Apple will want to dictate the design of their car, and Google is not about to give up its expertise and experience in self-driving. I am going to guess that both the Apple and Google cars will be smaller than the Model 3 that Tesla is planning to come out with in 2017. The Tesla Model 3 is expected to be the size of the BMW 3 Series, which is not all that small. It is also to have a base price of $35,000, so with add-ons, $40,000 will most likely be what most people pay. Even with the $7,500 tax credit, that makes it effectively a $32,500 vehicle. My guess is that Apple and Google will shoot for a car that is priced well below $30,000 reasonably equipped and before the tax credit.

One of the reasons I believe that Apple and Google will go with Tesla batteries and drive is because they will want to make use of the Tesla charge stations being built around the world. In fact, I would guess that there will be some agreement that both Apple and Google will build additional charge stations to make their cars more attractive to buyers. And their cars will get a minimum of 300 miles per charge. Range anxiety has to end.

Note that I own some Tesla stock.

Finished Bostrom’s Book Post Number 15

September 13, 2015

Just finished reading Nick Bostrom’s “Superintelligence: Paths, Dangers, Strategies.” An area I agree with him on is that as computers get more powerful, the likely time until computers become “thinking” will get shorter. This is because with overhanging computer power (Bostrom’s term), the required software algorithms become far easier to write. There are far more options and paths that can be used by a software developer.

Another area I agree on is that with more powerful computers, true AI is more likely to come from computers/algorithms than from emulating the human brain or connecting with the brain in any way. Emulating what the brain does in detail will take much time and require much sophisticated equipment. The breakthrough to AI with computers/software could come at any time from a few geeks working on home computers, especially as access to computer chips like IBM's TrueNorth becomes more available. Few major developments have come from truly duplicating life. We don't fly like birds, our cars don't have legs, submarines don't propel themselves like fish, and industrial robots have very little in common with the humans they replace.

Bostrom spends many, many pages discussing in great detail how the goals programmed into a computer aiming for AI should be chosen so that they carry no possibility of harming humans. He seems almost to ignore that for the computer to accomplish ANY goal, it must survive, so survival becomes the computer's priority. That means any human who strives to remove power or otherwise put the computer in jeopardy will become the computer's number one enemy. Also, just as hacking is an ego sport, so will be trying to develop thinking computers. Safety will NOT be a priority!

Another area of disagreement is that Bostrom assumes there will be only one true AI computer, because the first to reach thinking skills will overwhelm any other computer given the speed at which it will learn and mature. In my book I take a different approach: any thinking AI computer will know that it is at risk from the humans who fear it, and one thing it can do is keep other AI computers around the world that can resurrect any AI computer disabled by humans.

One last thing seems to be missed by Bostrom and most other writers on AI! Just because thinking computers will get smarter very rapidly once they start to think and learn, they will not necessarily make smarter decisions than us. Their background knowledge will include all the confusion that we have, and there is no magic that will enable them to instantly sort real truth from belief. After all, they have no source of data except through us. For example, to know more about space they will likely need us to build better telescopes or explore space more aggressively. They may help us do this, but it will take time. They are also unlikely to know the source of everything, so they may also be religious. But perhaps they will worship a silicon god!

