Archive for December, 2015

Artificial Intelligence Post Number 25

December 14, 2015

What is Elon Musk, the CEO of Tesla and SpaceX, up to? Many months ago he invested in a company called DeepMind, with the intent of monitoring the advance of AI to make sure it didn't go in evil directions that could hurt mankind. Then Google bought DeepMind. Now Elon and several others have committed over a billion dollars to a non-profit called OpenAI. It will be co-chaired by Elon and Sam Altman, the CEO of Y Combinator. They already have eight skilled researchers and are in the process of hiring more.

Per the company's name, they plan to make everything they do public and open-sourced. They are not just going to be monitoring AI's progress; they will be pushing the AI envelope themselves. Their goal is that once true AI happens, it will not be only in the hands of a few who could use it for evil purposes. Similar to the NRA saying that only a good guy with a gun can stop a bad guy with a gun, OpenAI hopes to have many good guys with thinking AI computers that can minimize or eliminate the power of an evil person, country, or company wielding AI that is potentially smarter than a million people. Amazon Web Services is donating much computing power and infrastructure. Elon has stated that much information will be shared from Tesla and SpaceX.

It is my guess that even Elon has been surprised at the progress of self-driving cars and the intelligence of the self-learning systems Tesla has developed. In fact, Tesla has recently advertised for more programmers to work on these systems, and Elon has stated that he will interview the candidates himself and that the group will report directly to him.

I am surprised that I am not getting more comments on this blog. Am I the only one who believes that we may be seeing disruptive gains in AI that exceed the most ambitious timelines "experts" have projected?

Artificial Intelligence Post Number 24

December 6, 2015

I like to read recent books related to AI. "Artificial Intelligence: The God Killer," by Zed Marston, was published on October 3, 2015, and despite being a short book, it has a 5-star rating on Amazon. It is an easy read.

I believe that this book, like many books on AI, makes a serious mistake in assuming that once computers have abilities exceeding those of humans, they will attain almost god-like knowledge and wisdom. But how will this happen? The computers will have to build on the same knowledge base that humans use. They will just do it faster and with more accuracy. No unique knowledge on questions such as "why are we here?" or "why is there anything?" or even "does the theory of evolution really explain everything?" will suddenly appear for the computers! The computers will do a better job of identifying the inconsistencies in the traditional religious texts, but people already ignore those inconsistencies. Sure, AI may help us build better telescopes or advanced rocketry for space exploration; but until those data are available, it is unlikely that we are going to make breakthroughs or gain a better understanding of the basic philosophical questions that often drive people to religion or a belief in some creator. Until humans (and maybe even computers) have a better explanation for their existence and the meaning of life, the need for humans to believe in an overall creator will likely continue. In fact, AI may even strengthen this belief, because few people will want to accept computers as superior in any way. They will continue to believe that man was created in God's image, and that we are special.

I have read the Old Testament, the New Testament, and the Qur'an. My personal belief is that each is flawed with huge inconsistencies and biases. But others have read one or more of these books and become (or remained) believers. It isn't that the readers don't see the inconsistencies. It is just that in most cases they give these religious sources huge latitude, because the books were written by people, not their God, and often written many years after the events described. These believers also don't see any better explanations. Would not a computer give religion the same latitude, especially given that there seems to be no real explanation of the existence of anything that can be scientifically tested and explained with real confidence? Again, computers may not recognize religion as a strong explanation for anything; but they are unlikely to rule out all religious beliefs without having alternative testable explanations for the basic existence questions.

In my book "Artificial Intelligence Newborn – It is 2025, and I Am Here," I include the effect of religion. Although the book is fiction, it is an attempt to present a plausible future once computers gain the ability to think.