Artificial Intelligence Post Number 14

When I started writing about Artificial Intelligence, I was convinced that AI would be more disruptive to our economy and to the stock market than most people were assuming. The more I read and research the subject, the more I believe this to be true. And it is true even if there is a secret sauce required for computers to truly think, and we can’t discover that sauce. I don’t believe there is such a secret sauce, but no one knows for sure.

But let’s look at the effect of Artificial Narrow Intelligence (ANI), which is already being applied in many areas like robotics and driving assists. Many professions involve a large component of repeatable work that can be done without the mechanical assistance of advanced robots or other mechanical devices; teaching, accounting, law, and medical diagnostics are a few examples. In the US, there are roughly 3.5 million teachers, 1.3 million accountants, 1.2 million lawyers, and 1.0 million doctors, a total of 7 million people in these four professions. If AI reduces the number of people required in these four fields by 50%, it would force 3.5 million people into other professions, or put them out of work. That would be disruptive! The recent Bureau of Labor Statistics report shows the number of unemployed in the US at 8 million. That would jump to 11.5 million with the additional people out of work, even though there would be some trickle-down effect on who ends up actually being unemployed. And the total would be higher when other jobs indirectly affected are included.
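
To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch in Python of the figures above; the employment numbers and the 50% reduction are the rough assumptions already stated, not new data.

    # Rough figures cited above, in millions of people.
    employment = {"teachers": 3.5, "accountants": 1.3, "lawyers": 1.2, "doctors": 1.0}

    total = sum(employment.values())       # 7.0 million across the four professions
    displaced = 0.50 * total               # 3.5 million if ANI cuts required staff in half
    unemployed_now = 8.0                   # recent Bureau of Labor Statistics figure
    unemployed_after = unemployed_now + displaced

    print(f"Displaced: {displaced:.1f} million")
    print(f"Unemployed after shift: {unemployed_after:.1f} million")   # 11.5 million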

Can ANI really reduce needed employment in these fields by 50%? Let’s look at the teaching field in more detail, using a second-grade student learning to read as an example. There is a lot of diversity in reading skills; more, in fact, than current teachers can effectively handle in a normal-sized classroom. But let’s assume each child can go into his or her own cubicle with its own computer. The child verbally signs on, and the computer asks what teacher he or she would like: a superhero, a cartoon character, a male or female teacher, and so on. The chosen teacher will appear on the screen and talk to the student in the appropriate voice, addressing the child personally by name. The computer will have in its database a complete history of the child’s background and reading level, including any issues like difficulty understanding reading material, a speech impediment, attention deficit, and so on. In the computer’s memory would be appropriate actions for all these issues, drawn from reading experts and then further refined by actual past experience with this specific student.
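
As a purely illustrative sketch of the kind of record such a tutoring computer might keep (every field and name below is hypothetical, not a description of any real product), in Python:

    from dataclasses import dataclass, field

    @dataclass
    class StudentProfile:
        # Hypothetical per-child record for the tutoring scenario described above.
        name: str
        reading_level: float                      # e.g. 2.3 = early second grade
        persona: str                              # chosen on-screen teacher: "superhero", "cartoon", ...
        known_issues: list = field(default_factory=list)      # e.g. ["speech impediment"]
        session_history: list = field(default_factory=list)   # past sessions, used to refine responses

    profile = StudentProfile(name="Ava", reading_level=2.3, persona="superhero",
                             known_issues=["attention deficit"])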

The child would start to read an appropriate book on something like a Kindle rather than on the computer screen. The student can ask for help with any word, and the computer will monitor reading speed and periodically interrupt to “discuss” the book with the child to make sure the child understands what he or she is reading. At any point, the computer could choose to increase the difficulty of the book being read, or take it down a notch. In these discussions, the computer can also monitor for speech impediments or other things that might have to be flagged for a real teacher.
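
Continuing the hypothetical sketch above, the adjust-up-or-down logic operating on that student profile might look something like this; the thresholds are arbitrary placeholders, not values from reading experts:

    def adjust_reading_level(profile, comprehension_score, words_per_minute):
        # comprehension_score: fraction of "discussion" questions answered well (0.0 to 1.0).
        # words_per_minute: measured reading speed for the current passage.
        flags = []
        if comprehension_score > 0.9 and words_per_minute > 90:
            profile.reading_level += 0.1          # take the difficulty up a notch
        elif comprehension_score < 0.6:
            profile.reading_level = max(1.0, profile.reading_level - 0.1)
            flags.append("low comprehension - flag for the human teacher")
        profile.session_history.append(
            {"score": comprehension_score, "wpm": words_per_minute, "flags": flags})
        return flags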

Note that current computers, sensors, speech capability, and algorithms are capable of doing all of the above, and work in these areas is already ongoing. It hasn’t reached the level described above, but that is just a matter of time; probably just a few years. It will then take some time to implement. But since these systems will be better than any traditional classroom, they will initially be financed through private schools and elite school districts. So even though the technology will perhaps eventually be most valuable in poorer school districts, its development will be financed by the wealthy. Once the computers and required software are fully developed, the required investment will be manageable by ALL school districts.

A similar story can be told for the fields of accounting, law and medicine. People will still be needed, but they will be used far more effectively on the exceptions.

Disruptive AI is coming!


19 Responses to “Artificial Intelligence Post Number 14”

  1. Robert Kaufman Says:

    Here is a link to an Intelligence Squared Debate that is specifically on this issue.

    The Robots are coming to take your jobs. Be afraid. Be very afraid.

    http://www.intelligencesquared.com/events/the-robots-are-coming-and-they-will-destroy-our-livelihoods/

    • wbrussee Says:

      Enjoyable debate, especially where the audience asks questions. Note that no one questioned that the lowest level of AI is really coming. The discussion was only on its effect. But then again, the audience would be biased; otherwise they would not have been there!

  2. Robert Kaufman Says:

    Interesting interview with an expert in AI

    https://www.singularityweblog.com/the-future-of-ai-through-a-historians-looking-glass-a-conversation-with-dr-john-maccormick/

  3. Bob Kaufman Says:

    I am reading “Superintelligence” by Nick Bostrom. It is a heavy read, but it really goes into detail on the many possible paths to, and obstacles in the way of, developing an artificial intelligence far superior to humans. Lots of technical stuff. I will have to read it a couple of times. I think it is valuable.

    • wbrussee Says:

      I am reading it too! It is NOT an easy read. I am interested to see what has changed since the book was published, which was only last year. For example, I haven’t seen updates on IBM’s Watson or their TrueNorth chip. But I am only 1/3 into the book.

      I am also enjoying the read to see if the scenarios the author points out for possible future happenings in AI are consistent with what I have in the book I just published. So far we are in sync!

  4. Bob K Says:

    Short article called “Eliminating an Unfriendly AI”. I have no idea how doable these ideas are, but it is food for thought.

    https://singularityweblog.com/eliminating-an-unfriendly-ai/

  5. wbrussee Says:

    So many warnings about AI involve robots. But from what I have read, computer gains are far ahead of robotic gains, as impressive as robotic inroads are. Lots of mechanical issues remain to be resolved before we have any possibility of a robot having overall human capabilities. Therefore, I believe that even if computers become truly thinking, they will use humans, not robots, as their servants.

  6. Bob K Says:

    I am on chapter 8 of your book now and I have a question. I know it’s fiction, but you are trying to make a reasonable case for how it might happen. If the AI was concerned with the effects of a nuclear war on the viability of their existence, couldn’t they just disable the bombs by disabling all the electronics necessary to deliver the bombs, instead of the methods that are in chapters 6 and 7?

  7. wbrussee Says:

    You ask, “couldn’t they (AI) just disable the bombs by disabling all the electronics necessary to deliver the bombs, instead of the methods that are in chapters 6 and 7?”

    Remember, the AI at that point have no empathy, and they want to do what will have the most lasting effect. Killing a few million humans means absolutely nothing to the AI as long as they have enough people left to take care of their needs. And this is easier than disabling all the individual delivery systems. From chapter eight: “Something bigger and more gruesome would be required. Something so awful that even jaded humans could no longer ignore the consequences of nuclear bombs and nuclear war in general.” Disabling the delivery systems would not have that effect because the horror aspect would be missing, so the AI would always have to worry about some country hiding a small nuclear bomb somewhere that could be delivered on a truck.

    The other thing you will see as you get farther in the book is that the AI computers are continuously maturing, and their acts at this point may not be what they would do in the future.

  8. Sheri Danski Says:

    Hi Bruce, I think it is highly unlikely that AI is going to take over. At the least, the popular conception of AI (which I believe you take in your book) needs to be reformulated into what we should properly refer to as superintelligence. Why? Consciousness and human intelligence are emergent, self-organizing properties of a highly complex system that cannot be simulated in a device or machine of lower complexity. Superintelligence then, I believe, will emerge not from machines (the AI view) but from society itself. This is more akin to the cybernetic/global brain view of superintelligence, which, according to most of my research, is the best way to approach this issue. What are your thoughts on this?

  9. wbrussee Says:

    Sheri, you address your comment to Bruce, but I assume it is to me. You say “Consciousness and human intelligence are …a highly complex system that cannot be simulated in a device or machine of lower complexity.”

    I agree. That is why artificial intelligence has not taken over. Yet! But we have a lot of data on what the brain is capable of, both in computational power and speed. We are now on the threshold of matching or beating those numbers with newer parallel processing computer chips. Once we reach those goals, are the computers truly less complex than the brain? Sure, they are different. They are electrical rather than electro-chemical. But our airplanes fly pretty well even though they don’t flap their wings like birds! Evolution doesn’t always give us an optimal design.

    As I have said many times in this blog, if you believe that there is some secret sauce that the brain uses to “think” that we can’t duplicate, then AI will never be a threat. But so far, there is no evidence of that secret sauce. If religion tells someone that thinking is something reserved for man, then they will have a hard time accepting any of this until it actually happens, if indeed it ever does.

    The main reason I got into this is the many extremely smart people of various backgrounds all saying that we should be aware of the risk of computers becoming smarter than us. I can’t ignore those smart people, and everything I have researched since then supports their warnings. I have never been a science fiction fan, so I did not go into this with some prior need to believe. In fact, I am quite skeptical by nature.

  10. Sheri Danski Says:

    Consciousness and human intelligence are the result of complex, self-modifying, organic networks comprised of trillions of connections (constantly rewiring and changing their state of connectivity) while receiving continual sensory input, including from highly complex social interactions. The more we learn about the brain in terms of its internal function and direct relationship to the external world mediated through various sensory organs, the more neuroscientists detail the profound difficulty of simulating this in a computer. A more practical and intellectually sound view is that we are recreating a superhuman brain (superintelligence) at a higher scale through the collective interactions of individuals and technology. Society and technology in this case are in a state of coevolution, where artificial or automated systems serve a similar role as the sub/unconscious processes crucial to human intelligence at an individual level.

  11. wbrussee Says:

    Sheri says: “The more we learn about the brain in terms of its internal function and direct relationship to the external world mediated through various sensory organs, the more neuroscientists detail the profound difficulty of simulating this in a computer.”

    I agree that we won’t be able to directly simulate the brain on a computer. But we may well be able to exceed the resultant OUTPUT of the brain using a computer. We have already seen computers beat humans in playing chess and on Jeopardy. We would probably both agree that these computers were not truly thinking. But they did indeed beat humans in areas that only a relatively few years ago many people did not believe possible!

    As computers continue to broaden in capability and speed, including gaining speech and optical abilities, at what point does their output start looking an awful lot like thinking? Again, that is unless you believe that there is a special sauce that will preclude this. I see no special sauce or limits on what computers will be able to do in a relatively few years. They just won’t do it in the same electro-chemical manner as the human brain.

    However, I want to reiterate that the main point of this blog is that even without thinking computers, their power and capacity are already at a point to be disruptive to our society, and all of us would be wise to keep these changes in mind as we invest or choose a profession. When I wrote my recent book I did indeed choose to show thinking computers because that is the risk that many very smart people are pointing out. But a lot of their writings are esoteric and hard to decipher. I just wanted to put a face on this that people can understand and wonder about!

  12. Sheri Danski Says:

    Yes, superintelligence will be highly disruptive but the “thinking” will be aggregated human data, knowledge, intelligence, and behavioral patterns and not derived from the machine itself. This is what makes big data so successful and instructive when it comes to this issue. What big data reveals is that higher order intelligence is directly proportional to the amount of data/information (hence its name – big data) and not the intelligence of the machine or sophistication of the algorithm. What is the leading AI research firm in the world right now? Google. Why? Because it has access to unfettered human data across the globe. Superintelligence is an outgrowth of a complex human-machine (digital-analog) network. If you focus purely on the machine portion (the AI view), you are only seeing half of the picture. Why do many leading scientists miss this? Because most are trained reductionists. However, with complex systems, their behavior is always more than just the sum of their parts–a property that is not only non-linear by nature, but also non-computable by a formal mechanical process. This is why the human element is so crucial and, going back to superintelligence, argues that the right way to conceive of it is not as some type of Terminator or robotic Skynet but as a socio-technological superorganism akin to the global brain/cybernetic viewpoint. If you write another book, I would highly recommend exploring this angle instead. Need a good plot? Read my blog for some ideas: http://aiantichrist.blogspot.com/

  13. wbrussee Says:

    Sheri, your blog has a religious overtone, as shown by this quote from it: “This may be splitting hairs but my point is that it will not be purely artificial or machine-like but reflective of and deeply connected to humanity. As it says in Revelation, the beast rises out of the sea.…Notice the parallel duality between Jesus who is said to be both fully God and fully human.”

    I have said all along that many people with religious beliefs may not be able to accept that a machine can independently ever out think a human. Since I incorporate no religion in my thinking, we will always disagree. And that is okay, because none of us can be sure of what is going to happen in the future!

    But time will tell. And I don’t think that it will be too many years.

    • Sheri Danski Says:

      Actually, I started out in your camp, but after weighing the arguments for and against strong AI from a wide range of disciplines, including modern neuroscience and complexity theory, I slowly moved over to the cybernetic view. Religion/theology was not the reason for this shift, and there are well-known Christian scientists like Frank Tipler, for example, who hold to the AI view of superintelligence as well. He believes, of course, that everything can be explained by physics, which is a form of reductionism, and fails to consider the complex sociological interplay between humans and technology that is more akin to the global brain/cybernetic view.

  14. Oliver Holzfield Says:

    The bible is not a historical document. If it were useful in predicting future events, it would have done so already. Interweaving things like AI with “Anti-Christ” is a great way to sell books, but isn’t useful beyond that. One could also ascribe the term Anti-Christ to things like nuclear weapons.

    “The majority of men prefer delusion to truth. It soothes. It is easy to grasp. Above all, it fits more snugly than the truth into a universe of false appearances—of complex and irrational phenomena”

    Friedrich Nietzsche

  15. Oliver Holzfield Says:

    BTW, the quote from Friedrich Nietzsche is from his wonderful book titled The Antichrist.

    https://en.wikipedia.org/wiki/The_Antichrist_(book)

    • Sheri Danski Says:

      Nietzsche’s philosophy espoused in The Antichrist certainly aligns with the biblical antichrist, or ruler of the world during the end times; namely, the absolute use of merciless power, in exact opposition to the model provided by Jesus, who cared for the weak and oppressed. Of course, Nietzsche is also famous for his concept of the Übermensch (or Superhuman), which I also believe to be a true characterization of the antichrist biblical personage. In many ways, Nietzsche repackaged portions of biblical teaching through his own words and philosophical framework, in fulfillment of what it says we should expect.
