In my recent posts I have emphasized the changes most likely to happen with the simplest forms of AI. There seems little doubt these will happen, because we can already see signs of them in the marketplace: advanced robotic manufacturing and the steady progress toward self-driving cars.
But what about computers that truly think? This is the level of AI (AGI) that is frightening many of our brightest people. Is this just a “sky is falling” thing? After all, you can go back thirty years and find articles, books, and even movies that predicted that by 2001 thinking computers would already be here and causing havoc! Is anything that much different now? Sure, computers are bigger and faster. But virtually all computers in use still rely on the von Neumann sequential-processing architecture developed in 1945. They are all just really fast calculators!
But there IS now a real difference! I have already mentioned IBM’s TrueNorth chip, which does parallel processing and attempts to emulate what the human brain does. And a company called HRL is developing a chip that comes even closer to emulating the brain in that its internal connections adjust to new data – it learns from experience, much like a child! But none of these approaches is truly IDENTICAL to the biological brain. Are they close enough in design and application to actually become thinking entities? I don’t think (pun intended) that anyone truly knows, because we don’t really know how to even define “thinking.” If these chips are loaded up with data and given goals, will they independently find unique paths to reach those goals?

Of course, if they do, that could also be a problem. If you ask a thinking AGI computer how to solve the global warming identified as being caused by humans, its advice to kill off all of mankind may be valid but not welcome! And even if its advice is less terrifying – say it suggests shutting down all coal-fired power plants – is that politically viable even if it is theoretically possible? To eliminate this kind of impractical advice, do AGI computers require moral judgments based on human values? We cannot even agree on what those are within the human race. We quickly get into religious and philosophical issues as we get closer to the possibility of AGI.
I will continue to monitor the progress of AI as best I can by gleaning what I can from published articles, and hopefully blog readers will help with this. Even if we never reach the truly frightening level of AGI – thinking computers smarter than we are – we need to be monitoring the advances that are likely to explode with the introduction of chips like IBM’s TrueNorth.