By Matthew Putman
I was at a book-launch party this week for cognitive scientist, author, and AI entrepreneur Gary Marcus, whom I've known for over a decade. Andy Aaron, a senior IBM AI researcher, introduced Gary as an "AI skeptic." Gary has founded two AI companies. AI builder/AI skeptic: Is it a contradiction, an oxymoron, a symptom, an occupational hazard…or simply a job description?
Gary's book, cowritten with Ernest Davis, Rebooting AI: Building Artificial Intelligence We Can Trust, explores the gap between the promise of AI and its current achievements. Its basic supposition is that the statistical methods we currently use to train AI have inherent limits in crucial kinds of learning: that "deep learning" does not approach deep understanding, and that progress, particularly in machine learning, will reach a plateau. To illustrate, Gary, the founder of a company acquired by Uber to work on autonomous vehicles, enumerates all the ways AI falls short in that very domain, no matter how many rules it is given.
As a professor, Gary studies firsthand how humans learn while also building algorithms for AI. In his book he observes how his own small child navigates the world…and finds AI significantly lacking in common sense.
I have written before about Gary's books and have worked with him on projects at Nanotronics. I believe his criticisms are directed at the field as challenges, and that he wants to see progress.
I find it hard to see anything through a lens other than that of the CEO of Nanotronics. If Gary is right that a four-year-old child performs better in many instances than some of the most sophisticated AIs, what does that say about the challenge we face at Nanotronics?
I left with this thought in mind: our version of an intelligent factory, AIPC, is not only superior to a factory controlled by a human, but superior to the sum of the multiple people who each control parts of a factory. AIPC builds complex, functional, three-dimensional products. With AIPC, AI makes better choices than existing human methods for controlling factories, yet I still ask where human intuition in a factory cannot be so easily replaced.
The first book of Gary's that I read gave me some clues for putting this together. Kluge: The Haphazard Evolution of the Human Mind was not so much about AI as about the flaws of the evolved human. We are far from perfectly designed machines. We have nimble spines that take terrible abuse throughout life. We have a cranial nerve that loops down around the heart when it could simply stay in the head. We are not optimal in many respects, and yet humans as a species are remarkably well adapted. Gary, someone who has devoted as much energy to human flaws as to AI flaws, is clearly inspired by both.
With Rebooting AI, he challenges us to learn more and teach more. This is always a great place to start. From where I sit, the most important takeaway is not that AI hype exceeds AI reality, but Gary's reexamination of what progress and intelligence mean, whether artificial or human.
AI has seen some great successes, and some of those have superhuman qualities and should be celebrated, but we should always remember the constraints. The ancient game of Go has simple rules. Yes, the possible combinations are astronomically numerous, but one is still restricted to the two-dimensional face of a board of intersecting black lines.
I realize that AIPC, in some ways, resembles the simulacrum of a game world, with rules, data, and rewards. However, if we are in a game, it is one with ever-expanding constraints and ever-shifting rules, one that plays with time, material, and space…and builds real products.
How can play and games invoke the most challenging of human skills, creativity? We must remember that the spirit of play, learning, and creativity is what drives our own intelligence…and that art need not be separated from artificial intelligence.