AI: Sometimes A Banana Is Just A Banana
By admin_45 in Blog
One of our hacks for finding leading-edge disruptive tech content is to look at scientific research websites and review the “Most read” articles and papers. Most are reaffirmations of existing paradigms, but occasionally something novel or provocative pops up. Those are typically worth our attention because they tend to signal that researchers are rethinking an important topic.
Such is the case with a news item titled "What are the limits of deep learning?", currently the third most-read item on the Proceedings of the National Academy of Sciences website. The author, M. Mitchell Waldrop, has been writing about artificial intelligence since the 1980s and holds both a master's degree in journalism and a Ph.D. in particle physics. A Venn diagram of those two disciplines, we suspect, has exactly one person in its intersection, and it's Dr. Waldrop.
Deep learning is one of the most important disruptive technologies of the next 10 years. At its core, it seeks to replicate human cognition. Rather than requiring a programmer to code a computer for every possible task, deep learning can create an artificial intelligence system that makes decisions like a person – in real time, based on the unique facts presented.
Want to see truly autonomous vehicles in coming years, or have a personal assistant on your phone that can do more than just dial a contact or tell you the weather? All that requires meaningful advances in deep learning and AI. By extension, if you want to own disruptive Tech names like Google, Amazon, Apple and Facebook for the next 5-10 years, you have to believe AI will continue to expand its abilities. Meaningfully.
The PNAS article calls the speed of future developments in deep learning/AI into question with a simple example: altering a picture of a banana by pasting a Day-Glo sticker into a corner of the image. When an AI system already trained to identify bananas saw the altered picture, it identified it as a toaster. AI researchers at Google Brain even have a name for this glitch: an "adversarial attack". And, like the common cold in humans, there is currently no cure for the problem.
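For readers who want to see what an adversarial attack looks like in code, below is a minimal sketch of the classic "fast gradient sign method" in PyTorch. To be clear about the hedges: the pretrained model, the epsilon value, and the assumption that pixel values sit in [0, 1] are all illustrative choices of ours; the experiment Waldrop describes used a physical sticker rather than this pixel-level version of the trick.

```python
# Minimal fast-gradient-sign-method (FGSM) sketch in PyTorch.
# The model choice, epsilon, and [0, 1] pixel range are illustrative
# assumptions, not details from the PNAS article.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1")
model.eval()

def fgsm_attack(image, true_label, epsilon=0.03):
    """Nudge every pixel a tiny step against the loss gradient so the
    classifier's confidence in the true label collapses."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # One signed gradient step per pixel is often enough to flip the
    # predicted label, even though a human still sees the same banana.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

banana = torch.rand(1, 3, 224, 224)  # stand-in for a real, preprocessed photo
fooled = fgsm_attack(banana, true_label=954)  # 954 = ImageNet's "banana" class
```

The unsettling part, and the reason researchers compare it to the common cold, is how small the perturbation is: the change is nearly invisible to people, yet it reliably derails the model.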
After a review of AI developments since the 1980s, Waldrop comes to the nut of the problem: something is missing from current AI development. He posits three solutions:
- Instead of focusing on one task (like identifying fruit), AI research should build systems meant to perform multiple tasks. The software could then, in theory, begin to build a common framework in the same way a human baby begins to learn (see the sketch after this list).
- Build multiple AI networks to work in tandem. Google’s DeepMind team has published work on this approach, using one system optimized for a specific task to then feed another system that finishes the job. Not very elegant, but apparently effective.
- Hardwire some basic elements into a deep learning/AI system before it starts to develop. The idea here is to replicate the human brain, which doesn't start from a clean slate the way current AI systems begin their development. One could hardwire banana images to always come up as "banana", eliminating the possibility of an adversarial glitch.
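To make the first of those ideas concrete, here is a hedged sketch of a multi-task network in PyTorch: a single shared trunk learns general-purpose features, and separate heads reuse them for different jobs. The layer sizes, task names, and class counts are all our own illustrative assumptions, not anything specified in the article.

```python
# Sketch of the multi-task idea: one shared trunk feeding several task
# heads, so features learned for one job inform the others. All layer
# sizes and task definitions here are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared feature extractor: the "common framework" both tasks draw on.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.fruit_head = nn.Linear(16, 10)    # 10 hypothetical fruit classes
        self.texture_head = nn.Linear(16, 5)   # 5 hypothetical texture classes

    def forward(self, x):
        features = self.trunk(x)
        return self.fruit_head(features), self.texture_head(features)

net = MultiTaskNet()
fruit_logits, texture_logits = net(torch.randn(1, 3, 64, 64))
```

Training both heads against the same trunk forces the shared layers to learn features that serve more than one purpose, which is the intuition behind the first proposal above.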
The upshot of all this: deep learning/AI is still in a formative stage of development, and when it comes to world-changing applications it is still far from ready for "prime time". This is something that came up in one of the Davos panels we reviewed, with the speakers most intimately involved in AI development expressing exactly the same view. In the end, we suspect that is why Dr. Waldrop's article is getting so much attention from the scientific community. AI and deep learning are at a crossroads, and no one is quite sure what comes next.
As far as what this means for the valuation of Big/Disruptive Tech, we are of two minds.
- On the plus side of the ledger, slower AI progress makes the next 3-5 years more predictable in terms of business models; they will remain largely the same rather than suddenly morphing into something new and unpredictable, driven by an explosion of AI.
- On the negative side, new products like truly autonomous vehicles are clearly further away than the buzz around them would indicate.
In the end, we’ll come down on the positive side. Slower AI adoption will mean less social disruption, with fewer humans losing their jobs to computers than if we see a tidal wave of innovation. That should ease regulatory pressure, a problem this group already has in spades.