“Deep learning is a set of algorithms in machine learning that attempt to model high-level abstractions in data by using model architectures composed of multiple non-linear transformations.”
If that makes perfect sense to you, you’re way smarter than me and should probably be working as a computer scientist at Google or something. If you actually do work for Google, good for you. If not, you’re likely still smarter than me (not much to brag about), but it’s me on this side of the page who’s responsible for explaining all that gobbledygook about “deep learning”.
My inability to fully understand deeply complex technical matters is what empowers me with the unique ability to dumb them down for the masses.
“Deep learning” and all that stuff about “algorithms”, “high-level abstractions”, and “non-linear transformations” simply means that machines will be able to soak up vast amounts of information and use it to make predictions. That’s what the human brain is capable of doing (some better than others). In short, machines will be able to learn. They will be able to think. They will be intelligent entities.
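For the technically curious, here’s what “multiple non-linear transformations” looks like in practice: a bare-bones sketch in Python, where data passes through a stack of layers and each layer squashes a weighted sum through a non-linear function. The layer sizes and random weights here are purely illustrative (real networks learn their weights from data); this is a sketch of the idea, not Google’s actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One non-linear transformation: a weighted sum squashed through tanh."""
    return np.tanh(x @ weights + bias)

# A tiny three-layer stack: raw input -> two levels of abstraction -> output.
# (Layer sizes and random weights are illustrative; real networks learn them.)
x = rng.standard_normal(4)                                 # raw input features
h1 = layer(x, rng.standard_normal((4, 8)), np.zeros(8))    # low-level features
h2 = layer(h1, rng.standard_normal((8, 8)), np.zeros(8))   # higher-level features
y = layer(h2, rng.standard_normal((8, 1)), np.zeros(1))    # the prediction
print(y.shape)  # prints (1,)
```

Each layer builds a slightly more abstract description of the input than the one before it. Stack enough of them and you get the “deep” in deep learning.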
This is the holy grail of artificial intelligence (AI): to build a machine that is capable of thinking like a human, but better. To achieve this, you must first build a computer that simulates the human brain’s complex processing and dense web of neural connections.
And that’s what Google is attempting to accomplish with “Google Brain”, the unofficial name for its deep learning project that started in 2011. At the heart of Google Brain is a cluster of more than 10,000 computers that work together to simulate the neural connections of the human brain.
Google Brain made the news in 2012 when 10 million unlabeled still images, pulled at random from YouTube videos, were fed into its neural network, which analyzed them using a basic set of algorithms for recognizing the elemental features of a picture.
After 72 hours of “looking” at the pictures, Google Brain’s pattern-recognition system noticed that a lot of the images shared similar characteristics, a cluster of features it eventually identified as “cats”.
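What Google Brain did there is called unsupervised learning: nobody labeled the pictures “cat”; the system grouped them by shared features on its own. Here’s a toy sketch of that idea in Python, with made-up 2-D feature points standing in for YouTube frames and plain k-means clustering standing in for Google’s far fancier algorithms; all the numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# 100 unlabeled "images", each boiled down to a 2-D feature vector.
# Half share one pattern (near (0, 0)), half share another (near (5, 5)),
# but the algorithm is never told this.
points = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.3, size=(50, 2)),
    rng.normal(loc=(5.0, 5.0), scale=0.3, size=(50, 2)),
])

# Plain k-means: assign each point to its nearest center, then move each
# center to the mean of its points. Repeat until the groups settle.
centers = points[:2].copy()  # start from two arbitrary data points
for _ in range(10):
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    centers = np.array([points[nearest == k].mean(axis=0) for k in range(2)])

print(centers.round(1))  # two centers, one near (0, 0) and one near (5, 5)
```

No labels ever went in, yet two distinct groups come out. Swap the toy points for millions of image features and the simple clustering for a deep neural network, and you have the shape of the cat experiment.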
Yes, there are a lot of cat pictures on the Internet. We all know that. But we know it because we spent years developing our brains to the point where we could recognize “cat”, and then wasted a substantial amount of time scouring the World Wide Web for the perfect cat meme to send to our friends and colleagues, only to conclude that there are, indeed, a lot of cat pictures on the Internet. (Full disclosure: my favorite cat meme is Grumpy Cat.)
A thinking machine has been the promise of AI since the 1950s, when computer scientists made bold predictions that computers rivaling (or surpassing) the capability of the human brain were “right around the corner”.
“AI has gone from failure to failure, with bits of progress. This [deep learning] could be another leapfrog,” said Yann LeCun, a pioneer in the field of deep learning and the head of Facebook’s new Artificial Intelligence laboratory in New York City.
What’s a social networking website like Facebook doing investing at the forefront of AI research?
According to Facebook CEO Mark Zuckerberg, they are seeking to “use new approaches in AI to help make sense of all the content that people share.”
They want to use deep learning to learn even more about you, the better to target ads and tweak the “sponsored” content in your news feed.
Google and Facebook are not the only companies investing heavily in deep-learning initiatives.
“There’s a big rush,” says Facebook’s Yann LeCun, “because we think there’s going to be a quantum leap.”
Apple, IBM, Microsoft, Netflix, and Yahoo are among the other high-tech companies that have either started internal AI initiatives or purchased deep-learning companies, whether to quickly add AI capability to their current service offerings or to lay the groundwork for doing so in the future.
Chinese tech giant Baidu, often called the Google of China, has also begun investing heavily in deep learning. Baidu recently lured Andrew Ng away from Google to head up its new AI team in Silicon Valley; Ng is the man who led the effort that taught Google Brain there were a lot of cat pictures on the Internet.
Ng, who is also an associate professor of computer science at Stanford and the director of the Stanford Artificial Intelligence Lab, is optimistic about the promise of deep learning because of its scalability. Unlike the human brain, which has limited storage capacity and tends to perform poorly when overloaded with information, deep-learning systems improve under those conditions.
“Deep learning happens to have the property that if you feed it more data it gets better and better,” says Ng. “Deep-learning algorithms aren’t the only ones like that, but they’re arguably the best — certainly the easiest. That’s why it has huge promise for the future.”
Futurist and inventor Ray Kurzweil recently joined Google as Director of Engineering to lead its machine-learning efforts. In his most recent book, How to Create a Mind: The Secret of Human Thought Revealed, Kurzweil sums up the gravity of the effort underway by computer scientists across the globe to further deep learning and build a better thinking machine.
“There is now a grand project underway involving many thousands of scientists and engineers working to understand the best example we have of an intelligent process: the human brain. It is arguably the most important effort in the history of the human-machine civilization.”
And when we build a deep-learning machine that, at first, rivals the capability of the human brain and then eventually surpasses it, we will arguably have created an intelligence superior to our own.
I can’t help but wonder what that machine will be capable of “thinking” about. Will it too seek to build a better version of itself?
Scott Dewing is a technologist, teacher, and writer. He lives with his family on a low-tech farm in the State of Jefferson. Archives of his columns and other postings can be found on his blog at: blog.insidethebox.org