We’ve hit a tipping point for deep learning. Deep learning, a form of machine learning, refers to a system’s ability to improve how it processes information based on past data, rather than blindly following a programmed scenario. What’s interesting is why deep learning is taking over now. Just five years ago, VCs might not have been familiar with the term. Today, any VC could easily be skeptical of a startup that doesn’t rely on it.
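To make that distinction concrete, here is a minimal sketch in plain Python. The spam-filter scenario, the numbers, and the threshold-picking helper are all made up for illustration; real deep learning systems use neural networks to learn far richer patterns, but the contrast between a fixed rule and behavior derived from past data is the same.

```python
# A programmed scenario: a fixed, hand-written rule.
def rule_based_filter(num_links: int) -> bool:
    return num_links > 3  # threshold chosen by a programmer, forever

# A (very) simple learned approach: pick the threshold that best
# separates past examples, and re-derive it whenever new data arrives.
# Purely illustrative; real deep learning uses neural networks.
examples = [(0, False), (1, False), (2, False),
            (5, True), (8, True), (10, True)]  # (links, was_spam)

def learn_threshold(data):
    def errors(t):
        # Count how many past examples this threshold misclassifies.
        return sum((links > t) != label for links, label in data)
    return min(range(0, 11), key=errors)

threshold = learn_threshold(examples)

def learned_filter(num_links: int) -> bool:
    return num_links > threshold  # behavior derived from past data

print(rule_based_filter(4), learned_filter(4))
```

The hand-written rule never changes, no matter what it sees; the learned threshold shifts automatically as new examples are added to the data.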
Why It’s Suddenly Popular
The ideas behind deep learning have been around since the 1950s. What has not been around nearly that long, however, is the data a deep learning system needs in order to improve. From a recent Fortune article on the topic:
“‘This is deep learning’s Cambrian explosion,’ says Frank Chen, a partner at the Andreessen Horowitz venture capital firm, alluding to the geological era when most higher animal species suddenly burst onto the scene.
That dramatic progress has sparked a burst of activity. Equity funding of AI-focused startups reached an all-time high last quarter of more than $1 billion, according to the CB Insights research firm. There were 121 funding rounds for such startups in the second quarter of 2016, compared with 21 in the equivalent quarter of 2011, that group says. More than $7.5 billion in total investments have been made during that stretch—with more than $6 billion of that coming since 2014.”
Teams have been intentionally leveraging the power of the internet to build their own databases for use in deep learning experiments. The ImageNet project, launched in 2007, is a collaboration of over 50,000 people in 180 countries. By 2009, they had created the largest image database in the world, with about 15 million labeled and classified images spread across 22,000 categories.
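For a sense of what millions of “labeled and classified” images look like to a learning system, here is a hedged sketch: datasets in the ImageNet style are often laid out as one folder per category, so every image carries its label in its path. The directory layout and helper below are illustrative, not ImageNet’s actual distribution format.

```python
from pathlib import Path

# Hypothetical local copy of an ImageNet-style dataset:
# one subdirectory per category, image files inside.
#   dataset/
#     goldfish/img001.jpg
#     goldfish/img002.jpg
#     tabby_cat/img001.jpg
DATASET_ROOT = Path("dataset")  # illustrative path, not real data

def labeled_images(root: Path):
    """Yield (image_path, label) pairs; the label is the folder name."""
    for class_dir in sorted(root.iterdir()):
        if class_dir.is_dir():
            for image_path in class_dir.glob("*.jpg"):
                yield image_path, class_dir.name

# Each (image, label) pair is one training example; scale this up to
# ~15 million images across ~22,000 categories and you have ImageNet.
for path, label in labeled_images(DATASET_ROOT):
    print(path, "->", label)
```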
The Rise of Deep Learning
Today, news broke that Google Translate will rely on deep learning going forward:
“Several years ago, Google began experimenting with a deep-learning technique, called neural machine translation, that can translate entire sentences without breaking them down into smaller components. That approach eventually reduced the number of Google Translate errors by at least 60 percent on many language pairs in comparison with the older, phrase-based approach.
‘We believe we are the first using [neural machine translation] in a large-scale production environment,’ says Mike Schuster, research scientist at Google.”
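The key idea in that quote, reading an entire sentence before producing any output, is the encoder-decoder pattern behind neural machine translation. Below is a minimal sketch in PyTorch; the model, vocabulary sizes, and random token IDs are illustrative stand-ins, not Google’s production system.

```python
import torch
import torch.nn as nn

# Minimal sketch of the encoder-decoder idea behind neural machine
# translation: the encoder reads the WHOLE source sentence into a
# hidden state, and the decoder generates the target sentence from it.
SRC_VOCAB, TGT_VOCAB, HIDDEN = 1000, 1000, 256  # illustrative sizes

class TinySeq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_embed = nn.Embedding(SRC_VOCAB, HIDDEN)
        self.encoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.tgt_embed = nn.Embedding(TGT_VOCAB, HIDDEN)
        self.decoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, TGT_VOCAB)

    def forward(self, src_ids, tgt_ids):
        # Encode the entire source sentence into one hidden state,
        # rather than translating it phrase by phrase.
        _, sentence_state = self.encoder(self.src_embed(src_ids))
        # Decode the target sentence conditioned on that state.
        dec_out, _ = self.decoder(self.tgt_embed(tgt_ids), sentence_state)
        return self.out(dec_out)  # per-position scores over target vocab

model = TinySeq2Seq()
src = torch.randint(0, SRC_VOCAB, (1, 7))  # one 7-token source sentence
tgt = torch.randint(0, TGT_VOCAB, (1, 9))  # one 9-token target sentence
print(model(src, tgt).shape)               # torch.Size([1, 9, 1000])
```

The decoder here is fed the target tokens directly (a training setup known as teacher forcing); a real system would train on millions of sentence pairs and decode new sentences with a search procedure such as beam search.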
The amount of data available continues to grow at an increasing rate. According to a 2014 Internet Trends report, an average of 1.8 billion digital images were uploaded every day, a number that has only risen since. Every CCTV camera, every smartphone, every Facebook user, every Instagram account continues to pump out additional data about everyday events that might one day be fed into a constantly improving deep learning algorithm. Get used to it.