Deep learning, an advanced artificial intelligence technique, has become increasingly popular in the past few years, thanks to abundant data and increased computing power. It’s the main technology behind many of the applications we use every day, including online language translation and automated face-tagging in social media.
This technology has also proved useful in healthcare: Earlier this year, computer scientists at the Massachusetts Institute of Technology (MIT) used deep learning to create a new computer program for detecting breast cancer.
Classic models required engineers to manually define the rules and logic for detecting cancer. For this new model, the scientists instead gave a deep-learning algorithm 90,000 full-resolution mammogram scans from 60,000 patients and let it find the common patterns between scans of patients who went on to develop breast cancer and those who didn't. The resulting model can predict breast cancer up to five years in advance, a considerable improvement over previous risk-prediction models.
What Exactly Is Machine Learning?
Deep learning is a subset of machine learning, a branch of artificial intelligence that configures computers to perform tasks through experience. Unlike classic, rule-based AI systems, machine-learning algorithms develop their behavior by processing annotated examples, a process called "training."
For instance, to create a fraud-detection program, you train a machine-learning algorithm with a list of bank transactions and their eventual outcome (legitimate or fraudulent). The machine learning model examines the examples and develops a statistical representation of common characteristics between legitimate and fraudulent transactions. After that, when you provide the algorithm with the data of a new bank transaction, it will classify it as legitimate or fraudulent based on the patterns it has gleaned from the training examples.
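The learn-from-labeled-examples idea can be shown in miniature. The sketch below uses a deliberately simple stand-in technique (a nearest-centroid classifier) with made-up transaction data, not the approach a real fraud-detection system would use; it only illustrates how a model distills labeled examples into a statistical summary and then applies it to new data.

```python
import math

# Toy labeled "transactions": (amount in $1000s, foreign-country flag).
# These numbers are invented purely for illustration.
legit = [(0.02, 0), (0.05, 0), (0.10, 0)]
fraud = [(1.50, 1), (2.00, 1), (0.90, 1)]

def centroid(points):
    """Average each feature: a crude 'statistical representation' of a class."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

legit_center = centroid(legit)
fraud_center = centroid(fraud)

def classify_transaction(tx):
    """Label a new transaction by whichever class summary it sits closer to."""
    if math.dist(tx, fraud_center) < math.dist(tx, legit_center):
        return "fraudulent"
    return "legitimate"
```

A new large foreign transaction such as `(1.8, 1)` lands near the fraud centroid, while a small domestic one such as `(0.03, 0)` lands near the legitimate one, so each is classified accordingly.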
As a rule of thumb, the more quality data you provide, the more accurate a machine-learning algorithm becomes at performing its tasks.
Machine learning is especially useful in solving problems where the rules are not well defined and can’t be coded into distinct commands. Different types of algorithms excel at different tasks.
Deep Learning and Neural Networks
While classic machine-learning algorithms solve many problems that rule-based programs struggle with, they are poor at dealing with unstructured data such as images, video, sound files, and free-form text.
For instance, creating a breast-cancer-prediction model with classic machine-learning approaches would require the efforts of dozens of domain experts, computer programmers, and mathematicians, according to AI researcher and data scientist Jeremy Howard. The researchers would have to do a lot of feature engineering, an arduous process in which engineers program the computer to find known patterns in X-ray and MRI scans. Only then can they apply machine-learning algorithms to the extracted features. Creating such an AI model takes years.
Deep-learning algorithms solve the same problem using deep neural networks, a type of software architecture inspired by the human brain (though they work very differently from biological neurons). Neural networks are layers upon layers of variables that adjust themselves to the properties of the data they're trained on and become capable of doing tasks such as classifying images and converting speech to text.
Neural networks are especially good at independently finding common patterns in unstructured data. For example, when you train a deep neural network on images of different objects, it finds ways to extract features from those images. Each layer of the neural network detects specific features such as edges, corners, faces, eyeballs, etc.
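To make the "layers of variables" idea concrete, here is a tiny hand-built network that computes XOR (output 1 when exactly one input is 1). The weights and thresholds below are set by hand for clarity; in a real deep-learning system they would be learned automatically from training data. Note how each hidden unit acts as a feature detector, and the output layer combines those features.

```python
def step(x):
    """A crude activation function: the unit 'fires' above a threshold."""
    return 1 if x > 0 else 0

def tiny_network(x1, x2):
    # Hidden layer: each unit detects one simple feature of the input.
    at_least_one_on = step(x1 + x2 - 0.5)  # fires if either input is 1
    both_on = step(x1 + x2 - 1.5)          # fires only if both inputs are 1
    # Output layer: combine the hidden features into the final decision.
    return step(at_least_one_on - both_on - 0.5)
```

No single unit computes XOR on its own; the answer emerges from stacking simple feature detectors, which is, in caricature, what the layers of a deep network do at much larger scale.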
By using neural networks, deep-learning algorithms obviate the need for feature engineering. In the case of MIT’s breast-cancer-prediction model, thanks to deep learning, the project required much less effort from computer scientists and domain experts, and it took less time to develop. Also, the model was able to find features and patterns in mammogram scans that human analysts missed.
Neural networks have existed since the 1950s (at least conceptually). But until recently, the AI community largely dismissed them because they required vast amounts of data and computing power. In the past few years, the availability and affordability of storage, data, and computing resources have pushed neural networks to the forefront of AI innovation.
What Is Deep Learning Used For?
There are several domains where deep learning is helping computers tackle previously unsolvable problems.
Computer vision: Computer vision is the science of using software to make sense of the content of images and video. This is one of the areas where deep learning has made a lot of progress. Aside from breast cancer, deep learning image processing algorithms can detect other types of cancer and help diagnose other diseases.
But deep learning is also ingrained in many of the applications you use every day. Apple's Face ID uses deep learning, and Google Photos relies on it for features such as searching for objects and scenes as well as correcting images. Facebook uses deep learning to automatically tag people in the photos you upload.
Deep learning also helps social media companies automatically identify and block questionable content, such as violence and nudity. And finally, deep learning is playing a very important role in enabling self-driving cars to make sense of their surroundings.
Voice and speech recognition: When you utter a command to your Amazon Echo smart speaker or your Google Assistant, deep-learning algorithms convert your voice to text commands. Several online applications use deep learning to transcribe audio and video files. Google recently released an on-device, real-time speech-transcription feature for its Gboard smartphone keyboard that uses deep learning to type as you speak.
Natural language processing (NLP) and generation (NLG): Natural language processing, the science of extracting the meaning of unstructured text, has historically been a pain point for classic software. Defining all the different nuances and hidden meanings of written language with computer rules is virtually impossible. But neural networks trained on large bodies of text can accurately perform many NLP tasks.
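The contrast with rule-writing is easiest to see in a toy text classifier. The sketch below uses Naive Bayes, a simple statistical method that predates deep learning, on an invented four-sentence corpus; real NLP models train on millions of documents and use neural networks, but the learn-from-labeled-text principle is the same: no one writes a rule saying "great" is positive, the model infers it from the examples.

```python
import math
from collections import Counter

# A toy labeled corpus -- entirely made up for illustration.
train = [
    ("great movie loved it", "positive"),
    ("what a great story", "positive"),
    ("terrible movie hated it", "negative"),
    ("what a boring story", "negative"),
]

# Count how often each word appears under each label.
counts = {"positive": Counter(), "negative": Counter()}
for text, label in train:
    counts[label].update(text.split())
vocab = {w for c in counts.values() for w in c}

def predict_label(text):
    """Naive Bayes: pick the label whose word statistics best explain the text."""
    best_label, best_score = None, -math.inf
    for label, c in counts.items():
        total = sum(c.values())
        # Sum log-probabilities, with add-one (Laplace) smoothing for unseen words.
        score = sum(
            math.log((c[w] + 1) / (total + len(vocab))) for w in text.split()
        )
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

Given "a great movie", the positive word statistics explain the text better, so the classifier returns "positive", even though "a" and "movie" appear under both labels.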
Google’s translation service saw a sudden boost in performance when the company switched to deep learning. Smart speakers use deep-learning NLP to understand the various nuances of commands, such as the different ways you can ask for weather or directions.
Deep learning is also very efficient at generating meaningful text, also called natural language generation. Gmail’s Smart Reply and Smart Compose use deep learning to bring up relevant responses to your emails and suggestions to complete your sentences. A text-generation model developed by OpenAI earlier this year created long excerpts of coherent text.
The Limits of Deep Learning
Despite all its benefits, deep learning also has some shortcomings.
Data dependency: In general, deep learning algorithms require vast amounts of training data to perform their tasks accurately. Unfortunately, for many problems, there’s not enough quality training data to create deep learning models.
Explainability: Neural networks develop their behavior in extremely complicated ways—even their creators struggle to understand their actions. Lack of interpretability makes it extremely difficult to troubleshoot errors and fix mistakes in deep-learning algorithms.
Algorithmic bias: Deep-learning algorithms are only as good as the data they're trained on. The problem is that training data often contains hidden or evident biases, and the algorithms inherit these biases. For instance, a facial-recognition algorithm trained mostly on pictures of white people will perform less accurately on non-white people.
Lack of generalization: Deep-learning algorithms are good at performing focused tasks but poor at generalizing their knowledge. Unlike humans, a deep learning model trained to play StarCraft won't be able to play a similar game, say, WarCraft. Deep learning is also poor at handling data that deviates from its training examples, known as "edge cases." This can become dangerous in situations such as self-driving cars, where mistakes can have fatal consequences.
The Future of Deep Learning
Earlier this year, the pioneers of deep learning were awarded the Turing Award, the computer science equivalent of the Nobel Prize. But the work on deep learning and neural networks is far from over. Various efforts are in the works to improve deep learning.
Source: What Is Deep Learning?