Deep learning vs. machine learning: Explained

Both are powerful forms of AI, but one’s more mysterious than the other.

Well, you clicked this, so obviously you’re interested in some of the finer nuances of artificial intelligence. Little wonder; it’s popping up everywhere, taking on applications as far-ranging as trying to catch asymptomatic COVID infections via cough, creating maps of wildfires faster, and beating up on esports pros.

It also listens when you ask Alexa or summon Siri, and unlocks your phone with a glance.

But artificial intelligence is an umbrella term, and when we start moving down the specificity chain, things can get confusing — especially when the names are so similar, e.g. deep learning vs. machine learning.

Deep Learning vs. Machine Learning

Let’s make that distinction between deep learning and machine learning, because they’re pretty closely related. Machine learning is the broader category here, so let’s define that first.

Machine learning is a field of AI wherein the program “learns” via data. It existed on paper in the 1950s and in rudimentary forms by the 1990s, but only recently has the computing power it needs to really shine become available.

That learning data can come from a large set labeled by humans, called a ground truth, or it can be generated by the AI itself.

For example, to train a machine learning algorithm to know what is and isn’t a cat (you knew the cat was coming), you could feed it an immense collection of images, labeled by humans as cats or not, to act as the ground truth. By churning through it all, the AI learns what makes something a cat, and can then identify cats on its own.
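To make that concrete, here’s a minimal sketch in Python using scikit-learn. The feature vectors and labels below are purely illustrative stand-ins for real, human-labeled cat photos; a real system would extract features from thousands of images.

```python
# Minimal sketch: learning from a human-labeled "ground truth".
# The numbers below are made-up features standing in for real images.
from sklearn.linear_model import LogisticRegression

# Hypothetical ground truth: each row describes one image,
# each label says whether a human tagged it as a cat (1) or not (0).
features = [
    [0.9, 0.8, 0.1],  # pointy ears, whiskers, little barking -> cat
    [0.8, 0.9, 0.0],
    [0.1, 0.2, 0.9],  # floppy ears, few whiskers, lots of barking -> not cat
    [0.2, 0.1, 0.8],
]
labels = [1, 1, 0, 0]

# "Churning through" the labeled examples is the training step.
model = LogisticRegression()
model.fit(features, labels)

# Once trained, the model can label an image it has never seen before.
print(model.predict([[0.85, 0.75, 0.05]]))  # -> [1], i.e. "cat"
```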

The key difference between deep learning and machine learning is that deep learning is a specific form of machine learning powered by what are called neural nets.

As their name suggests, neural nets are inspired by the human brain. Between your ears, neurons work in concert; a deep learning algorithm does essentially the same thing. It uses a neural network with multiple layers of artificial neurons to process information, delivering, from deep within this complicated system, the output we ask it for.
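As a rough illustration of what “multiple layers” means in practice, here is a minimal sketch of a small network in Python using PyTorch. The layer sizes are arbitrary and chosen only for demonstration; real deep learning models stack many more layers with millions of parameters.

```python
# Minimal sketch of a "deep" network: data flows through several
# stacked layers before producing an output. Layer sizes are arbitrary.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 32),  # first hidden layer
    nn.ReLU(),
    nn.Linear(32, 16),  # second hidden layer
    nn.ReLU(),
    nn.Linear(16, 2),   # output layer, e.g. "cat" vs. "not cat"
)

# A fake input (one example with 64 features) passes through every layer.
# The intermediate activations are the part we struggle to interpret.
x = torch.randn(1, 64)
print(model(x))
```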

Take the computer program AlphaGo. By playing the strategy board game Go against itself countless times, AlphaGo developed its own unique playing style. Its technique was so unsettling and alien that during a game against Lee Sedol, one of the best Go players in the world, it made a move so discombobulating that Sedol had to leave the room. When he returned, he took another 15 minutes to think of his next step.

He has since announced his retirement. “Even if I become the number one, there is an entity that cannot be defeated,” Sedol told Yonhap News Agency.

Notice how Sedol called AlphaGo an “entity”? That’s because it didn’t play like a run-of-the-mill Go program, or even a typical AI. It made itself into something … else.

Deep learning systems like AlphaGo are, well, deep. And complex. They create programs we really do call entities because they take on a “thinking” pattern so convoluted that we don’t know how they arrive at their output. In fact, deep learning is often referred to as a “black box.”

The Black Box Problem

Since deep learning neural nets are so complex, they can actually become too complex to comprehend; we know what we put into the AI, and we know what it gave us, but we don’t know how it arrived at that output in between. That’s the black box.

This may not seem too concerning when the AI in question is recognizing your face to open your iPhone, but the stakes are considerably higher when it’s recognizing your face for the police. Or when it is trying to determine a medical diagnosis. Or when it is keeping autonomous vehicles safely on the road. While not necessarily dangerous, black boxes pose a problem in that we don’t know how these entities are arriving at their decisions, and if the medical diagnosis is wrong or the autonomous vehicle goes off the road, we may not know exactly why.

Does this mean we shouldn’t use black boxes? Not necessarily. Deep learning experts are divided on how to handle the black box.

Some researchers, like Auburn University computer scientist Anh Nguyen, want to crack open these boxes and figure out what makes deep learning tick. Meanwhile, Duke University computer scientist Cynthia Rudin thinks we should focus on building AI that doesn’t have a black box problem in the first place, like more traditional algorithms. Still other computer scientists, like the University of Toronto’s Geoff Hinton and Facebook’s Yann LeCun, think we shouldn’t be worried about black boxes at all. Humans, after all, are black boxes as well.

It’s a problem we’ll have to wrestle with, because it can’t really be avoided; more complex problems require more complex neural nets, which means more black boxes. In deep learning vs. machine learning, the former is going to wipe the floor with the latter when problems get tough, and it uses that black box to do so.

As Nguyen told me, there’s no free lunch when it comes to AI.
