Learning Path
Question & Answer
Choose the Best Answer
Activation functions become non-linear, making optimization harder
The network learns redundant features that do not contribute to accuracy
The gradient can vanish or explode during backpropagation
The increased parameters lead to a higher training loss
Understanding the Answer
Let's break down why this is correct
When a network is very deep, the gradient signal is multiplied by a layer-by-layer factor many times during backpropagation. If those factors are consistently smaller than one, the gradient shrinks toward zero (vanishes); if they are consistently larger than one, it grows without bound (explodes). Either way, the early layers become very hard to train. The other options are incorrect: activation functions are non-linear by design, and that non-linearity is what gives the network its expressive power rather than what breaks optimization; deeper networks can learn richer, more abstract features, not merely redundant ones; and extra parameters by themselves do not force the training loss to rise.
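To see the vanishing effect concretely, here is a minimal Python sketch (an illustration added for this explanation, not part of the original question) that backpropagates through a chain of sigmoid layers. Because the sigmoid's derivative never exceeds 0.25, the accumulated gradient shrinks roughly like 0.25 raised to the power of the depth.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

depth = 50          # number of stacked sigmoid "layers" (weights fixed at 1 for simplicity)
x = 0.5

# Forward pass: record each layer's output.
activations = [x]
for _ in range(depth):
    activations.append(sigmoid(activations[-1]))

# Backward pass: the chain rule multiplies in one sigmoid derivative per layer.
grad = 1.0
for a in reversed(activations[1:]):
    grad *= a * (1.0 - a)   # sigmoid'(z) = s * (1 - s), never larger than 0.25

print(f"d(output)/d(input) after {depth} layers: {grad:.3e}")
```

Running this prints a gradient many orders of magnitude below 1, far too small to drive meaningful weight updates in the earliest layers.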
Key Concepts
Degradation Problem in Deep Networks
Difficulty: Hard
Cognitive level: Understand
Deep Dive: Degradation Problem in Deep Networks
Master the fundamentals
Definition
The degradation problem in deep networks is the phenomenon in which, as network depth increases, accuracy first saturates and then degrades rapidly, and the drop is not caused by overfitting: the deeper model also shows higher training error. This highlights the difficulty of optimizing very deep models and motivates architectural remedies, such as residual connections, that prevent the loss in performance.
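The best-known remedy, introduced with ResNets, is to add identity skip connections so that each group of layers only has to learn a residual on top of its input. Below is a minimal, illustrative PyTorch sketch of such a residual block; the class name and layer sizes are assumptions made for this example, not part of the topic material.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: the stacked layers learn only the change
    (residual) on top of the identity, so a very deep stack can at worst
    fall back to copying its input instead of degrading."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # skip connection: gradient also flows through "+ x" unattenuated

# Usage: a block that keeps the channel count and spatial size unchanged.
block = ResidualBlock(channels=64)
y = block(torch.randn(1, 64, 32, 32))
```

Because the skip path is an identity, gradients reach earlier layers through the `+ x` term without being repeatedly attenuated, which is why residual networks can be trained at depths where plain networks degrade.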