Question

Which component of a recurrent neural network (RNN) must be computed sequentially, one time step at a time, creating the main bottleneck during training?

Choose the Best Answer
A. hidden states
B. input sequences
C. activation functions
D. output layers
Understanding the Answer
Let's break down why hidden states (option A) is the correct answer.
Answer
The bottleneck in training RNNs is the sequential generation of hidden states: each hidden state depends on the previous one. This dependency forces the network to process one time step after another, preventing many steps from being computed simultaneously. As a result, the network cannot fully exploit parallel hardware such as GPUs, which slows training and limits scalability. For example, in a language model processing a sentence, the hidden state for the third word cannot be computed until the second word's state is finished, so the per-word computations cannot run in parallel. This sequential nature is what makes RNNs slower to train than feed-forward architectures, which can process all positions of a sequence at once.
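To show exactly where the serial dependency lives, here is a minimal vanilla-RNN forward pass in NumPy (the names rnn_forward, W_xh, W_hh, and the tanh nonlinearity are illustrative assumptions for this sketch, not any particular library's API). The for loop cannot be parallelized across time steps because each iteration consumes the hidden state produced by the previous one.

```python
import numpy as np

def rnn_forward(X, W_xh, W_hh, b):
    """Run a vanilla RNN over a sequence X of shape (T, input_dim).

    The loop below is the training bottleneck: h at step t needs
    h at step t-1, so the T steps cannot run in parallel.
    """
    T = X.shape[0]
    h = np.zeros(W_hh.shape[0])
    states = []
    for t in range(T):  # inherently sequential
        h = np.tanh(X[t] @ W_xh + h @ W_hh + b)
        states.append(h)
    return np.stack(states)

# Example: a 3-word "sentence" with 5-dim embeddings, 4-dim hidden state
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 5))
W_xh = rng.normal(size=(5, 4))
W_hh = rng.normal(size=(4, 4))
b = np.zeros(4)
print(rnn_forward(X, W_xh, W_hh, b).shape)  # (3, 4)
```

Note that each loop iteration is cheap; the cost is that the T iterations must run one after another, no matter how many parallel cores are available.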
Detailed Explanation
The correct answer is hidden states: each hidden state depends on the previous hidden state, so they must be generated one at a time. The other options are incorrect. Input sequences are simply the data fed into the network, and they can be loaded and embedded for all time steps in parallel. Activation functions are applied element-wise and can be computed in parallel across an entire tensor. Output layers can likewise be applied to every hidden state at once, once those states have been produced.
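To make the contrast concrete, here is a short NumPy sketch (reusing the hypothetical X, W_xh, W_hh, and tanh setup from the example above): the input projection and the element-wise activation cover the whole sequence in one vectorized call, while the recurrent term cannot be vectorized over time.

```python
import numpy as np

rng = np.random.default_rng(1)
T, d_in, d_h = 3, 5, 4            # sequence length, input dim, hidden dim
X = rng.normal(size=(T, d_in))     # the whole input sequence at once
W_xh = rng.normal(size=(d_in, d_h))
W_hh = rng.normal(size=(d_h, d_h))

# Parallel-friendly: one matmul projects ALL time steps at once,
# and the element-wise tanh covers the whole tensor in one call.
projected = X @ W_xh               # shape (T, d_h), no per-step loop
activated = np.tanh(projected)     # element-wise, fully parallelizable

# The sequential part: the recurrent term needs h[t-1] before h[t],
# so it cannot be rewritten as a single vectorized operation.
h = np.zeros(d_h)
for t in range(T):
    h = np.tanh(projected[t] + h @ W_hh)
```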
Key Concepts
Recurrent Neural Networks
Hidden States
Sequence Modeling
Topic
Recurrent Neural Networks (RNN)
Difficulty
Medium
Cognitive Level
Understand