Choose the Best Answer
The main bottleneck in training RNNs is the sequential generation of which of the following?
A. hidden states
B. input sequences
C. activation functions
D. output layers
Understanding the Answer
Let's break down why this is correct.
Answer
The bottleneck in training RNNs is the sequential generation of hidden states, because each hidden state depends on the previous one. This dependency forces the network to process one time step after another rather than computing many steps simultaneously: the training loop must finish the previous state before it can start the next, which slows training and limits how much of the work a GPU can run in parallel. For example, if a sequence has ten steps, the network must compute step 1, then step 2, and so on, instead of computing all ten steps at once. This sequential nature is why RNNs are less efficient to train than feed-forward networks.
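To make the dependency concrete, here is a minimal sketch of the recurrence in plain NumPy, with made-up sizes (ten time steps, hypothetical input and hidden dimensions). The loop cannot be parallelized over t, because computing h_t requires the finished value of h_{t-1}.

import numpy as np

rng = np.random.default_rng(0)
T, d_in, d_h = 10, 4, 8                    # made-up sizes: 10 steps, 4-dim input, 8-dim hidden state
x = rng.normal(size=(T, d_in))             # the input sequence, available up front
W_x = rng.normal(size=(d_in, d_h))         # input-to-hidden weights
W_h = rng.normal(size=(d_h, d_h))          # hidden-to-hidden (recurrent) weights
b = np.zeros(d_h)

h = np.zeros(d_h)                          # initial hidden state h_0
for t in range(T):                         # inherently sequential: step t cannot start early
    h = np.tanh(x[t] @ W_x + h @ W_h + b)  # h_t depends on the finished h_{t-1}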
Detailed Explanation
The hidden state is produced one step at a time, and each step must wait for the previous one to finish. The other options are incorrect: input sequences are the data the network reads, and they are available up front rather than forcing the model to wait; activation functions add non-linearity, but they are applied independently at each step and create no chain of dependencies; output layers are computed from the hidden states and add no sequential dependency of their own.
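For contrast, here is a sketch under the same assumptions (plain NumPy, hypothetical sizes) of a feed-forward layer applied to the same sequence. Because no position depends on another, all ten time steps collapse into a single matrix multiply that a GPU can execute in parallel.

import numpy as np

rng = np.random.default_rng(1)
T, d_in, d_out = 10, 4, 8          # same made-up sizes as above
x = rng.normal(size=(T, d_in))     # the whole sequence at once
W = rng.normal(size=(d_in, d_out)) # one shared weight matrix

y = np.tanh(x @ W)                 # all T steps computed in one parallel operation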
Key Concepts
Recurrent Neural Networks
Hidden States
Sequence Modeling
Topic
Recurrent Neural Networks (RNN)
Difficulty
Medium
Cognitive Level
Understand