Question & Answer
Choose the Best Answer
What role does positional encoding play in the encoder and decoder of the Transformer architecture?
A. Positional encoding helps to identify the sequence of data inputs for the encoder, which then directly sends its output to the decoder.
B. The encoder processes the data without needing positional encoding, while the decoder only uses it to predict future outputs.
C. Both the encoder and decoder use positional encoding to retain the order of data, allowing for more accurate context understanding during processing.
D. Positional encoding is only relevant in the decoder phase and has no role in the encoder structure.
Understanding the Answer
Let's break down why this is correct.
The correct answer is C: positional encoding gives the model a sense of order within each sequence. Because the Transformer uses no recurrence or convolution, both the encoder and the decoder rely on positional encoding to know where each token sits. The other options are incorrect. Option A suggests that only the encoder needs order information, but the decoder also requires it to make sense of the outputs it generates. Option B claims the encoder can work without positional clues, yet the encoder still needs to understand the sequence of its inputs. Option D limits positional encoding to the decoder, which likewise ignores the encoder's need for ordered inputs.
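To make this concrete, here is a minimal sketch of the sinusoidal positional encodings used in the original Transformer, added to token embeddings before both the encoder and the decoder. The function name, sequence length, and model dimension below are illustrative assumptions, not taken from the original page.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Return a (seq_len, d_model) matrix of sinusoidal position encodings."""
    positions = np.arange(seq_len)[:, np.newaxis]      # (seq_len, 1)
    dims = np.arange(d_model)[np.newaxis, :]           # (1, d_model)
    # Each pair of dimensions shares a frequency: 1 / 10000^(2i / d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    encoding = np.zeros((seq_len, d_model))
    encoding[:, 0::2] = np.sin(angles[:, 0::2])        # even dimensions use sine
    encoding[:, 1::2] = np.cos(angles[:, 1::2])        # odd dimensions use cosine
    return encoding

# Both the encoder and the decoder add these encodings to their token
# embeddings, so token order is preserved even though attention itself
# is permutation-invariant.
embeddings = np.random.randn(10, 64)                   # 10 tokens, d_model = 64
encoder_input = embeddings + sinusoidal_positional_encoding(10, 64)
```

Because the attention layers treat their inputs as an unordered set, this additive position signal is the only way either stack can distinguish "the cat sat" from "sat the cat".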
Key Concepts
Transformer Architecture
Difficulty: Medium
Cognitive level: Understand
Deep Dive: Transformer Architecture
Master the fundamentals
Definition
The Transformer is a network architecture based solely on attention mechanisms, eliminating the need for recurrent or convolutional layers. It connects encoder and decoder through attention, enabling parallelization and faster training. The model has shown superior performance in machine translation tasks.
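As an illustration of the attention mechanism this definition refers to, here is a minimal scaled dot-product attention in NumPy. The shapes and variable names are assumptions for the example, not part of the original text.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the keys
    return weights @ V                                 # weighted sum of values

# Toy example: 4 query positions attending over 6 key/value positions,
# as in the encoder-decoder attention that connects the two stacks.
Q = np.random.randn(4, 8)
K = np.random.randn(6, 8)
V = np.random.randn(6, 8)
out = scaled_dot_product_attention(Q, K, V)            # shape (4, 8)
```

Because every position attends to every other position in a single matrix operation, there is no sequential recurrence to unroll, which is what allows the Transformer to be trained in parallel.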