Attention Mechanisms

Scaled Dot-Product Attention

Introduced by Vaswani et al. in Attention Is All You Need

Scaled dot-product attention is an attention mechanism in which the dot products between queries and keys are scaled down by $\sqrt{d_k}$. Formally, given a query $Q$, a key $K$ and a value $V$, the attention is calculated as:

$$ {\text{Attention}}(Q, K, V) = \text{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V $$
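
A minimal, unbatched NumPy sketch of this computation (the function name and the shape conventions are illustrative, not taken from the paper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V for a single head, no batching.

    Assumed shapes: Q is (n_q, d_k), K is (n_k, d_k), V is (n_k, d_v).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n_q, n_k)
    # Numerically stable softmax over the key dimension.
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V                              # (n_q, d_v)

# Hypothetical usage with arbitrary dimensions:
Q = np.random.randn(4, 64)
K = np.random.randn(6, 64)
V = np.random.randn(6, 32)
out = scaled_dot_product_attention(Q, K, V)         # shape (4, 32)
```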

If we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_i k_i$, has mean $0$ and variance $d_k$. Since large-magnitude scores push the softmax into regions where its gradients are extremely small, we would prefer these values to have variance $1$, so we divide by $\sqrt{d_k}$.
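
This scaling argument is easy to check empirically. In the following sketch (the choice $d_k = 512$ and the sample count are arbitrary), the unscaled dot products have variance close to $d_k$, while the scaled ones have variance close to $1$:

```python
import numpy as np

rng = np.random.default_rng(0)
d_k = 512
# Components drawn i.i.d. with mean 0 and variance 1.
q = rng.standard_normal((100_000, d_k))
k = rng.standard_normal((100_000, d_k))

dots = (q * k).sum(axis=-1)                 # 100,000 dot products
print(dots.var())                           # ~= d_k (about 512)
print((dots / np.sqrt(d_k)).var())          # ~= 1 after scaling
```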

Source: Attention Is All You Need

Tasks

Task                   Papers  Share
Language Modelling     47      6.65%
Retrieval              33      4.67%
Question Answering     30      4.24%
Large Language Model   26      3.68%
Decoder                21      2.97%
Semantic Segmentation  21      2.97%
Image Classification   13      1.84%
Text Generation        13      1.84%
Sentence               11      1.56%

Components

Component  Type
Softmax    Output Functions
