Why Self-Attention?

A Targeted Evaluation of Neural Machine Translation Architectures

Gongbo Tang1∗ Mathias Müller2 Annette Rios2 Rico Sennrich2,3


1 Department of Linguistics and Philology, Uppsala University
2 Institute of Computational Linguistics, University of Zurich
3 School of Informatics, University of Edinburgh

Abstract

Recently, non-recurrent architectures (convolutional, self-attentional) have outperformed RNNs in neural machine translation. CNNs and self-attentional networks can connect distant words via shorter network paths than RNNs, and it has been speculated that this improves their ability to model long-range dependencies. However, this theoretical argument has not been tested empirically, nor have alternative explanations for their strong performance been explored in depth. We hypothesize that the strong performance of CNNs and self-attentional networks could also be due to their ability to extract semantic features from the source text, and we evaluate RNNs, CNNs and self-attention networks on two tasks: subject-verb agreement (where capturing long-range dependencies is required) and word sense disambiguation (where semantic feature extraction is required). Our experimental results show that: 1) self-attentional networks and CNNs do not outperform RNNs in modeling subject-verb agreement over long distances; 2) self-attentional networks perform distinctly better than RNNs and CNNs on word sense disambiguation.

1 Introduction

Different architectures have been shown to be effective for neural machine translation (NMT), ranging from recurrent architectures (Kalchbrenner and Blunsom, 2013; Bahdanau et al., 2015; Sutskever et al., 2014; Luong et al., 2015) to convolutional (Kalchbrenner and Blunsom, 2013; Gehring et al., 2017) and, most recently, fully self-attentional (Transformer) models (Vaswani et al., 2017). Since comparisons (Gehring et al., 2017; Vaswani et al., 2017; Hieber et al., 2017) are mainly carried out via BLEU (Papineni et al., 2002), it is inherently difficult to attribute gains in BLEU to architectural properties.

Recurrent neural networks (RNNs) (Elman, 1990) can easily deal with variable-length input sentences and are thus a natural choice for the encoder and decoder of NMT systems. Modern variants of RNNs, such as GRUs (Cho et al., 2014) and LSTMs (Hochreiter and Schmidhuber, 1997), address the difficulty of training recurrent networks with long-range dependencies. Gehring et al. (2017) introduce a neural architecture where both the encoder and decoder are based on CNNs, and report better BLEU scores than RNN-based NMT models. Moreover, the computation over all tokens can be fully parallelized during training, which increases efficiency. Vaswani et al. (2017) propose Transformer models, which are built entirely with attention layers, without convolution or recurrence. They report new state-of-the-art BLEU scores for EN→DE and EN→FR. Yet, the BLEU metric is quite coarse-grained, and offers no insight as to which aspects of translation are improved by different architectures.

To explain the observed improvements in BLEU, previous work has drawn on theoretical arguments. Both Gehring et al. (2017) and Vaswani et al. (2017) argue that the length of the paths in neural networks between co-dependent elements affects the ability to learn these dependencies: the shorter the path, the easier the model learns such dependencies. The papers argue that Transformers and CNNs are better suited than RNNs to capture long-range dependencies.

However, this claim is based on a theoretical argument and has not been empirically tested. We argue that other abilities of non-recurrent networks could be responsible for their strong performance. Specifically, we hypothesize that the improvements in BLEU are due to CNNs and Transformers being strong semantic feature extractors.

∗ Work carried out during a visit to the machine translation group at the University of Edinburgh.
In this paper, we evaluate all three popular NMT architectures: models based on RNNs (referred to as RNNS2S in the remainder of the paper), models based on CNNs (referred to as ConvS2S), and self-attentional models (referred to as Transformers). Motivated by the aforementioned theoretical claims regarding path length and semantic feature extraction, we evaluate their performance on a subject-verb agreement task (which requires modeling long-range dependencies) and a word sense disambiguation (WSD) task (which requires extracting semantic features). Both tasks build on test sets of contrastive translation pairs, Lingeval97 (Sennrich, 2017) and ContraWSD (Rios et al., 2017).

The main contributions of this paper can be summarized as follows:

• We test the theoretical claim that architectures with shorter paths through networks are better at capturing long-range dependencies. Our experimental results on modeling subject-verb agreement over long distances do not show any evidence that Transformers or CNNs are superior to RNNs in this regard.

• We empirically show that the number of attention heads in Transformers impacts their ability to capture long-distance dependencies. Specifically, many-headed multi-head attention is essential for modeling long-distance phenomena with only self-attention.

• We empirically show that Transformers excel at WSD, indicating that they are strong semantic feature extractors.

2 Related work

Yin et al. (2017) are the first to compare CNNs, LSTMs and GRUs on several NLP tasks. They find that CNNs are better at tasks related to semantics, while RNNs are better at syntax-related tasks, especially for longer sentences.

Based on the work of Linzen et al. (2016), Bernardy and Lappin (2017) find that RNNs perform better than CNNs on a subject-verb agreement task, which is a good proxy for how well long-range dependencies are captured. Tran et al. (2018) find that a Transformer language model performs worse than an RNN language model on a subject-verb agreement task. They, too, note that this is especially true as the distance between subject and verb grows, even if the RNNs resulted in a higher perplexity on the validation set. This result of Tran et al. (2018) is clearly in contrast to the general finding that Transformers are better than RNNs for NMT tasks.

Bai et al. (2018) evaluate CNNs and LSTMs on several sequence modeling tasks. They conclude that CNNs are better than RNNs for sequence modeling. However, their CNN models perform much worse than the state-of-the-art LSTM models on some sequence modeling tasks, as they themselves state in the appendix.

Tang et al. (2018) evaluate different RNN architectures and Transformer models on the task of historical spelling normalization, which translates a historical spelling into its modern form. They find that Transformer models surpass RNN models only in high-resource conditions.

In contrast to previous studies, we focus on the machine translation task, where architecture comparisons so far are mostly based on BLEU.

3 Background

3.1 NMT Architectures

We evaluate three different NMT architectures: RNN-based models, CNN-based models, and Transformer-based models. All of them have a bipartite structure in the sense that they consist of an encoder and a decoder. The encoder and the decoder interact via a soft-attention mechanism (Bahdanau et al., 2015; Luong et al., 2015), with one or multiple attention layers.

In the following sections, h_i^l is the hidden state at step i of layer l, h_{i-1}^l represents the hidden state at the previous step of layer l, h_i^{l-1} denotes the hidden state at step i of layer l-1, E_{x_i} represents the embedding of x_i, and e_{pos,i} denotes the positional embedding at position i.

3.1.1 RNN-based NMT

RNNs are stateful networks that change as new inputs are fed to them, and each state has a direct connection only to the previous state. Thus, the path length between any two tokens with a distance of n in RNNs is exactly n. Figure 1 (a) shows an illustration of RNNs.
Figure 1: Architectures of different neural networks in NMT: (a) RNN, (b) CNN, (c) self-attention.

h_i^l = h_i^{l-1} + f_rnn(h_i^{l-1}, h_{i-1}^l)    (1)

In deep architectures, two adjacent layers are commonly connected with residual connections. In the l-th encoder layer, h_i^l is generated by Equation 1, where f_rnn is the RNN (GRU or LSTM) function. In the first layer, h_i^0 = f_rnn(E_{x_i}, h_{i-1}^0).

In addition to the connection between the encoder and decoder via attention, the initial state of the decoder is usually initialized with the average of the hidden states or the last hidden state of the encoder.
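To make Equation 1 concrete, the following is a minimal sketch of one residually connected encoder layer (assuming PyTorch; the class name and shapes are our own illustration, not the Sockeye implementation used in the experiments):

```python
import torch
import torch.nn as nn

class ResidualRNNLayer(nn.Module):
    """Equation 1: h_i^l = h_i^{l-1} + f_rnn(h_i^{l-1}, h_{i-1}^l)."""

    def __init__(self, hidden_size: int = 512):
        super().__init__()
        # f_rnn: an LSTM here; a GRU would work the same way
        self.rnn = nn.LSTM(hidden_size, hidden_size, batch_first=True)

    def forward(self, h_prev_layer: torch.Tensor) -> torch.Tensor:
        # h_prev_layer: (batch, seq_len, hidden) = states of layer l-1
        out, _ = self.rnn(h_prev_layer)   # recurrence over positions i
        return h_prev_layer + out         # residual connection

layer = ResidualRNNLayer(512)
print(layer(torch.randn(2, 7, 512)).shape)  # torch.Size([2, 7, 512])
```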
3.1.2 CNN-based NMT

CNNs are hierarchical networks, in that convolution layers capture local correlations. The local context size depends on the size of the kernel and the number of layers. In order to keep the output the same length as the input, CNN models add padding symbols to input sequences. Given an L-layer CNN with a kernel size k, the largest context size is L(k-1). For any two tokens in a local context with a distance of n, the path between them is only ⌈n/(k-1)⌉.

As Figure 1 (b) shows, a 2-layer CNN with kernel size 3 "sees" an effective local context of 5 tokens. The path between the first token and the fifth token is only 2 convolutions.¹ Since CNNs do not have a means to infer the position of elements in a sequence, positional embeddings are introduced.

h_i^l = h_i^{l-1} + f_cnn(W^l [h_{i-⌊k/2⌋}^{l-1}; ...; h_{i+⌊k/2⌋}^{l-1}] + b^l)    (2)

The hidden state h_i^l shown in Equation 2 is related to the hidden states in the same convolution window and the hidden state h_i^{l-1} from the previous layer. k denotes the kernel size of the CNN and f_cnn is a non-linearity. ConvS2S chooses Gated Linear Units (GLU), which can be viewed as a gated variation of ReLUs. W^l are the convolutional filters. In the input layer, h_i^0 = E_{x_i} + e_{pos,i}.

¹ Note that the decoder employs masking to avoid conditioning the model on future information, which reduces the effective context size to L(k-1)/2.
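These context-size and path-length formulas are easy to sanity-check in a few lines of Python (a small sketch; the function names are ours):

```python
import math

def encoder_context(num_layers: int, kernel_size: int) -> int:
    """Largest local context of an L-layer CNN with kernel size k: L * (k - 1)."""
    return num_layers * (kernel_size - 1)

def decoder_context(num_layers: int, kernel_size: int) -> int:
    """The masked decoder only sees past tokens, halving the context: L * (k - 1) / 2."""
    return num_layers * (kernel_size - 1) // 2

def path_length(distance: int, kernel_size: int) -> int:
    """Convolutions needed to connect two tokens n positions apart: ceil(n / (k - 1))."""
    return math.ceil(distance / (kernel_size - 1))

print(encoder_context(2, 3))  # 4, i.e. the 5-token window of Figure 1 (b)
print(path_length(4, 3))      # 2 convolutions between the first and the fifth token
print(decoder_context(8, 7))  # 24, the largest Ctx value in Table 3
```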

3.1.3 Transformer-based NMT

Transformers rely heavily on self-attention networks. Each token is connected to any other token in the same sentence directly via self-attention. Moreover, Transformers feature attention networks with multiple attention heads. Multi-head attention is more fine-grained than conventional 1-head attention mechanisms. Figure 1 (c) illustrates that any two tokens are connected directly: the path length between the first and the fifth token is 1. Similar to CNNs, positional information is also preserved in positional embeddings.

The hidden state in the Transformer encoder is calculated from all hidden states of the previous layer. The hidden state h_i^l in a self-attention network is computed as in Equation 3.

h_i^l = h_i^{l-1} + f(self-attention(h_i^{l-1}))    (3)

where f represents a feedforward network with ReLU as the activation function and layer normalization. In the input layer, h_i^0 = E_{x_i} + e_{pos,i}. The decoder additionally has a multi-head attention over the encoder hidden states.
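A minimal sketch of the encoder sub-layer of Equation 3 (PyTorch assumed; note that production Transformer implementations arrange the residual connections and layer normalization slightly differently, with separate residuals around the attention and feedforward blocks):

```python
import torch
import torch.nn as nn

class SelfAttentionLayer(nn.Module):
    """Equation 3: residual self-attention followed by a feedforward block f."""

    def __init__(self, hidden_size: int = 512, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.ff = nn.Sequential(                   # f: feedforward with ReLU ...
            nn.Linear(hidden_size, 4 * hidden_size),
            nn.ReLU(),
            nn.Linear(4 * hidden_size, hidden_size),
        )
        self.norm = nn.LayerNorm(hidden_size)      # ... and layer normalization

    def forward(self, h_prev_layer: torch.Tensor) -> torch.Tensor:
        # every position attends to every other position: path length 1
        attended, _ = self.attn(h_prev_layer, h_prev_layer, h_prev_layer)
        return h_prev_layer + self.norm(self.ff(attended))

layer = SelfAttentionLayer()
print(layer(torch.randn(2, 7, 512)).shape)  # torch.Size([2, 7, 512])
```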
3.2 Contrastive Evaluation of Machine Translation

Since we evaluate different NMT architectures explicitly on subject-verb agreement and WSD (both happen implicitly during machine translation), BLEU as a measure of overall translation quality is not helpful. In order to conduct these targeted evaluations, we use contrastive test sets.

Sets of contrastive translations can be used to analyze specific types of errors. Human reference translations are paired with one or more contrastive variants, where a specific type of error is introduced automatically.

The evaluation procedure then exploits the fact that NMT models are conditional language models. By virtue of this, given any source sentence S and target sentence T, any NMT model can assign to them a probability P(T|S). If a model assigns a higher score to the correct target sentence than to a contrastive variant that contains an error, we consider it a correct decision. The accuracy of a model on such a test set is simply the percentage of cases where the correct target sentence is scored higher than all contrastive variants.

Contrastive evaluation tests the sensitivity of NMT models to specific translation errors. The contrastive examples are designed to capture specific translation errors rather than to evaluate the global quality of NMT models. Although they do not replace metrics such as BLEU, they give further insights into the performance of models on specific linguistic phenomena.
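The evaluation therefore reduces to a simple scoring loop. A sketch, assuming a hypothetical model_score(source, target) function that returns log P(T|S) (in the experiments this role is played by the scoring interface added to Sockeye, see Section 4.1):

```python
def contrastive_accuracy(model_score, test_set):
    """test_set: iterable of (source, reference, list_of_contrastive_variants)."""
    correct = 0
    total = 0
    for source, reference, variants in test_set:
        ref_score = model_score(source, reference)  # log P(T|S) of the human reference
        # a decision is correct only if the reference outscores every contrastive variant
        if all(ref_score > model_score(source, v) for v in variants):
            correct += 1
        total += 1
    return correct / total
```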
3.2.1 Lingeval97

Lingeval97 has over 97,000 English→German contrastive translation pairs featuring different linguistic phenomena, including subject-verb agreement, noun phrase agreement, separable verb-particle constructions, transliterations and polarity. In this paper, we are interested in evaluating the performance on long-range dependencies. Thus, we focus on the subject-verb agreement category, which consists of 35,105 instances.

In German, verbs must agree with their subjects in both grammatical number and person. Therefore, in a contrastive translation, the grammatical number of a verb is swapped. Table 1 gives an example.

English:   [...] plan will be approved
German:    [...] Plan verabschiedet wird
Contrast:  [...] Plan verabschiedet werden

Table 1: An example of a contrastive pair in the subject-verb agreement category.

3.2.2 ContraWSD

In ContraWSD, given an ambiguous word in the source sentence, the correct translation is replaced by another meaning of the ambiguous word which is incorrect. For example, in a case where the English word line is the correct translation of the German source word Schlange, ContraWSD replaces line with other translations of Schlange, such as snake or serpent, to generate contrastive translations.

For German→English, ContraWSD contains 84 different German word senses. It has 7,200 German→English lexical ambiguities, and each lexical ambiguity instance has 3.5 contrastive translations on average. For German→French, it consists of 71 different German word senses. There are 6,700 German→French lexical ambiguities, with an average of 2.2 contrastive translations per lexical ambiguity instance. All the ambiguous words are nouns, so that disambiguation is not possible simply based on syntactic context.
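To illustrate how such contrastive variants arise, here is a toy sketch (the sense inventory below is made up from the Schlange example above; it is not read from the actual ContraWSD data files):

```python
# Possible English translations (one per sense) of an ambiguous German noun.
SENSES = {"Schlange": ["line", "snake", "serpent"]}

def contrastive_variants(reference: str, source_word: str, correct_translation: str):
    """Replace the correct translation with every other sense to create incorrect variants."""
    return [
        reference.replace(correct_translation, other)
        for other in SENSES[source_word]
        if other != correct_translation
    ]

print(contrastive_variants("people waited in a long line", "Schlange", "line"))
# ['people waited in a long snake', 'people waited in a long serpent']
```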
4 Subject-verb Agreement

The subject-verb agreement task is the most popular choice for evaluating the ability to capture long-range dependencies and has been used in many studies (Linzen et al., 2016; Bernardy and Lappin, 2017; Sennrich, 2017; Tran et al., 2018). Thus, we also use this task to evaluate different NMT architectures on long-range dependencies.

4.1 Experimental Settings

Different architectures are hard to compare fairly because many factors affect performance. We aim to create a level playing field for the comparison by training with the same toolkit, Sockeye (Hieber et al., 2017), which is based on MXNet (Chen et al., 2015). In addition, different hyperparameters and training techniques (such as label smoothing or layer normalization) have been found to affect performance (Chen et al., 2018). We apply the same hyperparameters and techniques to all architectures, except for the parameters specific to each architecture. Since the best hyperparameters for different architectures may be diverse, we verify our hyperparameter choice by comparing our results to those published previously. Our models achieve similar performance to that reported by Hieber et al. (2017) with the best available settings. In addition, we extend Sockeye with an interface that enables scoring of existing translations, which is required for contrastive evaluation.

All the models are trained with 2 GPUs. During training, each mini-batch contains 4096 tokens. A model checkpoint is saved every 4,000 updates. We use Adam (Kingma and Ba, 2015) as the optimizer. The initial learning rate is set to 0.0002. If the performance on the validation set has not improved for 8 checkpoints, the learning rate is multiplied by 0.7. We set the early stopping patience to 32 checkpoints. All the neural networks have 8 layers. For RNNS2S, the encoder has 1 bi-directional LSTM and 6 stacked uni-directional LSTMs, and the decoder is a stack of 8 uni-directional LSTMs. The size of embeddings and hidden states is 512. We apply layer normalization and label smoothing (0.1) in all models. We tie the source and target embeddings. The dropout rate of embeddings and Transformer blocks is set to 0.1. The dropout rate of RNNs and CNNs is 0.2. The kernel size of CNNs is 3. Transformers have an 8-head attention mechanism.
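The learning-rate schedule and stopping criterion amount to a plateau-based rule; one possible reading of it, as a sketch (our own helper, not Sockeye code):

```python
def train(validate, max_checkpoints=1000):
    """validate(lr) trains for one checkpoint (4,000 updates) and returns validation perplexity."""
    lr = 0.0002                    # initial Adam learning rate
    best = float("inf")
    stalled = 0                    # checkpoints without improvement
    for _ in range(max_checkpoints):
        ppl = validate(lr)
        if ppl < best:
            best, stalled = ppl, 0
        else:
            stalled += 1
            if stalled % 8 == 0:   # no improvement for 8 checkpoints
                lr *= 0.7
            if stalled >= 32:      # early-stopping patience
                break
    return best
```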
To test the robustness of our findings, we also test a different style of RNN architecture, from a different toolkit. We evaluate bi-deep transitional RNNs (Miceli Barone et al., 2017), which are state-of-the-art RNNs in machine translation. We use the bi-deep RNN-based model (RNN-bideep) implemented in Marian (Junczys-Dowmunt et al., 2018). Different from the previous settings, we use the Adam optimizer with β1 = 0.9, β2 = 0.98, ε = 10^-9. The initial learning rate is 0.0003. We tie target embeddings and output embeddings. Both the encoder and decoder have 4 layers of LSTM units; only the encoder layers are bi-directional. LSTM units consist of several cells (deep transition): 4 in the first layer of the decoder, 2 cells everywhere else.

We use training data from the WMT17 shared task.² We use newstest2013 as the validation set, and newstest2014 and newstest2017 as the test sets. All BLEU scores are computed with SacreBLEU (Post, 2018). There are about 5.9 million sentence pairs in the training set after preprocessing with Moses scripts. We learn a joint BPE model with 32,000 subword units (Sennrich et al., 2016). We employ the model that has the best perplexity on the validation set for the evaluation.

² http://www.statmt.org/wmt17/translation-task.html

4.2 Overall Results

Table 2 reports the BLEU scores on newstest2014 and newstest2017, the perplexity on the validation set, and the accuracy on long-range dependencies.³ Transformer achieves the highest accuracy on this task and the highest BLEU scores on both newstest2014 and newstest2017. Compared to RNNS2S, ConvS2S has slightly better results regarding BLEU scores, but a much lower accuracy on long-range dependencies. The RNN-bideep model achieves distinctly better BLEU scores and a higher accuracy on long-range dependencies. However, it still cannot outperform Transformers on any of the tasks.

³ We report average accuracy on instances where the distance between subject and verb is longer than 10 words.

Model         2014   2017   PPL   Acc(%)
RNNS2S        23.3   25.1   6.1   95.1
ConvS2S       23.9   25.2   7.0   84.9
Transformer   26.7   27.5   4.5   97.1
RNN-bideep    24.7   26.1   5.7   96.3

Table 2: The results of different NMT models, including the BLEU scores on newstest2014 and newstest2017, the perplexity on the validation set, and the accuracy of long-range dependencies.

Figure 2 shows the performance of different architectures on the subject-verb agreement task. It is evident that Transformer, RNNS2S, and RNN-bideep perform much better than ConvS2S on long-range dependencies. However, Transformer, RNNS2S, and RNN-bideep are all robust over long distances. Transformer outperforms RNN-bideep for distances 11-12, but RNN-bideep performs equally or better for distance 13 or higher. Thus, we cannot conclude that Transformer models are particularly stronger than RNN models for long distances, despite achieving higher average accuracy on distances above 10.

Figure 2: Accuracy of different NMT models on the subject-verb agreement task (accuracy plotted against subject-verb distance, from 1 to >15).

4.2.1 CNNs

Theoretically, the performance of CNNs will drop when the distance between the subject and the verb exceeds the local context size. However, ConvS2S is also clearly worse than RNNS2S for subject-verb agreement within the local context size.

In order to explore how the ability of ConvS2S to capture long-range dependencies depends on the local context size, we train additional systems,
varying the number of layers and the kernel size. Table 3 shows the performance of different ConvS2S models. Figure 3 displays the performance of two 8-layer CNNs with kernel sizes 3 and 7, a 6-layer CNN with kernel size 3, and RNNS2S. The results indicate that the accuracy increases when the local context size becomes larger, but the BLEU score does not. Moreover, ConvS2S is still not as good as RNNS2S for subject-verb agreement.

Layer   K   Ctx   2014   2017   Acc(%)
4       3   4     22.9   24.2   81.1
6       3   6     23.6   25.0   82.5
8       3   8     23.9   25.2   84.9
8       5   16    23.5   24.7   89.7
8       7   24    23.3   24.6   91.3

Table 3: The performance of ConvS2S with different settings. K means the kernel size. The Ctx column is the theoretical largest local context size in the masked decoder.

Figure 3: Results of ConvS2S models and the RNNS2S model at different distances (accuracy plotted against subject-verb distance).

Regarding the explanation for the poor performance of ConvS2S, we identify the limited context size as a major problem. One assumption to explain the remaining difference is that the scale invariance of CNNs is relatively poor (Xu et al., 2014). Scale invariance is important in NLP, where the distance between arguments is flexible, and current recurrent or attentional architectures are better suited to handle this variance.

Our empirical results do not confirm the theoretical arguments in Gehring et al. (2017) that CNNs can capture long-range dependencies better with a shorter path. The BLEU score does not correlate well with the targeted evaluation of long-range distance interactions. This is due to the locality of BLEU, which only measures on the level of n-grams, but it may also indicate that there are other trade-offs between the modeling of different phenomena depending on hyperparameters. If we aim to get better performance on long-range dependencies, we can take this into account when optimizing hyperparameters.

4.2.2 RNNs vs. Transformer

Even though Transformer achieves much better BLEU scores than RNNS2S and RNN-bideep, the accuracies of these architectures on long-range dependencies are close to each other in Figure 2. Our experimental result contrasts with the result of Tran et al. (2018). They find that Transformers perform worse than LSTMs on the subject-verb agreement task, especially when the distance between the subject and the verb becomes longer. We perform several experiments to analyze this discrepancy with Tran et al. (2018).

A first hypothesis is that this is caused by the amount of training data, since we used much larger datasets than Tran et al. (2018). We retrain all the models with a small amount of training data similar to the amount used by Tran et al. (2018), about 135K sentence pairs. The other training settings are the same as in Section 4.1. We do not see the expected degradation of Transformer-s compared to RNNS2S-s (see Figure 4). In Table 4, the performance of RNNS2S-s and Transformer-s is similar, including the BLEU scores on newstest2014 and newstest2017, the perplexity on the validation set, and the accuracy on long-range dependencies.

Figure 4: Results of a Transformer and an RNNS2S model trained on a small dataset (accuracy plotted against subject-verb distance).

A second hypothesis is that the experimental settings lead to the different results. In order to investigate this, we do not only use a small training set,
but also replicate the experimental settings of Tran et al. (2018). The main changes are: neural network layers (8→4); embedding size (512→128); multi-head size (8→2); dropout rate (0.1→0.2); checkpoint save frequency (4,000→1,000); and initial learning rate (0.0002→0.001).

Model         2014   2017   PPL    Acc(%)
RNNS2S-s      7.3    7.8    47.8   77.3
Trans-s       7.2    8.0    44.6   74.6
RNNS2S-re     9.2    10.5   39.2   77.7
Trans-re-h2   9.6    10.7   36.9   71.9
Trans-re-h4   9.5    11.9   35.8   73.8
Trans-re-h8   9.4    10.4   36.0   75.3

Table 4: The results of different models with small training data and replicate settings. Trans is short for Transformer. Models with the suffix "-s" are trained on the small data set. Models with the suffix "-re" are trained with the replicate settings. "h2, h4, h8" indicates the number of attention heads of the Transformer models.

Figure 5: Results of the models with replicate settings, varying the number of attention heads for the Transformer models (accuracy plotted against subject-verb distance).

In the end, we get a result that is similar to Tran et al. (2018). In Figure 5, Transformer-re-h2 performs clearly worse than RNNS2S-re on long-range dependencies. By increasing the number of heads in multi-head attention, subject-verb accuracy over long distances can be improved substantially, even though it remains below that of RNNS2S-re. Also, the effect on BLEU is small. Our results suggest that the importance of multi-head attention with a large number of heads is greater than BLEU would suggest, especially for the modeling of long-distance phenomena, since multi-head attention provides a way for the model to attend to both local and distant context, whereas distant context may be overshadowed by local context in an attention mechanism with a single head or few heads.
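The intuition is that each head attends over the sequence independently on its own slice of the hidden state, so more heads mean more independent attention distributions, and one head can track a distant subject while another focuses on local context. A toy sketch of this head-splitting (learned projection matrices omitted, purely illustrative):

```python
import torch

def multi_head_self_attention(h: torch.Tensor, num_heads: int) -> torch.Tensor:
    """h: (seq_len, hidden). Scaled dot-product self-attention, split into heads."""
    seq_len, hidden = h.shape
    d = hidden // num_heads
    heads = h.view(seq_len, num_heads, d).transpose(0, 1)    # (heads, seq, d)
    scores = heads @ heads.transpose(1, 2) / d ** 0.5        # (heads, seq, seq)
    weights = scores.softmax(dim=-1)                         # one distribution per head
    return (weights @ heads).transpose(0, 1).reshape(seq_len, hidden)

print(multi_head_self_attention(torch.randn(16, 512), num_heads=8).shape)
# torch.Size([16, 512])
```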
Although our study is not a replication of Tran et al. (2018), who work on a different task and a different test set, our results do suggest an alternative interpretation of their findings, namely that the poor performance of the Transformer in their experiments is due to hyperparameter choice. Rather than concluding that RNNs are superior to Transformers for the modeling of long-range dependency phenomena, we find that the number of heads in multi-head attention affects the ability of Transformers to model long-range dependencies in subject-verb agreement.

5 WSD

Our experimental results on the subject-verb agreement task demonstrate that CNNs and Transformers are not better at capturing long-range dependencies compared to RNNs, even though the paths in CNNs and Transformers are shorter. This finding is not in accord with the theoretical argument in both Gehring et al. (2017) and Vaswani et al. (2017). However, these architectures perform well empirically according to BLEU. Thus, we further evaluate these architectures on WSD, to test our hypothesis that non-recurrent architectures are better at extracting semantic features.

5.1 Experimental settings

We evaluate all architectures on ContraWSD on both DE→EN and DE→FR. We reuse the parameter settings from Section 4.1, except that the initial learning rate of ConvS2S is reduced from 0.0003 to 0.0002 in DE→EN, and the checkpoint saving frequency is changed from 4,000 to 1,000 in DE→FR because of the training data size.

For DE→EN, the training set, validation set, and test set are the same as for the opposite direction, EN→DE. For DE→FR, we use around 2.1 million sentence pairs from Europarl (v7) (Tiedemann, 2012)⁴ and News Commentary (v11) cleaned by Rios et al. (2017)⁵ as our training set. We use newstest2013 as the evaluation set, and newstest2012 as the test set. All the data is preprocessed with Moses scripts.

⁴ http://opus.nlpl.eu/Europarl.php
⁵ http://data.statmt.org/ContraWSD/
In addition, we also compare to the best result reported for DE→EN, achieved by uedin-wmt17 (Sennrich et al., 2017), which is an ensemble of 4 different models reranked with right-to-left models.⁶ uedin-wmt17 is based on the bi-deep RNNs (Miceli Barone et al., 2017) that we mentioned before. To the original 5.9 million sentence pairs in the training set, they add 10 million synthetic pairs generated with back-translation.

⁶ https://github.com/a-rios/ContraWSD/tree/master/baselines

              DE→EN                        DE→FR
Model         PPL    2014   2017   Acc(%)  PPL    2012   Acc(%)
RNNS2S        5.7    29.1   30.1   84.0    7.06   16.4   72.2
ConvS2S       6.3    29.1   30.4   82.3    7.93   16.8   72.7
Transformer   4.3    32.7   33.7   90.3    4.9    18.7   76.7
uedin-wmt17   –      –      35.1   87.9    –      –      –
TransRNN      5.2    30.5   31.9   86.1    6.3    17.6   74.2

Table 5: The results of different architectures on newstest sets and ContraWSD. PPL is the perplexity on the validation set. Acc means accuracy on the test set.

5.2 Overall Results

Table 5 gives the performance of all the architectures, including the perplexity on validation sets, the BLEU scores on newstest, and the accuracy on ContraWSD. Transformers distinctly outperform RNNS2S and ConvS2S models on DE→EN and DE→FR. Moreover, the Transformer model on DE→EN also achieves higher accuracy than uedin-wmt17, although its BLEU score on newstest2017 is 1.4 lower than that of uedin-wmt17. We attribute this discrepancy between BLEU and WSD performance to the use of synthetic news training data in uedin-wmt17, which causes a large boost in BLEU due to better domain adaptation to newstest, but which is less helpful for ContraWSD, whose test set is drawn from a variety of domains.

For DE→EN, RNNS2S and ConvS2S have the same BLEU score on newstest2014, and ConvS2S has a higher score on newstest2017. However, the WSD accuracy of ConvS2S is 1.7% lower than that of RNNS2S. For DE→FR, ConvS2S achieves slightly better results than RNNS2S on both BLEU scores and accuracy.

The Transformer model strongly outperforms the other architectures on this WSD task, with a gap of 4–8 percentage points. This affirms our hypothesis that Transformers are strong semantic feature extractors.

5.3 Hybrid Encoder-Decoder Model

In recent work, Chen et al. (2018) find that hybrid architectures with a Transformer encoder and an RNN decoder can outperform a pure Transformer model. They speculate that the Transformer encoder is better at encoding or extracting features than the RNN encoder, whereas the RNN is better at conditional language modeling.

For WSD, it is unclear whether the most important component is the encoder, the decoder, or both. Following the hypothesis that Transformer encoders excel as semantic feature extractors, we train a hybrid encoder-decoder model (TransRNN) with a Transformer encoder and an RNN decoder.
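A hedged sketch of how such a hybrid can be wired (PyTorch assumed; positional encodings, masking and the exact attention variant are omitted, and this is not the implementation used in our experiments):

```python
import torch
import torch.nn as nn

class TransRNN(nn.Module):
    """Hybrid NMT sketch: Transformer encoder, RNN decoder with attention over the encoder."""

    def __init__(self, vocab_size: int, hidden: int = 512, heads: int = 8, layers: int = 8):
        super().__init__()
        self.src_emb = nn.Embedding(vocab_size, hidden)
        self.tgt_emb = nn.Embedding(vocab_size, hidden)
        enc_layer = nn.TransformerEncoderLayer(hidden, heads, dim_feedforward=4 * hidden,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)   # self-attentional encoder
        self.decoder = nn.LSTM(hidden, hidden, num_layers=layers, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        memory = self.encoder(self.src_emb(src))             # Transformer encoder states
        dec_states, _ = self.decoder(self.tgt_emb(tgt))      # recurrent decoder states
        context, _ = self.cross_attn(dec_states, memory, memory)  # attend to the source
        return self.out(dec_states + context)                # logits over the target vocabulary

model = TransRNN(vocab_size=32000)
logits = model(torch.randint(0, 32000, (2, 9)), torch.randint(0, 32000, (2, 7)))
print(logits.shape)  # torch.Size([2, 7, 32000])
```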
The results (in Table 5) show that TransRNN performs better than RNNS2S, but worse than the pure Transformer, both in terms of BLEU and WSD accuracy. This indicates that WSD is not only done in the encoder, but that the decoder also affects WSD performance. We note that Chen et al. (2018) and Domhan (2018) introduce the techniques of Transformers into RNN-based models, with reportedly higher BLEU. Thus, it would be interesting to see if the same result holds true with their architectures.

6 Conclusion

In this paper, we evaluate three popular NMT architectures, RNNS2S, ConvS2S, and Transformers, on subject-verb agreement and WSD by scoring contrastive translation pairs.

We test the theoretical claims that shorter path lengths make models better at capturing long-range dependencies. Our experimental results show that:

• There is no evidence that CNNs and Transformers, which have shorter paths through networks, are empirically superior to RNNs in modeling subject-verb agreement over long distances.
• The number of heads in multi-head attention affects the ability of a Transformer to model long-range dependencies in the subject-verb agreement task.

• Transformer models excel at another task, WSD, compared to the CNN and RNN architectures we tested.

Lastly, our findings suggest that assessing the performance of NMT architectures means finding their inherent trade-offs, rather than simply computing their overall BLEU score. A clear understanding of those strengths and weaknesses is important to guide further work. Specifically, given the idiosyncratic limitations of recurrent and self-attentional models, combining them is an exciting line of research. The apparent weakness of CNN architectures on long-distance phenomena is also a problem worth tackling, and we can find inspiration in related work in computer vision (Xu et al., 2014).

Acknowledgments

We thank all the anonymous reviewers and Joakim Nivre, who gave a lot of valuable and insightful comments. We appreciate the grants provided by the Erasmus+ Programme and Anna Maria Lundin's scholarship committee. GT is funded by the Chinese Scholarship Council (grant number 201607110016). MM, AR and RS have received funding from the Swiss National Science Foundation (grant number 105212 169888).

References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, California, USA.

Shaojie Bai, J. Zico Kolter, and Vladlen Koltun. 2018. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271.

Jean-Philippe Bernardy and Shalom Lappin. 2017. Using Deep Neural Networks to Learn Syntactic Agreement. LiLT (Linguistic Issues in Language Technology), 15(2):1–15.

Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Mike Schuster, Noam Shazeer, Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. 2018. The best of both worlds: Combining recent advances in neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 76–86. Association for Computational Linguistics.

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. 2015. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1724–1734, Doha, Qatar. Association for Computational Linguistics.

Tobias Domhan. 2018. How much attention do you need? A granular analysis of neural machine translation architectures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1799–1808. Association for Computational Linguistics.

Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179–211.

Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, pages 1243–1252, Sydney, Australia. PMLR.

Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2017. Sockeye: A Toolkit for Neural Machine Translation. arXiv preprint arXiv:1712.05690.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.

Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, André F. T. Martins, and Alexandra Birch. 2018. Marian: Fast Neural Machine Translation in C++. arXiv preprint arXiv:1804.00344.

Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Continuous Translation Models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700–1709, Seattle, Washington, USA. Association for Computational Linguistics.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, California, USA.

Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies. Transactions of the Association for Computational Linguistics, 4:521–535.

Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attention-based Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics.

Antonio Valerio Miceli Barone, Jindřich Helcl, Rico Sennrich, Barry Haddow, and Alexandra Birch. 2017. Deep architectures for Neural Machine Translation. In Proceedings of the Second Conference on Machine Translation, pages 99–107, Copenhagen, Denmark. Association for Computational Linguistics.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Matt Post. 2018. A call for clarity in reporting BLEU scores. arXiv preprint arXiv:1804.08771.

Annette Rios, Laura Mascarell, and Rico Sennrich. 2017. Improving Word Sense Disambiguation in Neural Machine Translation with Sense Embeddings. In Proceedings of the Second Conference on Machine Translation, pages 11–19, Copenhagen, Denmark. Association for Computational Linguistics.

Rico Sennrich. 2017. How Grammatical is Character-level Neural Machine Translation? Assessing MT Quality with Contrastive Translation Pairs. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 376–382, Valencia, Spain. Association for Computational Linguistics.

Rico Sennrich, Alexandra Birch, Anna Currey, Ulrich Germann, Barry Haddow, Kenneth Heafield, Antonio Valerio Miceli Barone, and Philip Williams. 2017. The University of Edinburgh's Neural MT Systems for WMT17. In Proceedings of the Second Conference on Machine Translation, pages 389–399, Copenhagen, Denmark. Association for Computational Linguistics.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc., Montréal, Canada.

Gongbo Tang, Fabienne Cap, Eva Pettersson, and Joakim Nivre. 2018. An evaluation of neural machine translation models on historical spelling normalization. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1320–1331. Association for Computational Linguistics.

Jörg Tiedemann. 2012. Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 2214–2218, Istanbul, Turkey. European Language Resources Association (ELRA).

Ke Tran, Arianna Bisazza, and Christof Monz. 2018. The Importance of Being Recurrent for Modeling Hierarchical Structure. arXiv preprint arXiv:1803.03585.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems 30, pages 6000–6010. Curran Associates, Inc.

Yichong Xu, Tianjun Xiao, Jiaxing Zhang, Kuiyuan Yang, and Zheng Zhang. 2014. Scale-invariant convolutional neural networks. arXiv preprint arXiv:1411.6369.

Wenpeng Yin, Katharina Kann, Mo Yu, and Hinrich Schütze. 2017. Comparative study of CNN and RNN for natural language processing. arXiv preprint arXiv:1702.01923.
