%A Li, Dongxing
%A Luo, Zuying
%D 2021
%T Regression Loss in Transformer-based Supervised Neural Machine Translation
%X The Transformer-based model has achieved human-level performance in supervised neural machine translation (SNMT), much better than models based on recurrent neural networks (RNNs) or convolutional neural networks (CNNs). The original Transformer-based model is trained through maximum likelihood estimation (MLE), which regards the machine translation task as a multi-label classification problem and takes the sum of the cross-entropy losses of all the target tokens as the loss function. However, this model assumes that token generation is partially independent, without accounting for the fact that tokens are components of a sequence. To solve this problem, this paper proposes a semantic regression loss for Transformer training, treating the generated sequence as a whole. Upon finding that the semantic difference is proportional to the candidate-reference distance, the authors considered machine translation as a multi-task problem and took a linear combination of the cross-entropy loss and the semantic regression loss as the overall loss function. The semantic regression loss was shown to significantly enhance SNMT performance, with a slight reduction in convergence speed.
%U http://univagora.ro/jour/index.php/ijccc/article/view/4217
%J INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL
%0 Journal Article
%R 10.15837/ijccc.2021.4.4217
%V 16
%N 4
%@ 1841-9844
%8 2021-04-16