Demo: Tree-to-Sequence Attentional Neural Machine Translation

Thank you for reading our paper and visiting this demonstration page! :D
If you have any comments or questions, please feel free to contact us.

Introduction:

We present a demonstration of our syntactic neural machine translation model, "Tree-to-Sequence Attentional Neural Machine Translation" (tree2seq) [1]. You can try English-to-Japanese translation.
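
The tree2seq encoder follows the binarized phrase structure of the source sentence: leaf (word) states come from a sequential LSTM, phrase states are composed bottom-up with a binary Tree-LSTM, and the decoder attends over the resulting node vectors. For readers who want a concrete picture, here is a minimal sketch of the binary Tree-LSTM composition (not the authors' code); the weight names (Ul, Ur, b), the toy size d, and the random stand-in leaf vectors are illustrative assumptions.

    # Minimal sketch of a binary Tree-LSTM phrase composition (not the
    # authors' code). Weight names (Ul, Ur, b), the toy size d, and the
    # random leaf vectors are illustrative assumptions.
    import numpy as np

    d = 4  # toy hidden size; the trained model uses a much larger one
    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # One (left, right, bias) weight triple per gate: input i, left forget
    # fl, right forget fr, output o, and candidate u.
    W = {g: (rng.normal(size=(d, d)), rng.normal(size=(d, d)), np.zeros(d))
         for g in ("i", "fl", "fr", "o", "u")}

    def gate(g, hl, hr, act):
        Ul, Ur, b = W[g]
        return act(Ul @ hl + Ur @ hr + b)

    def compose(hl, cl, hr, cr):
        # Combine two child (h, c) pairs into a parent (h, c) pair.
        i  = gate("i",  hl, hr, sigmoid)
        fl = gate("fl", hl, hr, sigmoid)  # forgets the left child's cell
        fr = gate("fr", hl, hr, sigmoid)  # forgets the right child's cell
        o  = gate("o",  hl, hr, sigmoid)
        u  = gate("u",  hl, hr, np.tanh)
        c = i * u + fl * cl + fr * cr
        return o * np.tanh(c), c

    # Encode a toy binarized parse ((w0 w1) w2). In the model the leaf
    # states come from a sequential LSTM; random vectors stand in here.
    leaves = [(rng.normal(size=d), rng.normal(size=d)) for _ in range(3)]
    h01, c01 = compose(*leaves[0], *leaves[1])
    h_root, _ = compose(h01, c01, *leaves[2])
    print(h_root)  # root phrase vector; attention sees all node vectors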

The model is trained on the Tanaka Corpus (149K sentence pairs). The vocabulary size is around 12-14K words, and out-of-vocabulary words are mapped to a special unknown token (*UNK*). Translations are generated by beam search with a beam size of 5. The code of tree2seq is available here.
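
Beam-search decoding keeps the five highest-scoring partial translations at each step and expands each of them. The sketch below illustrates the procedure under the demo's beam size of 5; the scoring function next_log_probs and the toy_model stand-in are hypothetical, written in Python for illustration only, and the real demo scores tokens with the trained tree2seq decoder instead.

    # Minimal beam-search sketch with the demo's beam size of 5. The
    # scoring function next_log_probs(prefix) -> {token: log_prob} is a
    # hypothetical stand-in for the trained tree2seq decoder.
    import math

    UNK, EOS = "*UNK*", "</s>"

    def beam_search(next_log_probs, beam_size=5, max_len=20):
        beams = [([], 0.0)]  # (token prefix, cumulative log-probability)
        finished = []
        for _ in range(max_len):
            candidates = []
            for prefix, score in beams:
                for tok, lp in next_log_probs(prefix).items():
                    candidates.append((prefix + [tok], score + lp))
            candidates.sort(key=lambda c: c[1], reverse=True)
            beams = []
            for prefix, score in candidates[:beam_size]:
                # Hypotheses that emit the end-of-sentence token are done.
                (finished if prefix[-1] == EOS else beams).append((prefix, score))
            if not beams:
                break
        finished.extend(beams)  # fall back to unfinished hypotheses
        return max(finished, key=lambda c: c[1])

    # Toy next-token distribution: prefers "a", stops after three tokens.
    def toy_model(prefix):
        if len(prefix) >= 3:
            return {EOS: 0.0}
        return {"a": math.log(0.6), "b": math.log(0.3), UNK: math.log(0.1)}

    print(beam_search(toy_model))  # (['a', 'a', 'a', '</s>'], score)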


[Figure: tree2seq]

How to use:

Reference:

[1] Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. "Tree-to-Sequence Attentional Neural Machine Translation." In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), 2016.