Wikipedia is an invaluable source of parallel multilingual data, which is in high demand for many kinds of linguistic inquiry, both theoretical and applied. We introduce a novel end-to-end neural model for large-scale parallel data harvesting from Wikipedia.
Our model is language-independent, robust, and highly scalable. We use our system to collect parallel German-English, French-English, and Persian-English sentences.
Human evaluations show that the model performs strongly at collecting high-quality parallel data. We also propose a statistical framework that extends the results of our human evaluation to other language pairs.
Our model also achieves a state-of-the-art result on the German-English dataset of the BUCC 2017 shared task on parallel sentence extraction from comparable corpora.