In this paper we present a novel ensemble of neural networks for coreference resolution in Russian texts. Each network in the ensemble is built on bidirectional long short-term memory (BiLSTM) layers, an attention mechanism, and consistent scoring with selection of probable mentions and antecedents.
The applied network topology has already achieved state-of-the-art results on this task for English and is adapted here to Russian. The final coreference markup is obtained by aggregating the output scores of several independently trained blocks of neural network models.
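As a minimal sketch of the score aggregation described above (the function and variable names are illustrative, not the authors' actual implementation), the per-mention antecedent scores of the ensemble members can be averaged before selecting the best-scoring antecedent:

```python
import numpy as np

def aggregate_scores(model_scores):
    """Average antecedent score matrices from independently trained models.

    model_scores: list of (num_mentions x num_candidates) arrays,
    one per ensemble member. Returns the element-wise mean.
    """
    return np.mean(np.stack(model_scores), axis=0)

def predict_antecedents(mean_scores):
    """Pick the highest-scoring candidate antecedent for each mention."""
    return np.argmax(mean_scores, axis=1)

# Toy example with two ensemble members, two mentions, two candidates.
scores_a = np.array([[0.9, 0.1], [0.2, 0.8]])
scores_b = np.array([[0.7, 0.3], [0.4, 0.6]])
mean = aggregate_scores([scores_a, scores_b])
links = predict_antecedents(mean)  # one antecedent index per mention
```

Simple averaging is only one plausible aggregation scheme; voting or weighted combinations are equally possible under this setup.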
The input text is represented by a combination of word vectors from two language models. We study how coreference resolution accuracy depends on the choice of word-embedding models and on two tokenization approaches: gold markup or the UDPipe tool.
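A common way to combine vectors from two embedding models is per-token concatenation; the following sketch assumes a simple dictionary-style lookup interface (the models and fallback behavior are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def combine_embeddings(tokens, model_a, model_b, dim_a, dim_b):
    """Represent each token by concatenating vectors from two models.

    model_a, model_b: dicts mapping token -> vector (assumed interface).
    Tokens missing from a model fall back to a zero vector of that
    model's dimensionality.
    """
    rows = []
    for tok in tokens:
        vec_a = model_a.get(tok, np.zeros(dim_a))
        vec_b = model_b.get(tok, np.zeros(dim_b))
        rows.append(np.concatenate([vec_a, vec_b]))
    return np.stack(rows)  # shape: (len(tokens), dim_a + dim_b)
```

The resulting matrix then serves as the per-token input to the BiLSTM layers.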
Finally, to demonstrate the improvement achieved by our ensemble approach, we present experimental results on both the RuCor and AnCor datasets.