Access to high-quality grammatical annotations is important for downstream tasks in NLP as well as for corpus-based research. In this paper, we describe experiments with the Latin BERT word embeddings that were recently made available by Bamman and Burns (2020).
We show that these embeddings produce competitive results in the low-level task of morpho-syntactic tagging. In addition, we describe a graph-based dependency parser that is trained with these embeddings and that clearly outperforms various baselines.