Odia is one of the 30 most widely spoken languages in the world and is spoken primarily in the Indian state of Odisha. Despite this, Odia lacks online content and resources for natural language processing (NLP) research. There is a pressing need for a strong language model for this low-resource language, one that can be used for many downstream NLP tasks.
In this paper, we introduce a language model based on Bidirectional Encoder Representations from Transformers (BERT), pre-trained on 430,000 Odia sentences. We evaluate the model on the well-known Kaggle Odia news classification dataset (classification accuracy: BertOdia 96%, RoBERTaOdia 92%, ULMFiT 91.9%) and perform a comparison study with multilingual BERT models that support Odia.
The model will be released publicly so that researchers can explore other NLP tasks.