Wikipedia is not only a large encyclopedia but has lately also become a source of linguistic data for various applications. Its individual language versions make it possible to obtain parallel data in multiple languages.
The assignment of Wikipedia articles to categories can be used to filter the language data by domain. In our project, we needed a large amount of parallel data for training machine translation systems in the field of biomedicine.
One of the sources was Wikipedia. To select data from the given domain, we used the results of the DBpedia project, which extracts structured information from Wikipedia articles and makes it available in RDF format.
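As an illustration of this selection step, the following Python sketch queries DBpedia for articles filed under a biomedical category. It is only a minimal example: the public endpoint, the SPARQLWrapper library, the Category:Medicine node, and the one-level category expansion are assumptions for illustration, not the exact query used in our project.

```python
# Minimal sketch: list Wikipedia articles assigned (directly or one category
# level down) to a biomedical DBpedia category. Endpoint, library, and the
# chosen category are illustrative assumptions.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")  # public DBpedia endpoint
sparql.setQuery("""
    PREFIX dct:  <http://purl.org/dc/terms/>
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    PREFIX dbc:  <http://dbpedia.org/resource/Category:>

    SELECT DISTINCT ?article WHERE {
        { ?article dct:subject dbc:Medicine . }
        UNION
        { ?article dct:subject ?sub .
          ?sub skos:broader dbc:Medicine . }
    }
    LIMIT 100
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    # Each binding holds the DBpedia resource URI of one matched article.
    print(binding["article"]["value"])
```

In practice, the category expansion would have to be applied transitively and to each language version separately, which is where the consistency problems discussed below arise.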
In this paper, we describe the data extraction process and the problems we had to deal with, since an open, collaboratively edited project like Wikipedia, to which anyone can contribute, is not very reliable in terms of consistency.