Although digital humanities researchers use software tools to conduct their research, and often apply these tools to data the software was not developed for, little attention has been paid to investigating tool performance on such data. This is surprising, because appraising the results of digital humanities research requires understanding to what extent the tool output is correct.
To illustrate the importance of the validation of tools, this article presents a case study of validating Arabic root extraction tools. Arabic words are based on root letters; three root letters usually demarcate a semantic field.
Thus, roots can be used for studying semantic fields. For example, researchers can gain insight into the relative importance of the different senses (i.e. seeing, hearing, touching, smelling, and tasting) in Arabic jurisprudence (fiqh) by extracting and counting roots.
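To illustrate the general extract-and-count approach (not the pipeline used in this study), a minimal sketch using NLTK's ISRIStemmer, one of the tools evaluated below, might look as follows; the sample word list and the simple tokenization are illustrative placeholders rather than data from our corpus.

```python
# Minimal sketch: extract candidate roots with NLTK's ISRI stemmer and count them.
# The token list below is a hypothetical placeholder, not data from the study.
from collections import Counter
from nltk.stem.isri import ISRIStemmer

stemmer = ISRIStemmer()

# In practice, tokens would come from a tokenized passage of an Arabic text.
tokens = ["يسمعون", "سمعنا", "رأيت", "يرون", "لمست"]

roots = [stemmer.stem(token) for token in tokens]
root_counts = Counter(roots)

for root, count in root_counts.most_common():
    print(root, count)
```

Counting the extracted roots in this way is what allows frequencies per semantic field to be compared, which is also why errors in the extraction step propagate directly into the counts.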
A problem is that only a few usable tools are available. We take three root extraction tools, Khoja (Khoja and Garside, 1999, Stemming Arabic Text. Lancaster, England: Lancaster University), ISRI (Taghva et al., 2005, Arabic stemming without a root dictionary. In International Conference on Information Technology: Coding and Computing (ITCC'05), Vol. 2. Las Vegas, NV, April 2005, pp. 152-57), and AlKhalil (Boudlal et al., 2010, Alkhalil Morpho Sys1: a morphosyntactic analysis system for Arabic texts. In International Arab Conference on Information Technology. New York, NY: Elsevier Science Inc., 2010, pp. 1-6), and create manually annotated gold standard data consisting of three samples of approximately 1,000 words from important books of Islamic jurisprudence.
We show that Khoja is the best root extraction tool for our data. We also demonstrate that the relative counts of individual roots differ among tools, which leads to different interpretations depending on which tool is chosen.
This means that findings based on automatically extracted roots should always be interpreted with care.