Self-directed learning is generally considered a key competence in higher education. To enable it, assessment practices increasingly embrace assessment for learning rather than assessment of learning, shifting the focus from grades and scores to the provision of rich, narrative, personalized feedback.
Students are expected to collect, interpret, and give meaning to this feedback in order to self-assess their progress and to formulate new, appropriate learning goals and strategies. However, interpreting aggregated, longitudinal narrative feedback has proven to be challenging, cognitively demanding, and time-consuming.
In this article, we therefore explored the applicability of existing, proven text-mining techniques to support feedback interpretation. More specifically, we investigated whether it is possible to automatically generate meaningful information about the prevailing topics and the emotional load of the feedback provided in medical students’ competence-based portfolios (N = 1500), taking into account the competence framework and the students’ various performance levels.
Our findings indicate that the text-mining techniques of topic modeling and sentiment analysis make it feasible to automatically unveil the two principal aspects of narrative feedback, namely the most relevant topics in the feedback and their sentiment. This article therefore takes a valuable first step toward the automatic, online support of students tasked with the meaningful interpretation of complex narrative data in their portfolios as they develop into self-directed, lifelong learners.