Based on a comparison of four European education systems (England, France, Norway, and Poland), this study aims to identify the benefits and risks of using value-added assessment in school evaluation. The value-added systems are compared against a predefined set of criteria.
Data sources include the technical documentation of the value-added models, the web portals used to publish the results, and publications on the development of the models and the use of value-added data. Pupils' results in standardised tests or exams are the preferred input data for value-added computation.
Data availability determines the choice of school level, the domains (subjects) to be assessed, and the choice of contextual variables. All four countries try to avoid direct comparisons between schools: they do not rank schools by value added, and they communicate the statistical uncertainty of the measurement.
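To illustrate what such a computation can look like, the sketch below implements one common residual-based formulation of a contextualised value-added indicator: pupils' exam results are regressed on prior attainment and a contextual variable, and each school's value added is the mean residual of its pupils, reported with a confidence interval to convey statistical uncertainty. The data, variable names, and model specification are illustrative assumptions only and do not reproduce any of the four national models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated pupil-level data (all names and values are hypothetical):
# exam score, prior test score, one contextual indicator, school identifier.
rng = np.random.default_rng(0)
n = 2000
schools = rng.integers(0, 40, size=n)
prior = rng.normal(100, 15, size=n)
context = rng.binomial(1, 0.3, size=n)            # e.g. a disadvantage indicator
school_effect = rng.normal(0, 2, size=40)[schools]
exam = 20 + 0.8 * prior - 3 * context + school_effect + rng.normal(0, 8, size=n)
df = pd.DataFrame({"exam": exam, "prior": prior,
                   "context": context, "school": schools})

# Contextualised regression: predict exam results from prior attainment
# and the contextual variable at pupil level.
model = smf.ols("exam ~ prior + context", data=df).fit()
df["residual"] = model.resid

# School value added = mean residual of the school's pupils, reported with
# a 95% interval so that statistical uncertainty accompanies the estimate.
va = df.groupby("school")["residual"].agg(["mean", "sem", "count"])
va["ci_low"] = va["mean"] - 1.96 * va["sem"]
va["ci_high"] = va["mean"] + 1.96 * va["sem"]
print(va.head())
```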
Value-added information serves as feedback to schools. Value-added indicators are fair measures of schools' contribution to pupils' learning, but their introduction carries risks, such as the omission of important variables due to data unavailability, a lack of capacity to implement and develop the method, and limited utility in the school improvement process.