We present results of automatic evaluation of discourse in machine translation (MT) outputs using the EVALD tool, showing that discourse-level evaluation of translated texts can distinguish between individual MT systems.