The paper presents machine translation experiments from English to Czech with a large number of manually annotated discourse connectives. The gold-standard discourse relation annotation improves translation performance by 4–60% for some ambiguous English connectives and helps to find correct syntactic constructions in Czech for less ambiguous connectives.
Automatic scoring confirms the stability of the newly built discourse-aware translation systems. Error analysis and human translation evaluation point to the cases where the annotation was most helpful and where it was less so.