Evaluation Metrics and Analysis of First Annotation Round

Summary
This deliverable will report the results of the first annotation round, including error profiles for each language pair and a comparison of those results with an analysis based on post-edited examples. It will also include a database of the annotated data, intended for use in training systems and for further analysis, together with a listing and analysis of measurable factors (semantic and linguistic analyses) identified in the T3.1 annotation that correspond to human quality judgments. Finally, the deliverable will evaluate progress on syntax- and semantics-informed evaluation metrics.