Towards a reliable annotation framework for crisis MT evaluation: Addressing error taxonomies and annotator agreement

Maria Carmen Staiano; Johanna Monti; Francesca Chiusaroli
2025-01-01

Abstract

This paper presents a detailed analysis of the annotation process used in the ITALERT (Italian Emergency Response Text) corpus, specifically designed to evaluate the performance of neural machine translation (NMT) systems and large language models (LLMs) in translating high-stakes messages from Italian to English.
Files in this record:

File: CL2025+Book+Of+Abstracts_24th+June.pdf
Access: open access
Type: post-print document
License: DRM not defined
Size: 5.31 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11574/248746