AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages

  • Jiayi Wang ,
  • David Ifeoluwa Adelani ,
  • Sweta Agrawal ,
  • Marek Masiak ,
  • Ricardo Rei ,
  • Eleftheria Briakou ,
  • Marine Carpuat ,
  • Xuanli He ,
  • Sofia Bourhim ,
  • Andiswa Bukula ,
  • Muhidin A. Mohamed ,
  • Temitayo Olatoye ,
  • Hamam Mokayed ,
  • Christine Mwase ,
  • Wangui Kimotho ,
  • Foutse Yuehgoh ,
  • Anuoluwapo Aremu ,
  • Jessica Ojo ,
  • Shamsuddeen Hassan Muhammad ,
  • Salomey Osei ,
  • Abdul-Hakeem Omotayo ,
  • Chiamaka Ijeoma Chukwuneke ,
  • Perez Ogayo ,
  • Oumaima Hourrane ,
  • Salma El Anigri ,
  • Lolwethu Ndolela ,
  • Thabiso Mangwana ,
  • Shafie Abdi Mohamed ,
  • Hassan Ayinde ,
  • Oluwabusayo Olufunke Awoyomi ,
  • Lama Alkhaled ,
  • Sana Sabah al-Azzawi ,
  • Naome A Etori ,
  • Clemencia Siro ,
  • Njoroge Kiragu ,
  • Eric Muchiri ,
  • Wangari Kimotho ,
  • Toadoum Sari Sakayo ,
  • Lyse Naomi Momo Wamba ,
  • Daud Abolade ,
  • Simbiat Ajao ,
  • Tosin Adewumi ,
  • Iyanuoluwa Shode ,
  • Ricky Sambo Macharm ,
  • Ruqayya Nasir Iro ,
  • Saheed Salahudeen Abdullahi ,
  • Stephen Edward Moore ,
  • Bernard Opoku ,
  • Zainab Akinjobi ,
  • Abeeb Afolabi ,
  • Nnaemeka Casmir Obiefuna ,
  • Onyekachi Ogbu ,
  • Sam Brian Ochieng' ,
  • Verrah Akinyi Otiende ,
  • Chinedu Emmanuel Mbonu ,
  • Yao Lu ,
  • Pontus Stenetorp

North American Chapter of the Association for Computational Linguistics

Published at NAACL 2024

Despite recent progress in scaling multilingual machine translation (MT) to several under-resourced African languages, accurately measuring this progress remains challenging. Evaluation is often performed using n-gram matching metrics such as BLEU, which typically show a weak correlation with human judgments. Learned metrics such as COMET correlate better; however, the lack of evaluation data with human ratings for under-resourced languages, the complexity of annotation guidelines like Multidimensional Quality Metrics (MQM), and the limited language coverage of multilingual encoders have hampered their applicability to African languages. In this paper, we address these challenges by creating high-quality human evaluation data with simplified MQM guidelines for error detection and direct assessment (DA) scoring for 13 typologically diverse African languages. Furthermore, we develop AfriCOMET, COMET evaluation metrics for African languages, by leveraging DA data from well-resourced languages and an African-centric multilingual encoder (AfroXLM-R), achieving state-of-the-art performance with respect to Spearman-rank correlation with human judgments (+0.441).
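For readers unfamiliar with how metric quality is reported here, the minimal Python sketch below shows how segment-level Spearman-rank correlation between a metric's outputs and human DA judgments is computed. The scores are made up for illustration; only the use of scipy's spearmanr reflects standard practice, and nothing in the snippet is taken from the paper's own codebase.

```python
from scipy.stats import spearmanr

# Hypothetical segment-level scores for one language pair:
# outputs of an MT evaluation metric and the corresponding
# human direct assessment (DA) ratings (illustrative values).
metric_scores = [0.72, 0.55, 0.81, 0.43, 0.66]
human_da_scores = [78, 60, 85, 40, 70]

# Spearman rank correlation measures how well the metric's
# ranking of segments agrees with the human ranking;
# +1 means identical rankings, 0 means no rank agreement.
rho, p_value = spearmanr(metric_scores, human_da_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```

Rank correlation is used rather than raw score agreement because metrics and humans score on different scales; what matters is whether better translations are ranked higher.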