Understanding the effects of language-specific class imbalance in multilingual fine-tuning

Vincent Jung, Lonneke van der Plas


Abstract
We study the effect of one type of imbalance often present in real-life multilingual classification datasets: an uneven distribution of labels across languages. We show evidence that fine-tuning a transformer-based Large Language Model (LLM) on a dataset with this imbalance leads to worse performance, a more pronounced separation of languages in the latent space, and the promotion of uninformative features. We modify the traditional class weighting approach to imbalance by calculating class weights separately for each language and show that this helps mitigate those detrimental effects. These results raise awareness of the negative effects of language-specific class imbalance in multilingual fine-tuning and of the way in which the model learns to rely on the separation of languages to perform the task.
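The sketch below illustrates the per-language class-weighting idea described in the abstract, assuming a standard PyTorch cross-entropy fine-tuning setup. The function names and the inverse-frequency heuristic are illustrative, not the authors' released implementation.

```python
# Minimal sketch: class weights computed separately per language (illustrative,
# not the paper's code). Assumes integer class labels and PyTorch fine-tuning.
from collections import Counter

import torch
import torch.nn.functional as F


def per_language_class_weights(languages, labels, num_labels):
    """Compute inverse-frequency class weights separately for each language.

    languages: list of language codes, one per training example.
    labels:    list of integer class labels, one per training example.
    Returns a dict mapping language -> weight tensor of shape (num_labels,).
    """
    weights = {}
    for lang in set(languages):
        lang_labels = [y for l, y in zip(languages, labels) if l == lang]
        counts = Counter(lang_labels)
        n = len(lang_labels)
        # "Balanced"-style heuristic: n_samples / (n_classes * class_count),
        # computed within each language rather than over the pooled dataset.
        weights[lang] = torch.tensor(
            [n / (num_labels * counts.get(c, 1)) for c in range(num_labels)],
            dtype=torch.float,
        )
    return weights


def weighted_cross_entropy(logits, labels, languages, lang_weights):
    """Cross-entropy where each example is scaled by its (language, label) weight."""
    per_example = F.cross_entropy(logits, labels, reduction="none")
    scale = torch.stack(
        [lang_weights[lang][y] for lang, y in zip(languages, labels.tolist())]
    )
    return (per_example * scale).mean()
```

In this sketch the weights would be computed once over the training set and then used in place of an unweighted loss during fine-tuning; the point is only that the weighting is conditioned on language, in contrast to standard class weighting over the pooled data.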
Anthology ID: 2024.findings-eacl.157
Volume: Findings of the Association for Computational Linguistics: EACL 2024
Month: March
Year: 2024
Address: St. Julian’s, Malta
Editors: Yvette Graham, Matthew Purver
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 2368–2376
URL: https://aclanthology.org/2024.findings-eacl.157
Cite (ACL): Vincent Jung and Lonneke van der Plas. 2024. Understanding the effects of language-specific class imbalance in multilingual fine-tuning. In Findings of the Association for Computational Linguistics: EACL 2024, pages 2368–2376, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal): Understanding the effects of language-specific class imbalance in multilingual fine-tuning (Jung & van der Plas, Findings 2024)
PDF: https://aclanthology.org/2024.findings-eacl.157.pdf
Video: https://aclanthology.org/2024.findings-eacl.157.mp4