On the Scaling Laws of Geographical Representation in Language Models

Nathan Godey, Éric de la Clergerie, Benoît Sagot


Abstract
Language models have long been shown to embed geographical information in their hidden representations. This line of work has recently been revisited by extending the result to Large Language Models (LLMs). In this paper, we bridge the gap between well-established and recent literature by observing how geographical knowledge evolves as language models are scaled. We show that geographical knowledge is observable even in tiny models, and that it scales consistently with model size. Notably, we observe that larger language models do not mitigate the geographical bias inherent to the training data.
Anthology ID:
2024.lrec-main.1087
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
12416–12422
URL:
https://aclanthology.org/2024.lrec-main.1087
Cite (ACL):
Nathan Godey, Éric de la Clergerie, and Benoît Sagot. 2024. On the Scaling Laws of Geographical Representation in Language Models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 12416–12422, Torino, Italia. ELRA and ICCL.
Cite (Informal):
On the Scaling Laws of Geographical Representation in Language Models (Godey et al., LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.1087.pdf