Anisotropy Is Inherent to Self-Attention in Transformers

Nathan Godey, Éric Clergerie, Benoît Sagot


Abstract
The representation degeneration problem is widely observed in self-supervised learning methods based on Transformers. In NLP, it takes the form of anisotropy, a property of hidden representations that makes them unexpectedly close to each other in terms of angular distance (cosine similarity). Recent works suggest that anisotropy is a consequence of optimizing the cross-entropy loss on long-tailed distributions of tokens. In this paper, we show that anisotropy can also be observed empirically in language models with specific objectives that should not suffer directly from the same consequences. We also show that the anisotropy problem extends to Transformers trained on other modalities. Our observations suggest that anisotropy may actually be inherent to Transformer-based models.
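Anisotropy, as described in the abstract, is commonly quantified as the average pairwise cosine similarity between hidden representations: values near 0 indicate an isotropic (direction-uniform) distribution, while values near 1 indicate that vectors cluster around a shared direction. The sketch below illustrates this measurement on synthetic vectors; it is a minimal illustration of the metric, not the authors' exact experimental setup.

```python
import numpy as np

def mean_cosine_similarity(h):
    """Average pairwise cosine similarity of a set of hidden vectors.

    Values near 1 indicate strong anisotropy (vectors share a common
    direction); values near 0 indicate an isotropic distribution.
    """
    h = np.asarray(h, dtype=float)
    unit = h / np.linalg.norm(h, axis=1, keepdims=True)  # unit sphere
    sims = unit @ unit.T                   # all pairwise cosine similarities
    n = len(h)
    off_diag = sims.sum() - np.trace(sims)  # drop self-similarities (all 1)
    return off_diag / (n * (n - 1))

rng = np.random.default_rng(0)
isotropic = rng.normal(size=(512, 64))     # zero-mean Gaussian cloud
anisotropic = isotropic + 5.0              # shifted: shared dominant direction
print(mean_cosine_similarity(isotropic))   # near 0
print(mean_cosine_similarity(anisotropic)) # near 1
```

Shifting the Gaussian cloud away from the origin gives all vectors a common direction, which drives the mean cosine similarity toward 1, mimicking the degenerate "narrow cone" geometry reported for Transformer hidden states.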
Anthology ID:
2024.eacl-long.3
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
35–48
URL:
https://aclanthology.org/2024.eacl-long.3
Cite (ACL):
Nathan Godey, Éric Clergerie, and Benoît Sagot. 2024. Anisotropy Is Inherent to Self-Attention in Transformers. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 35–48, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Anisotropy Is Inherent to Self-Attention in Transformers (Godey et al., EACL 2024)
PDF:
https://aclanthology.org/2024.eacl-long.3.pdf
Video:
https://aclanthology.org/2024.eacl-long.3.mp4