Do Neural Language Models Inferentially Compose Concepts the Way Humans Can?

Amilleah Rodriguez, Shaonan Wang, Liina Pylkkänen


Abstract
While compositional interpretation is the core of language understanding, humans also derive meaning via inference. For example, while the phrase “the blue hat” introduces a blue hat into the discourse via the direct composition of “blue” and “hat,” the same discourse entity is introduced by the phrase “the blue color of this hat” despite the absence of any local composition between “blue” and “hat.” Instead, we infer that if the color is blue and it belongs to the hat, the hat must be blue. We tested the performance of neural language models and humans on such inferentially driven conceptual compositions, eliciting probability estimates for the noun in a minimally composed phrase, “This blue hat,” following contexts that had introduced the relevant adjective-noun combination either syntactically or inferentially. Surprisingly, our findings reveal significant disparities between model performance and human judgments. Among the eight models evaluated, RoBERTa, BERT-large, and GPT-2 resembled human responses most closely, while the other models struggled to identify the compositions in the provided contexts. Our study suggests that language models and humans may rely on different strategies for representing and composing lexical items across sentence structure. All data and code are accessible at https://github.com/wangshaonan/BlueHat.
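To make the probing setup concrete, here is a minimal sketch of how one might elicit such a next-word probability from GPT-2 with the Hugging Face transformers library. This is not the authors' released code (see the repository above); the model choice, the `noun_probability` helper, and the context sentences are illustrative assumptions, not the paper's stimuli.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is one of the eight models evaluated in the paper; any causal LM
# from the transformers hub could be substituted here.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def noun_probability(context: str, prefix: str, noun: str) -> float:
    """P(noun | context + prefix) under the model's next-token distribution."""
    input_ids = tokenizer(f"{context} {prefix}", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]  # logits for the next token
    probs = torch.softmax(logits, dim=-1)
    # Leading space so GPT-2's BPE maps the noun to its mid-sentence token.
    noun_id = tokenizer.encode(f" {noun}")[0]
    return probs[noun_id].item()

# The same blue-hat concept, introduced syntactically vs. inferentially
# (placeholder contexts, not the paper's actual stimuli):
syntactic = "She bought the blue hat yesterday."
inferential = "She admired the blue color of this hat."
for ctx in (syntactic, inferential):
    print(f"{ctx} -> P(hat) = {noun_probability(ctx, 'This blue', 'hat'):.4f}")
```

For masked models such as BERT-large or RoBERTa, the analogous estimate would come from placing a mask token at the noun position and reading off its fill probability rather than a next-token distribution.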
Anthology ID: 2024.lrec-main.472
Volume: Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month: May
Year: 2024
Address: Torino, Italia
Editors: Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues: LREC | COLING
Publisher: ELRA and ICCL
Pages: 5309–5314
URL: https://aclanthology.org/2024.lrec-main.472
Cite (ACL): Amilleah Rodriguez, Shaonan Wang, and Liina Pylkkänen. 2024. Do Neural Language Models Inferentially Compose Concepts the Way Humans Can?. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 5309–5314, Torino, Italia. ELRA and ICCL.
Cite (Informal): Do Neural Language Models Inferentially Compose Concepts the Way Humans Can? (Rodriguez et al., LREC-COLING 2024)
PDF: https://aclanthology.org/2024.lrec-main.472.pdf