Can Language Models Learn Embeddings of Propositional Logic Assertions?

Nurul Fajrin Ariyani, Zied Bouraoui, Richard Booth, Steven Schockaert


Abstract
Natural language offers an appealing alternative to formal logics as a vehicle for representing knowledge. However, using natural language means that standard methods for automated reasoning can no longer be used. A popular solution is to use transformer-based language models (LMs) to reason directly about knowledge expressed in natural language, but this has two important limitations. First, the set of premises is often too large to be processed directly by the LM. This means that we need a retrieval strategy that can select the most relevant premises when trying to infer some conclusion. Second, LMs have been found to learn shortcuts and thus lack robustness, casting doubt on the extent to which they actually understand the knowledge that is expressed. Given these limitations, we explore the following alternative: rather than using LMs to perform reasoning directly, we use them to learn embeddings of individual assertions. Reasoning is then carried out by manipulating the learned embeddings. We show that this strategy is feasible to some extent, while also highlighting the limitations of directly fine-tuning LMs to learn the required embeddings.
Anthology ID:
2024.lrec-main.246
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
2766–2776
URL:
https://aclanthology.org/2024.lrec-main.246
Cite (ACL):
Nurul Fajrin Ariyani, Zied Bouraoui, Richard Booth, and Steven Schockaert. 2024. Can Language Models Learn Embeddings of Propositional Logic Assertions?. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 2766–2776, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Can Language Models Learn Embeddings of Propositional Logic Assertions? (Ariyani et al., LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.246.pdf