Knowledge Editing for Large Language Models

Ningyu Zhang, Yunzhi Yao, Shumin Deng


Abstract
Even with their impressive abilities, Large Language Models (LLMs) such as ChatGPT are not immune to issues of factual fallacy and logical inconsistency. Concretely, the key concern is how to seamlessly update such LLMs to correct mistakes without resorting to exhaustive retraining or continual training, both of which can demand significant computational resources and time. The capability to edit LLMs therefore offers an efficient way to alter a model’s behavior, notably within a distinct area of interest, without negatively impacting its performance on other tasks. Through this tutorial, we strive to acquaint interested NLP researchers with recent and emerging techniques for editing LLMs. Specifically, we aim to present a systematic and up-to-date overview of cutting-edge methods, supplemented with practical tools, and unveil new research opportunities for our audience. All the valuable resources can be accessed at https://github.com/zjunlp/KnowledgeEditingPapers.
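To make the idea concrete, below is a minimal, self-contained sketch (not taken from the tutorial itself) of the rank-one "locate-then-edit" intuition behind several of the editing methods the tutorial surveys: a single weight matrix is nudged so that one key-to-value association changes, while inputs orthogonal to the edited key are left untouched. All variable names and the toy dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch

torch.manual_seed(0)

d = 16                       # toy hidden size (illustrative assumption)
W = torch.randn(d, d)        # stand-in for an MLP weight storing key -> value associations

k = torch.randn(d)           # "key": representation of the edited subject
k = k / k.norm()             # normalize so that k @ k == 1
v_new = torch.randn(d)       # desired new "value" encoding the corrected fact

# Rank-one update: after the edit, W_new @ k == v_new, while inputs
# orthogonal to k are mapped exactly as before.
residual = v_new - W @ k
W_new = W + torch.outer(residual, k)

# The edited association is now recalled (up to float error).
print(torch.allclose(W_new @ k, v_new, atol=1e-5))            # True

# An input orthogonal to the edited key is unaffected by the update.
r = torch.randn(d)
k_orth = r - (r @ k) * k
print(torch.allclose(W_new @ k_orth, W @ k_orth, atol=1e-5))  # True
```

In practice, methods of this family first locate the layer and key vector responsible for a fact inside the LLM and then solve for the new value under additional constraints; the sketch only shows the core rank-one arithmetic.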
Anthology ID:
2024.lrec-tutorials.6
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024): Tutorial Summaries
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Roman Klinger, Naoaki Okazaki, Nicoletta Calzolari, Min-Yen Kan
Venue:
LREC
Publisher:
ELRA and ICCL
Pages:
33–41
URL:
https://aclanthology.org/2024.lrec-tutorials.6
Cite (ACL):
Ningyu Zhang, Yunzhi Yao, and Shumin Deng. 2024. Knowledge Editing for Large Language Models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024): Tutorial Summaries, pages 33–41, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Knowledge Editing for Large Language Models (Zhang et al., LREC 2024)
PDF:
https://aclanthology.org/2024.lrec-tutorials.6.pdf