Coarse-Tuning for Ad-hoc Document Retrieval Using Pre-trained Language Models

Atsushi Keyaki, Ribeka Keyaki


Abstract
Fine-tuning in information retrieval systems built on pre-trained language models (PLM-based IR) must learn query representations and query-document relations, in addition to the downstream task itself. This study introduces coarse-tuning, an intermediate learning stage that bridges pre-training and fine-tuning. By learning query representations and query-document relations during coarse-tuning, we aim to lighten the load of fine-tuning and improve learning on downstream IR tasks. We propose Query-Document Pair Prediction (QDPP), a coarse-tuning objective that predicts whether a query-document pair is appropriate. Evaluation experiments show that the proposed method significantly improves MRR and/or nDCG@5 on four ad-hoc document retrieval datasets. Furthermore, results on a query prediction task suggest that coarse-tuning facilitates learning of query representations and query-document relations.
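
The abstract only names the QDPP objective, so the following is a minimal sketch of one way such a coarse-tuning stage could be framed: binary classification over query-document pairs with a PLM encoder, run before task-specific fine-tuning. The model choice (`bert-base-uncased`), the Hugging Face transformers setup, the 0/1 label scheme, and the `qdpp_step` helper are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of Query-Document Pair Prediction (QDPP) as a
# coarse-tuning objective: classify [CLS] query [SEP] document pairs as
# appropriate (1) or not (0). Model, labels, and data format are assumed
# for illustration; they are not taken from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # binary: appropriate pair or not
)

def qdpp_step(query, document, label, optimizer):
    """One coarse-tuning step on a single query-document pair."""
    # Encode the pair as a single sequence: query and document segments.
    inputs = tokenizer(query, document, truncation=True,
                       padding=True, return_tensors="pt")
    loss = model(**inputs, labels=torch.tensor([label])).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
# Positive example: a query paired with a document that answers it.
loss = qdpp_step("coarse-tuning for ad-hoc retrieval",
                 "This paper introduces coarse-tuning for PLM-based IR.",
                 1, optimizer)
```

Under this reading, the coarse-tuned encoder weights would then initialize fine-tuning on the downstream retrieval task, rather than starting from the raw pre-trained checkpoint.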
Anthology ID:
2024.lrec-main.303
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
3413–3421
URL:
https://aclanthology.org/2024.lrec-main.303
Cite (ACL):
Atsushi Keyaki and Ribeka Keyaki. 2024. Coarse-Tuning for Ad-hoc Document Retrieval Using Pre-trained Language Models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 3413–3421, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Coarse-Tuning for Ad-hoc Document Retrieval Using Pre-trained Language Models (Keyaki & Keyaki, LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.303.pdf