Taiji Li


2024

Dynamic Knowledge Prompt for Chest X-ray Report Generation
Shenshen Bu | Yujie Song | Taiji Li | Zhiming Dai
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Automatic generation of radiology reports can relieve the burden on radiologists. In the radiology domain, biased datasets and the sparse features of chest X-ray images make report generation difficult. Many approaches strive to integrate prior information to enhance generation, but they fail to dynamically utilize pulmonary lesion knowledge at the instance level. To alleviate this problem, we propose a novel Dynamic Knowledge Prompt (DKP) framework for chest X-ray report generation. The DKP dynamically incorporates pulmonary lesion information at the instance level to facilitate report generation. Initially, we design a knowledge prompt for each pulmonary lesion using numerous radiology reports. The DKP then uses an anomaly detector to generate the dynamic knowledge prompt by extracting discriminative lesion features from the corresponding X-ray image. Finally, the knowledge prompt is encoded and fused with hidden states extracted from the decoder to form multi-modal features that guide visual features in generating reports. Extensive experiments on the public MIMIC-CXR and IU X-Ray datasets show that our approach achieves state-of-the-art performance.
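Below is a minimal, hypothetical sketch of the instance-level prompt idea described in the abstract: an anomaly detector scores each pulmonary lesion from pooled image features, the knowledge prompts of the detected lesions are assembled into a dynamic prompt, encoded, and fused with decoder hidden states via cross-attention. All module, function, and parameter names (DynamicKnowledgePrompt, anomaly_head, fuse, threshold) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of dynamic knowledge prompt selection and fusion.
# Not the authors' code; names and shapes are assumptions for illustration.
import torch
import torch.nn as nn

class DynamicKnowledgePrompt(nn.Module):
    def __init__(self, lesion_prompts, text_encoder, feat_dim, hidden_dim):
        super().__init__()
        self.lesion_prompts = lesion_prompts        # one prompt string per lesion type
        self.text_encoder = text_encoder            # any text encoder: list[str] -> (B, T, hidden_dim)
        self.anomaly_head = nn.Linear(feat_dim, len(lesion_prompts))  # lesion presence scores
        # Cross-attention used here as a generic fusion module (assumes hidden_dim % 8 == 0).
        self.fuse = nn.MultiheadAttention(hidden_dim, num_heads=8, batch_first=True)

    def forward(self, image_feats, decoder_hidden, threshold=0.5):
        # image_feats: (B, feat_dim) pooled visual features
        # decoder_hidden: (B, L, hidden_dim) hidden states from the report decoder
        scores = torch.sigmoid(self.anomaly_head(image_feats))        # (B, num_lesions)
        prompts = []
        for b in range(scores.size(0)):
            active = (scores[b] > threshold).nonzero(as_tuple=True)[0].tolist()
            # Assemble the instance-level dynamic prompt from detected lesions.
            prompts.append(" ".join(self.lesion_prompts[i] for i in active) or "no finding")
        prompt_emb = self.text_encoder(prompts)                       # (B, T, hidden_dim)
        # Fuse decoder states with the encoded dynamic prompt (cross-attention).
        fused, _ = self.fuse(decoder_hidden, prompt_emb, prompt_emb)
        return fused                                                  # multi-modal features guiding generation
```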

Improving Faithfulness of Large Language Models in Summarization via Sliding Generation and Self-Consistency
Taiji Li | Zhi Li | Yin Zhang
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Although large language models (LLMs) have demonstrated impressive performance on various tasks, they still suffer from the factual inconsistency problem known as hallucination. For instance, LLMs occasionally generate content that diverges from the source article, and they prefer to extract information that appears at the beginning and end of the context, especially in long document summarization. Inspired by these findings, we propose to improve the faithfulness of LLMs in summarization by impelling them to process the entire article more fairly and faithfully. We present a novel summary generation strategy, SliSum, which exploits the ideas of sliding windows and self-consistency. Specifically, SliSum divides the source article into overlapping windows and utilizes the LLM to generate local summaries for the content in each window. Finally, SliSum aggregates all local summaries using a clustering and majority voting algorithm to produce a more faithful summary of the entire article. Extensive experiments demonstrate that SliSum significantly improves the faithfulness of diverse LLMs, including LLaMA-2, Claude-2, and GPT-3.5, in both short and long text summarization, while maintaining their fluency and informativeness and without requiring additional fine-tuning or resources. We further conduct qualitative and quantitative studies to investigate why SliSum works and how its hyperparameters affect performance.
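The following is a minimal sketch of the SliSum idea as described in the abstract: split the article into overlapping windows, summarize each window with an LLM, then cluster near-duplicate sentences across windows and keep those supported by a majority. The function and parameter names (llm_summarize, window_size, stride, sim_threshold, min_votes) and the greedy clustering are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch of a sliding-window + self-consistency summarizer.
from difflib import SequenceMatcher
from typing import Callable, List

def sliding_windows(words: List[str], window_size: int, stride: int) -> List[str]:
    """Split a tokenized article into overlapping text windows."""
    return [" ".join(words[i:i + window_size])
            for i in range(0, max(len(words) - window_size, 0) + 1, stride)]

def similar(a: str, b: str) -> float:
    """Cheap lexical similarity used as a stand-in for sentence clustering."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def slisum(article: str,
           llm_summarize: Callable[[str], str],  # any LLM wrapper: text -> summary string
           window_size: int = 800,
           stride: int = 400,
           sim_threshold: float = 0.6,
           min_votes: int = 2) -> str:
    words = article.split()
    windows = sliding_windows(words, window_size, stride)
    # 1) Generate a local summary for every overlapping window.
    local_sents: List[str] = []
    for w in windows:
        summary = llm_summarize(w)
        local_sents.extend(s.strip() for s in summary.split(".") if s.strip())
    # 2) Greedily cluster near-duplicate sentences produced from different windows.
    clusters: List[List[str]] = []
    for sent in local_sents:
        for cluster in clusters:
            if similar(sent, cluster[0]) >= sim_threshold:
                cluster.append(sent)
                break
        else:
            clusters.append([sent])
    # 3) Majority voting: keep statements supported by enough windows.
    kept = [c[0] for c in clusters if len(c) >= min_votes]
    return ". ".join(kept) + "." if kept else ""
```

Overlapping windows force every part of the article to be summarized at least once, and the voting step keeps only statements that remain consistent across windows, which is the self-consistency intuition behind the method.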