Vijay Srinivasan


2024

Explicit over Implicit: Explicit Diversity Conditions for Effective Question Answer Generation
Vikas Yadav | Hyuk joon Kwon | Vijay Srinivasan | Hongxia Jin
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Question Answer Generation (QAG) is an effective data augmentation technique for improving the accuracy of question answering systems, especially in low-resource domains. While recent pretrained and large language model-based QAG methods have made substantial progress, they face the critical issue of redundant QA pair generation, which harms downstream QA systems. Implicit diversity techniques such as sampling and diverse beam search are proven effective solutions but often yield limited diversity. We present explicit diversity conditions for QAG, focusing on spatial aspects, question types, and entities, substantially increasing diversity in QA generation. Our work emphasizes the need for explicit diversity conditions when generating diverse synthetic question-answer data, showing significant improvements on downstream QA tasks over existing implicit diversity techniques. In particular, QA pairs generated under explicit diversity conditions yield an average 4.1% exact match and 4.5% F1 improvement over implicit sampling techniques on SQuAD-DU. The need for explicit diversity conditions is even more pronounced on low-resource datasets (SubjQA), where average QA performance improvements are ~12% EM.
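
A minimal sketch of the idea, assuming a prompt-conditioned seq2seq QAG model (e.g., a fine-tuned T5): rather than relying on sampling temperature for variety, each generation call is explicitly conditioned on a (document position, question type, answer entity) triple. The prompt format and helper names below are illustrative assumptions, not the paper's released implementation.

```python
# Illustrative sketch of explicit diversity conditioning for QAG.
# Prompt format and function names are assumptions, not the authors' code.
from itertools import product

QUESTION_TYPES = ["what", "who", "when", "where", "why", "how"]

def split_positions(document, n_segments=3):
    """Split a document into roughly equal spans so QA pairs are drawn
    from different spatial regions, not just the opening sentences."""
    words = document.split()
    size = max(1, len(words) // n_segments)
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def build_conditioned_prompts(document, entities):
    """Enumerate explicit (position, question type, entity) conditions and
    render one QAG prompt per condition; a seq2seq model would consume
    each prompt as a separate generation call."""
    prompts = []
    for segment, qtype in product(split_positions(document), QUESTION_TYPES):
        for entity in entities:
            if entity in segment:  # only pair entities with spans containing them
                prompts.append(
                    f"generate question: type: {qtype} | answer: {entity} "
                    f"| context: {segment}"
                )
    return prompts

if __name__ == "__main__":
    doc = ("Marie Curie won the Nobel Prize in Physics in 1903. "
           "She later won a second Nobel Prize, in Chemistry, in 1911, "
           "becoming the first person to win in two different sciences.")
    for prompt in build_conditioned_prompts(doc, ["Marie Curie", "1903", "Chemistry"])[:4]:
        print(prompt)
```

Because each condition is enumerated rather than sampled, every generated QA pair differs by construction in at least one of position, question type, or target entity, which is the source of the diversity gain over implicit techniques.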

2022

Explainable Slot Type Attentions to Improve Joint Intent Detection and Slot Filling
Kalpa Gunaratna | Vijay Srinivasan | Akhila Yerukola | Hongxia Jin
Findings of the Association for Computational Linguistics: EMNLP 2022

Joint intent detection and slot filling is a key research topic in natural language understanding (NLU). Existing joint intent detection and slot filling systems analyze and compute features collectively for all slot types and, importantly, have no way to explain the slot filling model's decisions. In this work, we propose a novel approach that (i) learns to generate additional slot-type-specific features to improve accuracy and (ii) provides explanations for slot filling decisions, for the first time in a joint NLU model. We apply additional constrained supervision using a set of binary classifiers for the slot-type-specific feature learning, ensuring that appropriate attention weights are learned to explain slot filling decisions for utterances. Our model is inherently explainable and does not need any post-hoc processing. We evaluate our approach on two widely used datasets and show accuracy improvements. Moreover, we provide a detailed analysis of the slot explainability.
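
A minimal PyTorch sketch of the mechanism described above, under illustrative assumptions about the architecture (encoder choice, dimensions, and loss wiring are not the paper's exact design): one attention map per slot type produces slot-type-specific features, and a per-slot-type binary presence classifier supplies the constrained supervision that lets those attention weights be read directly as explanations.

```python
# Illustrative sketch, not the authors' released model.
import torch
import torch.nn as nn

class ExplainableJointNLU(nn.Module):
    """Joint intent detection + slot filling with one attention map per
    slot type; the attention weights double as explanations."""

    def __init__(self, vocab_size, hidden, num_intents, num_slot_types):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden // 2, bidirectional=True,
                               batch_first=True)
        # One learned query per slot type drives that type's own attention.
        self.slot_queries = nn.Parameter(torch.randn(num_slot_types, hidden))
        # Binary "is this slot type present?" head: the constrained
        # supervision that shapes the attention into explanations.
        self.presence = nn.Linear(hidden, 1)
        self.intent_clf = nn.Linear(hidden, num_intents)
        # The token tagger sees shared states plus slot-type-specific scores.
        self.slot_tagger = nn.Linear(hidden + num_slot_types,
                                     num_slot_types + 1)  # +1 for "O"

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))          # (B, T, H)
        # Per-slot-type attention over tokens: (B, T, S).
        scores = torch.einsum("bth,sh->bts", h, self.slot_queries)
        attn = scores.softmax(dim=1)                        # explanation weights
        ctx = torch.einsum("bts,bth->bsh", attn, h)         # (B, S, H)
        presence_logits = self.presence(ctx).squeeze(-1)    # (B, S), BCE target
        intent_logits = self.intent_clf(h.mean(dim=1))      # (B, num_intents)
        slot_logits = self.slot_tagger(torch.cat([h, scores], dim=-1))
        return intent_logits, slot_logits, presence_logits, attn

model = ExplainableJointNLU(vocab_size=1000, hidden=128,
                            num_intents=7, num_slot_types=10)
intent, slots, presence, attn = model(torch.randint(0, 1000, (2, 12)))
```

In training, cross-entropy losses on the intent and slot logits would be combined with a binary cross-entropy loss on the presence logits; at inference, inspecting `attn[:, :, t]` shows which tokens the model attended to when deciding slot type t, with no post-hoc explanation step.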