Liwei Chen

2024

Probing Multimodal Large Language Models for Global and Local Semantic Representations
Mingxu Tao | Quzhe Huang | Kun Xu | Liwei Chen | Yansong Feng | Dongyan Zhao
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

The advancement of Multimodal Large Language Models (MLLMs) has greatly accelerated the development of applications that understand integrated texts and images. Recent works leverage image-caption datasets to train MLLMs, achieving state-of-the-art performance on image-to-text tasks. However, few studies explore which layers of MLLMs contribute most to encoding global image information, which plays a vital role in multimodal comprehension and generation. In this study, we find that the intermediate layers of models, rather than the topmost layers, encode more global semantic information: their representation vectors perform better on visual-language entailment tasks. We further probe models for local semantic representations through object recognition tasks, and find that the topmost layers may focus excessively on local information, diminishing their ability to encode global information. Our code and data are released via https://github.com/kobayashikanna01/probing_MLLM_rep.

2018

Exploiting Rich Syntactic Information for Semantic Parsing with Graph-to-Sequence Model
Kun Xu | Lingfei Wu | Zhiguo Wang | Mo Yu | Liwei Chen | Vadim Sheinin
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Existing neural semantic parsers mainly utilize a sequence encoder, i.e., a sequential LSTM, to extract word order features while neglecting other valuable syntactic information such as dependency or constituent trees. In this paper, we first propose using a syntactic graph to represent three types of syntactic information, i.e., word order, dependency, and constituency features; we then employ a graph-to-sequence model to encode the syntactic graph and decode a logical form. Experimental results on benchmark datasets show that our model is comparable to the state-of-the-art on Jobs640, ATIS, and Geo880. Experimental results on adversarial examples demonstrate that encoding more syntactic information also improves the model's robustness.

2015

Learn to Solve Algebra Word Problems Using Quadratic Programming
Lipu Zhou | Shuaixiang Dai | Liwei Chen
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

2014

Encoding Relation Requirements for Relation Extraction via Joint Inference
Liwei Chen | Yansong Feng | Songfang Huang | Yong Qin | Dongyan Zhao
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Joint Inference for Knowledge Base Population
Liwei Chen | Yansong Feng | Jinghui Mo | Songfang Huang | Dongyan Zhao
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

2012

Towards Automatic Construction of Knowledge Bases from Chinese Online Resources
Liwei Chen | Yansong Feng | Yidong Chen | Lei Zou | Dongyan Zhao
Proceedings of ACL 2012 Student Research Workshop

Explore Person Specific Evidence in Web Person Name Disambiguation
Liwei Chen | Yansong Feng | Lei Zou | Dongyan Zhao
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning