Andrea Amelio Ravelli


2024

Specifying Genericity through Inclusiveness and Abstractness Continuous Scales
Claudia Collacciani | Andrea Amelio Ravelli | Marianna Bolognesi
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

This paper introduces a novel annotation framework for the fine-grained modeling of Noun Phrases’ (NPs) genericity in natural language. The framework is designed to be simple and intuitive, making it accessible to non-expert annotators and suitable for crowd-sourced tasks. Drawing from theoretical and cognitive literature on genericity, this framework is grounded in established linguistic theory. Through a pilot study, we created a small but crucial annotated dataset of 324 sentences, serving as a foundation for future research. To validate our approach, we conducted an evaluation comparing our continuous annotations with existing binary annotations on the same dataset, demonstrating the framework’s effectiveness in capturing nuanced aspects of genericity. Our work offers a practical resource both for linguists, providing a first annotated dataset and an annotation scheme designed to build real-language datasets for studies on the semantics of genericity, and for NLP practitioners, contributing to the development of commonsense knowledge repositories valuable for enhancing various NLP applications.

2023

Coherent or Not? Stressing a Neural Language Model for Discourse Coherence in Multiple Languages
Dominique Brunato | Felice Dell’Orletta | Irene Dini | Andrea Amelio Ravelli
Findings of the Association for Computational Linguistics: ACL 2023

In this study, we investigate the capability of a Neural Language Model (NLM) to distinguish between coherent and incoherent text, where the latter has been artificially created to gradually undermine local coherence within the text. While previous research on coherence assessment using NLMs has primarily focused on English, we extend our investigation to multiple languages. We employ a consistent evaluation framework to compare the performance of monolingual and multilingual models in both in-domain and out-of-domain settings. Additionally, we explore the models’ performance in a cross-language scenario.

2018

One Event, Many Representations. Mapping Action Concepts through Visual Features
Alessandro Panunzi | Lorenzo Gregori | Andrea Amelio Ravelli
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)