Andy Luecking


2024

Dependencies over Times and Tools (DoTT)
Andy Luecking | Giuseppe Abrami | Leon Hammerla | Marc Rahn | Daniel Baumartz | Steffen Eger | Alexander Mehler
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Purpose: Based on the examples of English and German, we investigate to what extent parsers trained on modern variants of these languages can be transferred to older language stages without loss. Methods: We developed a treebank called DoTT (https://github.com/texttechnologylab/DoTT) which covers, roughly, the period from 1800 until today, in conjunction with the further development of the annotation tool DependencyAnnotator. DoTT consists of a collection of diachronic corpora enriched with dependency annotations using 3 parsers, 6 pre-trained language models, 5 newly trained models for German, and two tag sets (TIGER and Universal Dependencies). To assess how the different parsers perform on texts from different time periods, we created a gold standard sample as a benchmark. Results: We found that the parsers/models perform quite well on modern texts (document-level LAS ranging from 82.89 to 88.54) and, as expected, slightly worse on older texts (average document-level LAS 84.60 vs. 86.14), though not significantly so. For German texts, the (German) TIGER scheme achieved slightly better results than UD. Conclusion: Overall, this result speaks for the transferability of parsers to past language stages, at least dating back to around 1800. However, it is argued that this very transferability means that studies of language change in the field of dependency syntax can draw on dependency distance but miss out on some grammatical phenomena.
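To make the reported metric concrete, the following is a minimal sketch of how a labeled attachment score (LAS) can be computed from gold and predicted dependency annotations. The (head, label) pairs below are a toy example, and the per-document aggregation used for the DoTT benchmark may differ from this illustration.

```python
# Minimal LAS sketch: compare predicted (head, deprel) pairs against gold.
# The toy sentence below is invented; the paper's document-level evaluation
# may aggregate scores differently.

def las(gold_rows, pred_rows):
    """gold_rows/pred_rows: lists of (head_index, dependency_label) per token."""
    assert len(gold_rows) == len(pred_rows)
    correct = sum(
        1
        for (g_head, g_rel), (p_head, p_rel) in zip(gold_rows, pred_rows)
        if g_head == p_head and g_rel == p_rel
    )
    return 100.0 * correct / len(gold_rows) if gold_rows else 0.0

# Toy example with Universal Dependencies labels:
gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (2, "obl")]
print(f"LAS = {las(gold, pred):.2f}")  # 66.67
```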

German SRL: Corpus Construction and Model Training
Maxim Konca | Andy Luecking | Alexander Mehler
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

A semantic role-annotated resource suitable for training semantic role labeling models for German has been missing. We point out some problems of previous resources and provide a new one by means of a combined translation and alignment process: the gold standard CoNLL-2012 semantic role annotations are translated into German, and the semantic role labels are transferred via alignment models. The resulting dataset is used to train a German semantic role model. With F1-scores around 0.7, the major roles achieve competitive evaluation scores while avoiding the limitations of previous approaches. The described procedure can be applied to other languages as well.
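The translation-and-alignment step can be illustrated with a small, self-contained sketch: given English tokens with gold semantic role spans and a word alignment into a German translation, the role labels are projected onto the aligned target tokens. The sentence pair and alignment links here are made up for illustration; the paper uses trained alignment models over CoNLL-2012 data rather than this toy input.

```python
# Toy projection of semantic role labels across a word alignment.
# The sentence pair and alignment links are invented for illustration only.

en_tokens = ["The", "committee", "approved", "the", "proposal"]
de_tokens = ["Der", "Ausschuss", "genehmigte", "den", "Vorschlag"]

# Gold English roles as (label, start, end) token spans (end exclusive).
en_roles = [("ARG0", 0, 2), ("V", 2, 3), ("ARG1", 3, 5)]

# Word alignment as (source_index, target_index) pairs.
alignment = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]

def project_roles(roles, alignment):
    """Map each source role span to the target indices aligned to it."""
    src2tgt = {}
    for s, t in alignment:
        src2tgt.setdefault(s, []).append(t)
    projected = []
    for label, start, end in roles:
        tgt = sorted({t for s in range(start, end) for t in src2tgt.get(s, [])})
        if tgt:
            projected.append((label, tgt[0], tgt[-1] + 1))
    return projected

for label, start, end in project_roles(en_roles, alignment):
    print(label, de_tokens[start:end])
```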

2023

TTR at the SPA: Relating type-theoretical semantics to neural semantic pointers
Staffan Larsson | Robin Cooper | Jonathan Ginzburg | Andy Luecking
Proceedings of the 4th Natural Logic Meets Machine Learning Workshop

This paper considers how the kind of formal semantic objects used in TTR (a theory of types with records, Cooper 2013) might be related to the vector representations used in Eliasmith (2013). An advantage of doing so is that it would immediately give us a neural representation for TTR objects, since Eliasmith relates vectors to neural activity in his semantic pointer architecture (SPA). This convolution-based approach would be an alternative to the suggestion made by Cooper (2019), which is based on the phasing of neural activity. The project seems promising since all complex TTR objects are constructed from labelled sets (essentially sets of ordered pairs consisting of labels and values), which might be seen as corresponding to the representation of structured objects that Eliasmith achieves using superposition and circular convolution.
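The binding operation mentioned at the end of the abstract, circular convolution over vector representations as used in the semantic pointer architecture, can be sketched in a few lines: labels and values are assigned random vectors, each label is bound to its value by circular convolution, and a structured object is the superposition of its bound pairs. This is a generic holographic-reduced-representation sketch, not the specific encoding of TTR objects proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512  # vector dimensionality

def vec():
    """Random unit vector standing in for a semantic pointer."""
    v = rng.normal(0, 1 / np.sqrt(D), D)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution, computed via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    """Approximate unbinding: convolve c with the involution of a."""
    a_inv = np.concatenate(([a[0]], a[:0:-1]))
    return bind(c, a_inv)

# A labelled set {x = apple, y = red} as a superposition of label/value bindings.
labels = {"x": vec(), "y": vec()}
values = {"apple": vec(), "red": vec()}
obj = bind(labels["x"], values["apple"]) + bind(labels["y"], values["red"])

# Query the value bound to label "x": the highest similarity should be "apple".
probe = unbind(obj, labels["x"])
sims = {name: float(probe @ v) for name, v in values.items()}
print(max(sims, key=sims.get), sims)
```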

Towards Referential Transparent Annotations of Quantified Noun Phrases
Andy Luecking
Proceedings of the 19th Joint ACL-ISO Workshop on Interoperable Semantics (ISA-19)

Using recent developments in count noun quantification, namely Referential Transparency Theory (RTT), the basic structure for annotating quantification in the nominal domain according to RTT is presented. The paper discusses core ideas of RTT, derives the abstract annotation syntax, and exemplifies annotations of quantified noun phrases partly in comparison to QuantML.

2022

I still have Time(s): Extending HeidelTime for German Texts
Andy Luecking | Manuel Stoeckel | Giuseppe Abrami | Alexander Mehler
Proceedings of the Thirteenth Language Resources and Evaluation Conference

HeidelTime is one of the most widespread and successful tools for detecting temporal expressions in texts. Since HeidelTime's pattern matching system is based on regular expressions, it can be extended conveniently. We present such an extension for the German resources of HeidelTime: HeidelTimeExt. The extension was developed by observing false negatives in real-world texts and various time banks. The gain in coverage is 2.7% or 8.5%, depending on the admitted degree of potential overgeneralization. We describe the development of HeidelTimeExt, its evaluation on text samples from various genres, and share some linguistic observations. HeidelTimeExt can be obtained from https://github.com/texttechnologylab/heideltime.
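Because the extension works by adding regular-expression patterns, the general mechanism can be illustrated with a few simplified, hypothetical patterns for German temporal expressions. These are not HeidelTime's actual resource files, which use their own rule and normalization format.

```python
import re

# Simplified, hypothetical patterns for German temporal expressions; the real
# HeidelTime(Ext) resources use their own rule files and normalization logic.
MONTH = r"(Januar|Februar|März|April|Mai|Juni|Juli|August|September|Oktober|November|Dezember)"
PATTERNS = [
    re.compile(rf"\b\d{{1,2}}\.\s*{MONTH}\s+\d{{4}}\b"),    # 3. Oktober 1990
    re.compile(r"\bAnfang des \d{2}\. Jahrhunderts\b"),      # Anfang des 19. Jahrhunderts
    re.compile(r"\bim (Frühjahr|Sommer|Herbst|Winter) \d{4}\b"),
]

text = "Die Einheit kam am 3. Oktober 1990; der Roman spielt Anfang des 19. Jahrhunderts."
for pattern in PATTERNS:
    for match in pattern.finditer(text):
        print(match.group(0))
```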

2021

Requesting clarifications with speech and gestures
Jonathan Ginzburg | Andy Luecking
Proceedings of the 1st Workshop on Multimodal Semantic Representations (MMSR)

In multimodal natural language interaction both speech and non-speech gestures are involved in the basic mechanisms of grounding and repair. We discuss a couple of multimodal clarification requests and argue that gestures, as well as speech expressions, underlie comparable parallelism constraints. In order to make this precise, we slightly extend the formal dialogue framework KoS to also cover gestural counterparts of verbal locutionary propositions.

2016

Finding Recurrent Features of Image Schema Gestures: the FIGURE corpus
Andy Luecking | Alexander Mehler | Désirée Walther | Marcel Mauri | Dennis Kurfürst
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

The Frankfurt Image GestURE corpus (FIGURE) is introduced. The corpus data was collected in an experimental setting in which 50 naive participants spontaneously produced gestures in response to five to six terms drawn from a total of 27 stimulus terms. The stimulus terms were compiled mainly from image schemata from psycholinguistics, since such schemata provide a panoply of abstract contents derived from natural language use. The gestures have been annotated for kinetic features. FIGURE aims at finding (sets of) stable kinetic feature configurations associated with the stimulus terms. Given such configurations, they can be used for designing HCI gestures that go beyond pre-defined gesture vocabularies or touchpad gestures. It is found, for instance, that movement trajectories are far more informative than handshapes, speaking against purely handshape-based HCI vocabularies. Furthermore, the mean temporal durations of the associated hand and arm movements vary with the stimulus terms, indicating a dynamic dimension not covered by vocabulary-based approaches. Descriptive results are presented and related to findings from gesture studies and natural language dialogue.
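The duration finding can be illustrated with a small aggregation sketch over hypothetical annotation records (stimulus term plus movement duration); the actual FIGURE annotations contain richer kinetic features than the made-up tuples below.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical annotation records: (stimulus_term, movement_duration_seconds).
# The real FIGURE corpus annotates richer kinetic features (handshape,
# trajectory, etc.); these tuples only illustrate the aggregation step.
records = [
    ("CONTAINER", 1.8), ("CONTAINER", 2.1),
    ("PATH", 2.9), ("PATH", 3.4),
    ("BALANCE", 1.2),
]

durations = defaultdict(list)
for term, seconds in records:
    durations[term].append(seconds)

for term, values in sorted(durations.items()):
    print(f"{term:10s} mean duration: {mean(values):.2f} s (n={len(values)})")
```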

TGermaCorp – A (Digital) Humanities Resource for (Computational) Linguistics
Andy Luecking | Armin Hoenen | Alexander Mehler
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

TGermaCorp is a German text corpus whose primary sources are collected from German literary texts dating from the sixteenth century to the present. The corpus is intended to represent its target language (German) in syntactic, lexical, stylistic and chronological diversity. For this purpose, it is hand-annotated on several linguistic layers, including POS, lemma, named entities, multiword expressions, clauses, sentences and paragraphs. In order to situate TGermaCorp in comparison to more homogeneous corpora of contemporary everyday language, quantitative assessments of syntactic and lexical diversity are provided. In this respect, TGermaCorp contributes to establishing characterising features for resource descriptions, which are needed for meaningful comparisons of the ever-growing number of natural language resources. The assessments confirm the special role of proper names, whose distribution in texts may influence lexical and syntactic diversity measures in rather trivial ways. TGermaCorp will be made available via hucompute.org.
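One common way to quantify lexical diversity is a (moving-average) type-token ratio. The sketch below is a generic illustration of such a measure under that assumption; it is not necessarily the specific diversity assessment used for TGermaCorp.

```python
# Generic lexical-diversity sketch: plain and moving-average type-token ratio.
# Illustrative only; the diversity measures used for TGermaCorp may differ.

def ttr(tokens):
    """Plain type-token ratio: number of distinct tokens over token count."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def mattr(tokens, window=100):
    """Moving-average TTR, which reduces the text-length bias of plain TTR."""
    if len(tokens) <= window:
        return ttr(tokens)
    windows = (tokens[i:i + window] for i in range(len(tokens) - window + 1))
    values = [ttr(w) for w in windows]
    return sum(values) / len(values)

tokens = "der Hund jagt die Katze und die Katze jagt den Hund".split()
print(f"TTR   = {ttr(tokens):.3f}")
print(f"MATTR = {mattr(tokens, window=5):.3f}")
```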