Yuki Yasuda


2020

NHK_STRL at WNUT-2020 Task 2: GATs with Syntactic Dependencies as Edges and CTC-based Loss for Text Classification
Yuki Yasuda | Taichi Ishiwatari | Taro Miyazaki | Jun Goto
Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)

The outbreak of COVID-19 has greatly impacted our daily lives. In these circumstances, it is important to keep up with the latest information to avoid excessive fear and panic, and extracting information from social networking sites is an effective way to do so. In this paper, we describe a method to identify whether a tweet related to COVID-19 is informative or not, which can help in gathering new information. The key features of our method are its use of graph attention networks to encode syntactic dependencies and word positions in the sentence, and a loss function based on connectionist temporal classification (CTC) that can learn a label for each token without per-token reference data. Experimental results show that the proposed method achieved an F1 score of 0.9175, outperforming baseline methods.
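The two key ideas in the abstract lend themselves to a short sketch. The following is a minimal, illustrative PyTorch sketch (not the authors' code) of a single graph-attention layer whose attention is restricted to syntactic-dependency edges, followed by the standard CTC loss applied to per-token logits; the tensor shapes, hidden size, and toy dependency graph are assumptions made purely for illustration.

# Illustrative sketch only: dependency-restricted graph attention + CTC loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DependencyGATLayer(nn.Module):
    """One attention head over token nodes; edges follow dependency arcs."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)
        self.attn = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (num_tokens, dim); adj: (num_tokens, num_tokens), 1 where a
        # dependency edge (or self-loop) connects token i and token j.
        z = self.proj(h)                                        # (N, dim)
        n = z.size(0)
        pairs = torch.cat(
            [z.unsqueeze(1).expand(n, n, -1),
             z.unsqueeze(0).expand(n, n, -1)], dim=-1)          # (N, N, 2*dim)
        scores = F.leaky_relu(self.attn(pairs)).squeeze(-1)     # (N, N)
        scores = scores.masked_fill(adj == 0, float("-inf"))    # keep only edges
        alpha = torch.softmax(scores, dim=-1)                   # attention per edge
        return alpha @ z                                        # aggregated features

# Toy usage: 5 tokens, hidden size 16, one dependency arc plus self-loops.
tokens = torch.randn(5, 16)
adj = torch.eye(5)
adj[0, 1] = adj[1, 0] = 1.0
out = DependencyGATLayer(16)(tokens, adj)

# CTC-style loss over per-token logits: no gold label is needed for each
# token, only a shorter target label sequence (class 0 reserved as blank).
logits = torch.randn(5, 1, 4).log_softmax(-1)    # (time, batch, classes)
targets = torch.tensor([[2, 3]])                 # batch of label sequences
loss = nn.CTCLoss(blank=0)(logits, targets,
                           input_lengths=torch.tensor([5]),
                           target_lengths=torch.tensor([2]))

The masking step is what turns an ordinary attention layer into a graph attention layer: tokens only attend to their syntactic neighbors rather than to every other token in the sentence.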

Relation-aware Graph Attention Networks with Relational Position Encodings for Emotion Recognition in Conversations
Taichi Ishiwatari | Yuki Yasuda | Taro Miyazaki | Jun Goto
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Interest in emotion recognition in conversations (ERC) has been increasing in various fields because it can be used to analyze user behavior and detect fake news. Many recent ERC methods use graph-based neural networks to take the relationships between the utterances of the speakers into account. In particular, the state-of-the-art method considers self- and inter-speaker dependencies in conversations by using relational graph attention networks (RGAT). However, graph-based neural networks do not take sequential information into account. In this paper, we propose relational position encodings that provide RGAT with sequential information reflecting the relational graph structure. Accordingly, our RGAT model can capture both speaker dependencies and sequential information. Experiments on four ERC datasets show that our model is effective at recognizing emotions expressed in conversations and empirically outperforms the state-of-the-art on all of the benchmark datasets.
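As a rough illustration of the idea described in the abstract, here is a minimal PyTorch sketch (not the authors' implementation) of a relation-aware relative-position bias added to graph-attention scores, so that attention over utterance nodes reflects both the relation type (e.g. same vs. different speaker) and the distance between utterances in the conversation; the clipping window, dimensions, and toy inputs are assumptions made for illustration.

# Illustrative sketch only: relational position bias for RGAT-style attention.
import torch
import torch.nn as nn

class RelationalPositionBias(nn.Module):
    """Learned bias b[r, d] for relation type r and clipped relative distance d."""

    def __init__(self, num_relations: int, max_dist: int = 10):
        super().__init__()
        self.max_dist = max_dist
        # Distances are clipped to [-max_dist, max_dist] -> 2*max_dist+1 buckets.
        self.bias = nn.Embedding(num_relations * (2 * max_dist + 1), 1)

    def forward(self, rel_types: torch.Tensor, positions: torch.Tensor) -> torch.Tensor:
        # rel_types: (N, N) relation id per edge; positions: (N,) utterance index.
        dist = positions.unsqueeze(0) - positions.unsqueeze(1)        # (N, N)
        dist = dist.clamp(-self.max_dist, self.max_dist) + self.max_dist
        idx = rel_types * (2 * self.max_dist + 1) + dist
        return self.bias(idx).squeeze(-1)                             # (N, N)

# Toy usage: 4 utterances, 2 relation types (same speaker / different speaker).
positions = torch.arange(4)
rel_types = torch.randint(0, 2, (4, 4))
scores = torch.randn(4, 4)                       # raw attention scores from the graph layer
scores = scores + RelationalPositionBias(2)(rel_types, positions)
attn = torch.softmax(scores, dim=-1)             # sequential order now influences attention

Because the bias is indexed by both the relation type and the signed, clipped distance between utterances, the attention weights can distinguish, for example, a speaker's own immediately preceding utterance from a distant utterance by another speaker, which is the kind of sequential information a plain graph attention layer would otherwise ignore.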