Transformer-Based Models for Automatic Identification of Argument Relations: A Cross-Domain Evaluation

Ramon Ruiz-Dolz, Jose Alemany, Stella M. Heras Barbera, Ana Garcia-Fornes

Research output: Contribution to journal › Article › peer-review

28 Citations (Scopus)

Abstract

Argument mining is defined as the task of automatically identifying and extracting argumentative components (e.g., premises, claims, etc.) and detecting the relations that hold among them (i.e., support, attack, rephrase, no relation). One of the main obstacles to this task is the scarcity of data and the small size of the publicly available corpora. In this work, we use the recently annotated US2016 debate corpus, the largest existing argument-annotated corpus, which makes it possible to explore the benefits of the most recent advances in natural language processing in a complex domain such as argument (relation) mining. We present an exhaustive analysis of the behavior of transformer-based models (i.e., BERT, XLNet, RoBERTa, DistilBERT, and ALBERT) when predicting argument relations. Finally, we evaluate the models in five different domains, with the objective of finding the least domain-dependent model. We obtain a macro F1-score of 0.70 on the US2016 evaluation corpus, and a macro F1-score of 0.61 on the Moral Maze cross-domain corpus.
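The macro F1-score reported in the abstract is the unweighted average of the per-class F1-scores over the four relation labels (support, attack, rephrase, no relation), so every class counts equally regardless of frequency. A minimal, dependency-free sketch of that metric (not the authors' code; the example data is purely illustrative):

```python
# Macro F1 over the four argument-relation labels named in the abstract.
# Label names come from the paper; function names and data are illustrative.

RELATION_LABELS = ["support", "attack", "rephrase", "no relation"]

def macro_f1(y_true, y_pred, labels=RELATION_LABELS):
    """Unweighted mean of per-class F1-scores over all relation labels."""
    f1_scores = []
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)

# Toy example with hypothetical gold and predicted relations:
gold = ["support", "support", "attack", "no relation"]
pred = ["support", "attack", "attack", "no relation"]
print(round(macro_f1(gold, pred), 4))  # → 0.5833
```

Because the average is unweighted, a rare class such as "rephrase" pulls the score down as much as the dominant "no relation" class, which is why macro F1 is a common choice for imbalanced relation-classification corpora.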
Original language: English
Pages (from-to): 62-70
Number of pages: 12
Journal: IEEE Intelligent Systems
Volume: 36
Issue number: 6
Early online date: 19 Apr 2021
DOIs
Publication status: Published - 1 Nov 2021
