Abstract
Argument mining is the task of automatically identifying and extracting argumentative components (e.g., premises and claims) and detecting the relations among them (i.e., support, attack, rephrase, or no relation). One of the main obstacles when approaching this problem is the scarcity of data and the small size of the publicly available corpora. In this work, we use the recently annotated US2016 debate corpus, the largest existing argument-annotated corpus, which allows us to explore the benefits of the most recent advances in natural language processing in a complex domain like argument (relation) mining. We present an exhaustive analysis of the behavior of transformer-based models (i.e., BERT, XLNet, RoBERTa, DistilBERT, and ALBERT) when predicting argument relations. Finally, we evaluate the models in five different domains, with the objective of finding the least domain-dependent model. We obtain a macro F1-score of 0.70 on the US2016 evaluation corpus and a macro F1-score of 0.61 on the Moral Maze cross-domain corpus.
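The abstract reports results as macro F1-scores over the four relation classes (support, attack, rephrase, no relation). As a minimal illustration of that metric (with hypothetical predictions, not data from the paper), macro F1 computes a per-class F1 and averages the classes with equal weight, so rare relation types count as much as frequent ones:

```python
from collections import defaultdict

def macro_f1(y_true, y_pred, labels):
    """Average of per-class F1 scores, each class weighted equally."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted class p, but it was wrong
            fn[t] += 1  # true class t was missed
    f1s = []
    for c in labels:
        prec = tp[c] / (tp[c] + fp[c]) if (tp[c] + fp[c]) else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if (tp[c] + fn[c]) else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if (prec + rec) else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical gold/predicted relation labels for five argument pairs.
labels = ["support", "attack", "rephrase", "no-relation"]
y_true = ["support", "attack", "support", "no-relation", "rephrase"]
y_pred = ["support", "attack", "no-relation", "no-relation", "rephrase"]
print(round(macro_f1(y_true, y_pred, labels), 3))  # → 0.833
```

In practice a library implementation such as scikit-learn's `f1_score(..., average="macro")` computes the same quantity.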
| Original language | English |
|---|---|
| Pages (from-to) | 62-70 |
| Number of pages | 12 |
| Journal | IEEE Intelligent Systems |
| Volume | 36 |
| Issue number | 6 |
| Early online date | 19 Apr 2021 |
| DOIs | |
| Publication status | Published - 1 Nov 2021 |