Abstract

Argument mining (AM) is the process of identifying the argument structure contained within a text. It involves segmenting text into elementary discourse units (EDUs), distinguishing argumentative units from non-argumentative units, classifying argument components into classes such as premise and claim, identifying and labeling the argument relations between components, and identifying argument schemes.
AM approaches in the literature depend on a specific representation formalized on the basis of an underlying argumentation theory and set of structural conventions. Across these theories and conventions, the smallest unit of argument representation is the proposition, and efforts to go beyond the proposition level in AM have received less attention, even though argument structure depends on the fine-grained internal structure of propositions. Moreover, the majority of existing approaches rely on EDU-level features selected according to the underlying argumentation conventions and attain limited performance. Cross-EDU features, such as dependencies between the tokens of a candidate proposition pair, have also been used for argument mining; however, the nature of such tokens has not been studied well. Finally, the type of argument relation (AR) holding between propositions is fundamentally dependent on context, since the information linking propositions is often not explicitly stated at the proposition level. This work aims to develop robust, context-aware AM that works across heterogeneous settings by proposing a novel representation, reliable features, and context adaptation. We represent a proposition by decomposing it into four functional components: target concepts (C), aspects (A), opinions on aspects (OA), and opinions on target concepts (OC), based on our hypothesis that the argument relation between propositions is governed by the relations between these functional components. We use cross-EDU features, identified by exploiting the relations between the functional components, to construct argument structures.
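The four-component decomposition above can be pictured as a simple record per proposition. The sketch below is illustrative only: the class name, field names, and the example sentence are assumptions for exposition, not the thesis's actual data model or extraction method.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DecomposedProposition:
    """A proposition split into the four functional components
    named in the text (C, A, OA, OC). Field contents below are
    a hypothetical illustration, not output of the actual system."""
    targets: List[str] = field(default_factory=list)           # C: target concepts
    aspects: List[str] = field(default_factory=list)           # A: aspects of the targets
    opinions_aspects: List[str] = field(default_factory=list)  # OA: opinions on aspects
    opinions_targets: List[str] = field(default_factory=list)  # OC: opinions on target concepts

# Hypothetical example: "The camera of this phone is excellent."
prop = DecomposedProposition(
    targets=["phone"],
    aspects=["camera"],
    opinions_aspects=["excellent"],
)
print(prop.targets, prop.aspects)  # ['phone'] ['camera']
```

Under the stated hypothesis, ARs between two propositions would then be predicted from how their respective component records relate, rather than from the propositions as monolithic units.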
We first try a pipeline approach that uses the similarity between C and A, together with the agreement between the polarities of OC and OA, to identify ARs. Evaluated on three different corpora (AAEC, AMT, and US2016G1tv), this approach achieves F-scores of 0.77, 0.75, and 0.62, respectively. We also try end-to-end neural models in two settings: (a) a propositional encoder-decoder, where the encoder encodes the functional components of the premise and the decoder decodes those of the conclusion; and (b) a decompositional encoder-decoder, constructed from multiple transformer and LSTM blocks, where each encoder encodes one functional component of the premise and the decoder decodes one functional component of the conclusion, predicting whether the conclusion follows the premise by combining the outputs of the components. The LSTM-based decompositional model achieves F-scores of 0.79, 0.76, and 0.65, while the transformer-based decompositional model achieves F-scores of 0.80, 0.76, and 0.66 on the three datasets. We also fine-tune pre-trained BERT with the decompositional architecture for AR category prediction and achieve F-scores of 0.82, 0.80, and 0.67 on the three datasets. To identify the context required to connect propositions, we exploit a spectrum of external resources and train encoder-decoder models constructed from transformer and LSTM blocks in multiple settings. Our extended transformer architecture, which introduces an additional encoder to encode the external resource while the conventional encoder and decoder encode the premise and decode the conclusion, achieves a new state-of-the-art result with F-scores of 0.85, 0.84, and 0.71 on the three datasets. In summary, the decompositional models identify ARs more reliably than their propositional counterparts. Moreover, decompositional models equipped with external knowledge achieve better performance than models devoid of such knowledge.
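The pipeline step above can be sketched as a minimal decision rule. This is a toy stand-in, assuming token-overlap (Jaccard) similarity, a single per-proposition polarity summarizing OC/OA, and an arbitrary threshold; the thesis's actual similarity measure, polarity handling, and relation labels may differ.

```python
def jaccard(a, b):
    """Set-overlap similarity between two token lists (stand-in
    for the thesis's actual C/A similarity measure)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def predict_relation(premise, conclusion, sim_threshold=0.2):
    """Toy pipeline AR identification. A proposition is a dict with
    token lists for target concepts 'C' and aspects 'A', plus a
    'polarity' (+1/-1) summarizing its opinions. Overlapping C/A
    components with agreeing polarities -> 'support'; overlap with
    disagreeing polarities -> 'attack'; no overlap -> 'none'."""
    sim = jaccard(premise["C"] + premise["A"],
                  conclusion["C"] + conclusion["A"])
    if sim < sim_threshold:
        return "none"
    return "support" if premise["polarity"] == conclusion["polarity"] else "attack"

# Hypothetical pair: both propositions discuss a phone, positively.
premise = {"C": ["phone"], "A": ["camera"], "polarity": +1}
conclusion = {"C": ["phone"], "A": [], "polarity": +1}
print(predict_relation(premise, conclusion))  # support
```

The end-to-end decompositional models replace this hand-crafted rule with learned encoders per functional component, but the underlying intuition (component overlap plus opinion agreement drives the relation) is the same.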
Date of Award
Sponsor: Engineering and Physical Sciences Research Council
Supervisors: Chris Reed & Katarzyna Budzynska