Reason-Checking Fake News

Jacky Visser (Lead / Corresponding author), John Lawrence, Chris Reed

Research output: Contribution to journal › Article › peer-review


Abstract

While deliberate misinformation and deception are by no means new societal phenomena, the recent rise of fake news [5] and information silos [2] has become a growing international concern, with politicians, governments and media organisations regularly lamenting the issue. A remedy to this situation, we argue, could be found in using technology to strengthen people's ability to critically assess the quality of information, reasoning and argumentation. Recent empirical findings suggest that "false news spreads more than the truth because humans, not robots, are more likely to spread it" [10]. Thus, instead of continuing to focus on ways of limiting the efficacy of bots, educating human users to better recognise fake news stories could prove more effective in mitigating the potentially devastating social impact of misinformation. While technology certainly contributes to the distribution of fake news and similar attacks on reasonable decision-making and debate, we posit that technology, and argument technology in particular, can equally be employed to counterbalance these deliberately misleading or outright false reports made to look like genuine news.
Original language: English
Pages (from-to): 38-40
Number of pages: 3
Journal: Communications of the ACM
Volume: 63
Issue number: 11
Early online date: 1 Oct 2020
DOIs
Publication status: Published - Nov 2020
