October 25, 2021

Date & Time: Thursday, October 28, 9:30-11:00am.
Title: Accountable and Robust Automatic Fact Checking
Abstract:
The past decade has seen a substantial rise in the amount of mis- and disinformation online, from targeted disinformation campaigns designed to influence politics to the unintentional spreading of misinformation about public health. This development has spurred research in the area of automatic fact-checking, a knowledge-intensive and complex reasoning task. Most existing fact-checking models predict a claim’s veracity with black-box models, which often lack explanations of the reasons behind their predictions and contain hidden vulnerabilities. The lack of transparency in fact-checking systems, and in ML models more generally, has become a more pressing problem with increasing model sizes and with “The right…to obtain an explanation of the decision reached” enshrined in European law. This talk presents some first solutions for generating explanations for fact-checking models. It further examines how to assess the generated explanations using diagnostic properties, and how optimizing for these diagnostic properties can further improve the quality of the generated explanations. Finally, the talk examines how to systematically reveal vulnerabilities of black-box fact-checking models.
Bio:
Isabelle Augenstein is an Associate Professor at the University of Copenhagen, Department of Computer Science, where she heads the Copenhagen Natural Language Understanding research group as well as the Natural Language Processing section. She also co-heads the research team at CheckStep Ltd, a content moderation start-up. Her main research interests are fact checking, low-resource learning, and explainability. Prior to starting her faculty position, she was a postdoctoral researcher at University College London, and before that, a PhD student at the University of Sheffield. She currently holds a prestigious DFF Sapere Aude Research Leader fellowship on ‘Learning to Explain Attitudes on Social Media’. She is also president of the ACL Special Interest Group on Representation Learning (SIGREP).