Behavioural tests capture human prior knowledge and insights, yet there has been little exploration of how to leverage them for model training and development. This question is explored in "Evaluation and Learning with Structured Test Sets", an invited talk by Benjamin Roth of the University of Vienna, presenting joint work with Pedro Henrique Luz de Araujo. The talk is part of OFAI's 2022 Lecture Series.
Members of the public are cordially invited to attend the talk on Wednesday, 19 October at 18:30 CEST (UTC+2). Attendance is possible in person at OFAI Headquarters (Freyung 6/6/7, 1010 Vienna); wearing an FFP2 mask while on the premises is recommended. Alternatively, you may attend online via Zoom:
Meeting ID: 842 8244 2460
Talk abstract: Behavioural testing – verifying system capabilities by validating human-designed input-output pairs – is an alternative evaluation method for natural language processing systems, proposed to address the shortcomings of the standard approach: computing metrics on held-out data. While behavioural tests capture human prior knowledge and insights, there has been little exploration of how to leverage them for model training and development. With this in mind, we explore behaviour-aware learning by examining several fine-tuning schemes using HateCheck, a suite of functional tests for hate speech detection systems. To address potential pitfalls of training on data originally intended for evaluation, we train and evaluate models on different configurations of HateCheck by holding out categories of test cases, which enables us to estimate performance on potentially overlooked system properties. The fine-tuning procedure led to improvements in the classification accuracy of held-out functionalities and identity groups, suggesting that models can potentially generalise to overlooked functionalities. However, performance on held-out functionality classes and i.i.d. hate speech detection data decreased, which indicates that generalisation occurs mostly across functionalities from the same class and that the procedure led to overfitting to the HateCheck data distribution.
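The held-out-category setup described in the abstract can be sketched as a simple data-partitioning step: test cases are grouped by functionality, some functionalities are withheld from fine-tuning, and accuracy on the withheld ones estimates generalisation to overlooked system properties. The toy cases and functionality names below are hypothetical stand-ins, not actual HateCheck entries or the authors' code.

```python
# Minimal sketch (assumptions, not the authors' implementation):
# partition behavioural test cases by functionality so a model can be
# fine-tuned on some functionalities and evaluated on unseen ones.

# (text, functionality, label) -- illustrative placeholders only
cases = [
    ("example explicit slur sentence", "derogation_explicit", 1),
    ("example implied insult", "derogation_implicit", 1),
    ("quoted slur in counter-speech", "counter_quote", 0),
    ("neutral use of a group identifier", "ident_neutral", 0),
]

def split_by_functionality(cases, held_out):
    """Keep `held_out` functionalities for evaluation; train on the rest."""
    train = [c for c in cases if c[1] not in held_out]
    evaluation = [c for c in cases if c[1] in held_out]
    return train, evaluation

train, evaluation = split_by_functionality(cases, held_out={"counter_quote"})
# `train` would feed a fine-tuning loop; accuracy on `evaluation`
# estimates performance on the overlooked functionality.
```

In the talk's setting, the same partitioning is applied at different granularities (individual functionalities, functionality classes, identity groups), which is what allows comparing within-class and across-class generalisation.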
Speaker biography: Benjamin Roth is a professor in the area of deep learning & statistical NLP, leading the WWTF Vienna Research Group for Young Investigators "Knowledge-Infused Deep Learning for Natural Language Processing". Prior to this, he was an interim professor at LMU Munich. He obtained his PhD from Saarland University and did a postdoc at UMass Amherst. His research interests are the extraction of knowledge from text with statistical methods and knowledge-supervised learning.