Please note: this talk was originally scheduled for 14 February 2024 as part of OFAI's 2023 Fall Lecture Series, but it had to be rescheduled to Wednesday, 13 March 2024.
OFAI is proud to present "Bias in Language Models Illustrated by the Example of Gender", a talk by Brigitte Krenn and Stephanie Gross of the Austrian Research Institute for Artificial Intelligence.
Members of the public are cordially invited to attend the talk in person (OFAI, Freyung 6/6/7, 1010 Vienna) or via Zoom on Wednesday, 13 March 2024 at 18:30 CET (UTC+1):
Meeting ID: 842 8244 2460
You can add this event to your calendar.
Talk abstract: Bias in language models is a widely discussed topic in AI. So far, many voices have called for biases to be prevented in training data, which, however, is a futile endeavor in many real-world contexts. In this talk, the speakers argue for a different strategy, which can be summarized from an AI ethics point of view as "be aware of and transparent about your desired and undesired biases". The feasibility and technical viability of such an approach will be illustrated in the talk. In particular, experiments in fine-tuning pretrained language models with gender-biased data are presented, and the resulting outcomes are qualitatively and quantitatively analysed.
Speaker biography: Brigitte Krenn is Deputy Director of the Austrian Research Institute for Artificial Intelligence (OFAI). She has worked in natural language processing and AI since 1990. Her overall research interest lies in understanding and computationally modelling human language capability. She is a board member of the Austrian Society for Artificial Intelligence (ASAI), where she heads the Working Group on Natural Language Processing.
Stephanie Gross is a research scientist at the Austrian Research Institute for Artificial Intelligence (OFAI). She has been involved as PI and Co-PI in various national and international research projects focusing on the development, implementation, and analysis of AI-based technical systems, using quantitative as well as qualitative approaches. Her main research interests lie in the fields of natural language processing (including large language models), task-based multimodal human-human and human-robot interaction, and language learning.