OFAI 2023 Fall Lecture Series


OFAI is delighted to announce its 2023 Fall Lecture Series, featuring an eclectic lineup of internal and external speakers.

The talks are intended to familiarize attendees with the latest research developments in AI and related fields, and to forge new connections with those working in other areas. The main theme of the current series is large language models.

Lectures will take place at 18:30 Vienna time, usually every other Wednesday. All lectures will be held online via Zoom; in-person attendance at OFAI is also possible for certain lectures. Attendance is open to the public and free of charge. No registration is required.

Subscribe to our newsletter or RSS feed, or bookmark this web page, to receive further details about the individual talks.

11 October 2023 at 18:30 CEST (UTC+2)

Nafise Sadat Moosavi (University of Sheffield)

Challenges of End-to-End Reasoning in NLP

To understand human language, language models must perform various kinds of reasoning: logical reasoning, commonsense reasoning, temporal reasoning, etc. There are multiple datasets for directly evaluating each of these reasoning skills. In practice, however, these skills are required mostly within downstream applications rather than as standalone capabilities. For instance, a model may need to perform arithmetic reasoning to answer a question or to correctly summarize a table. Yet it is not clear whether a model that performs well on a dataset designed to evaluate arithmetic reasoning would also improve results on a QA dataset that requires arithmetic reasoning. We should therefore pay special attention to developing end-to-end models for downstream applications that are also capable of performing various kinds of reasoning. This presentation focuses on the challenges of end-to-end reasoning in downstream applications, with a specific emphasis on end-to-end arithmetic reasoning.

23 October 2023 at 18:30 CEST (UTC+2)

Simon Penny (University of California Irvine and Nottingham Trent University)

Skill: Know-how, Artisanal Practices and 'Higher' Cognition

Skilled practitioners attest that in their experience of skilled practice, intelligence feels like it is happening in peripersonal space, at the fingertips, on the workbench. This paper begins from the premise that skilled embodied practices are intelligence - as much improvisation as hylomorphism (Ingold) - enacted amongst tools, materials and cognitive ecologies. As a lifelong practitioner, I seek to remain grounded in practice while pursuing an interdisciplinary inquiry into the concept of skill, engaging philosophy, psychology, anthropology, cognitive science and neuroscience. The experience of skilled practices destabilises the (received) skill-intelligence binary, which is seen as a corollary of the mind-body binary. A dualist framework that distinguishes ‘higher’ and ‘lower’ cognition and valorises abstraction is not conducive to optimal discussion of skill. I will discuss the historical construction of this privileging of abstraction in philosophy and in the theorisation of cognition. A different framework will be suggested, drawing upon concepts of know-how (Ryle), the ‘performative idiom’ (Pickering), enactivism (Varela, Thompson, Di Paolo), pre-reflective awareness (Legrand), epistemic action (Kirsh), and cognitive ecologies (Hutchins, Sutton). Arguments from neuroscience are then marshalled, focusing on phylogenetics and proprioception, in order to build a non-dualist approach to neurophysiology that provides a more balanced theoretical framework within which to discuss skill and/as cognition. If embodied practices are taken to constitute intelligence, this has ramifications for general conceptualisations of intelligence and, in turn, for rhetorics validating artificial intelligence and claims made for interactive screen-based pedagogies.

8 November 2023 at 18:30 CET (UTC+1)

Erich Prem (University of Vienna)

Ethics of AI: Good AI Versus the Totalitarian Enforcement of Norms

The interest in the ethics of AI systems has grown significantly over the last few years, as evidenced by a growing literature on the topic and a mounting body of strategies, proposed regulations, standards, and technical approaches. In this talk, we provide an overview of some of the key ethical issues raised by AI systems, such as trolley problems or systems that talk back (ChatGPT). We review the related challenges as well as some of the proposed technical solutions, such as model cards or rules for online discourse. The talk will focus on open issues and critically discuss technical solutionism and the totalitarian tendencies of AI-based norm enforcement.

22 November 2023 at 18:30 CET (UTC+1)

Dagmar Gromann (University of Vienna)

Do Large Language Models Grasp Metaphors?

Conceptual metaphors present a powerful cognitive vehicle for transferring knowledge structures from a source to a target domain, e.g., WORDS ARE WEAPONS as in "Your words pierce my heart". Prior neural approaches focus primarily on detecting whether natural language sequences are metaphoric or literal. In this talk, I will present work on probing metaphoric knowledge in pre-trained language models. The focus is on testing their capability to predict source domains given an input sentence and a target domain, in English and Spanish. Several methods, from fine-tuning to few-shot prompting, are tested. Results show that the most common error type is the hallucination of source domains.
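
The following toy sketch illustrates the kind of few-shot probe described above: a prompt that gives a model a sentence and a target domain and asks it to complete the source domain. The exemplars, template, and test sentence are invented for illustration; they are not the prompts or data used in this work.

```python
# A toy sketch of a few-shot probe for metaphoric source domains.
# Exemplars, template, and test sentence are invented for illustration;
# they are not the prompts or data from the work presented in this talk.

EXEMPLARS = [
    # (sentence, target domain, source domain)
    ("Your words pierce my heart.", "WORDS", "WEAPONS"),
    ("She attacked every weak point in my argument.", "ARGUMENT", "WAR"),
]

TEMPLATE = (
    'Sentence: "{sentence}"\n'
    "Target domain: {target}\n"
    "Source domain: {source}\n"
)

def build_prompt(sentence: str, target: str) -> str:
    """Assemble a few-shot prompt asking a model to name the source domain."""
    shots = "".join(
        TEMPLATE.format(sentence=s, target=t, source=d) + "\n"
        for s, t, d in EXEMPLARS
    )
    # Leave the source-domain slot empty for the model to complete.
    query = TEMPLATE.format(sentence=sentence, target=target, source="").rstrip()
    return shots + query

print(build_prompt("Time is running out on us.", "TIME"))
# A model completing this prompt should name a plausible source domain;
# per the abstract, hallucinated source domains are the most common error.
```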

6 December 2023 at 18:30 CET (UTC+1)

Ivan Habernal (Paderborn University)

Privacy in Natural Language Processing: Are We There Yet?

In this talk, I will explore the challenges and concerns surrounding privacy in natural language processing (NLP) and present potential solutions to address them. I will discuss the use of anonymization and differential privacy techniques to protect sensitive information while still enabling the training of accurate NLP models. Additionally, I will emphasize the importance of transparency and reproducibility when implementing privacy-preserving solutions in NLP.
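
As a rough illustration of the differential-privacy side, the sketch below shows the core step of DP-SGD (Abadi et al., 2016), one standard technique for training models on sensitive text: each example's gradient is clipped to bound its influence, and calibrated Gaussian noise is added. This is a generic textbook sketch, not code from the talk, and the parameter values are arbitrary placeholders.

```python
import numpy as np

# Minimal sketch of one DP-SGD update (Abadi et al., 2016): clip each
# per-example gradient, sum, add Gaussian noise, average. The clipping
# norm, noise multiplier, and learning rate are arbitrary example values.

def dp_sgd_step(per_example_grads: np.ndarray,
                clip_norm: float = 1.0,
                noise_multiplier: float = 1.1,
                lr: float = 0.1) -> np.ndarray:
    """Return a differentially private parameter update.

    per_example_grads: array of shape (batch_size, num_params).
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Scale each example's gradient so its L2 norm is at most clip_norm.
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    summed = clipped.sum(axis=0)
    # Noise scale is calibrated to the clipping norm.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                             size=summed.shape)
    return -lr * (summed + noise) / len(per_example_grads)

# Example: a toy batch of 4 per-example gradients over 3 parameters.
grads = np.random.randn(4, 3)
update = dp_sgd_step(grads)
```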

13 December 2023 at 18:30 CET (UTC+1)

Klaus M. Stiefel (Silliman University and Neurolinx Research Institute)

The Energy Challenges of Artificial Superintelligence

We argue here that contemporary semiconductor computing technology poses a significant, if not insurmountable, barrier to the emergence of any artificial general intelligence system, let alone one anticipated by many to be “superintelligent”. This limit on artificial superintelligence (ASI) emerges from the energy requirements of a system that would be more intelligent than human brains but orders of magnitude less efficient in its use of energy. An ASI would have to supersede not only a single brain but a large population of brains, given the effects of collective behavior on the advancement of societies, further multiplying the energy requirement. A hypothetical ASI would likely consume orders of magnitude more energy than is available in highly industrialized nations. We estimate the energy use of ASI with an equation we term the “Erasi equation”, for the Energy Requirement for Artificial SuperIntelligence. Additional efficiency consequences will emerge from the current unfocused and scattered developmental trajectory of AI research. Taken together, these arguments suggest that the emergence of an ASI is highly unlikely in the foreseeable future based on current computer architectures, primarily due to energy constraints, with biomimicry or other new technologies being possible solutions.
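
To make the scale of the argument concrete, here is a purely illustrative back-of-envelope calculation. It is not the Erasi equation itself, whose form is given in the talk; the efficiency gap η and the population size N below are assumed placeholder values, and only the brain's roughly 20 W power draw is a well-established figure.

```latex
% Illustrative only -- NOT the speakers' Erasi equation.
% P_b : power draw of one human brain (~20 W, well established).
% \eta: assumed per-operation efficiency gap between silicon and brains.
% N   : assumed number of brains an ASI would have to supersede.
\[
  P_{\mathrm{ASI}} \approx N \cdot \eta \cdot P_b
\]
% Plugging in placeholder values N = 10^{7} and \eta = 10^{5}:
\[
  P_{\mathrm{ASI}} \approx 10^{7} \cdot 10^{5} \cdot 20\,\mathrm{W}
                   = 2 \times 10^{13}\,\mathrm{W},
\]
% i.e. tens of terawatts, far beyond any nation's electricity supply.
```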

20 December 2023 at 18:30 CET (UTC+1)

Thomas Graf (Stony Brook University)

Linguistics and Symbolic Computation in a World of Large Language Models

Language has always played a central role in artificial intelligence, yet AI researchers and linguists have rarely seen eye to eye, in particular on the status of subsymbolic/neural approaches to language. After decades of debate, it looks like the subsymbolic approaches have finally emerged victorious. Not only are large language models (LLMs) succeeding at incredibly complex real-world tasks, but subsymbolic models are also rapidly gaining traction in some areas of theoretical linguistics, e.g., lexical semantics. This raises the question: will symbolic linguistics be left in the dust, or is this actually an opportunity for meaningful synergy between symbolic and subsymbolic approaches?

In this talk, I argue for the latter by presenting “subregular syntax” as a concrete example of what such a synergy may look like. Subregular syntax is a symbolic approach that combines formal language theory with the Minimalist syntax framework proposed by Noam Chomsky, which grants it a large degree of empirical coverage across a wide range of typologically diverse languages. Despite that broad coverage, subregular syntax is a very simple formalism that analyzes all syntactic dependencies in terms of relativized adjacency conditions. Even though these conditions are stated over trees, they can actually be reduced to a very specific type of n-gram over strings. This opens up a new way of representing sentence structure in neural networks while bringing robust learning algorithms like stochastic gradient descent to Minimalist syntax. It also casts doubt on claims in the literature that the behavior of neural networks in specific linguistic tasks, e.g., binding or NPI licensing, shows that they use tree structure. Instead, these findings may be indicative of a network's ability to use fairly elaborate types of n-grams. Careful study of the symbolic approach of subregular syntax is thus an opportunity to deepen our understanding of neural networks while also harnessing their advantages for theoretical linguistics.
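
For readers unfamiliar with the subregular perspective, the toy sketch below illustrates "relativized adjacency" in its simplest, string-based form: a tier-based strictly local check, in which symbols outside a chosen tier are deleted and adjacency constraints are evaluated over what remains. The wh-example and symbol inventory are invented for illustration and are far simpler than the tree-based formalism discussed in the talk.

```python
# Toy illustration of relativized adjacency over strings: a tier-based
# strictly local (TSL) check. Symbols not on the tier are deleted, and
# forbidden bigrams are checked on the projected tier. This is a sketch
# of the general idea, not the tree-based formalism of subregular syntax.

def tsl_ok(string, tier, forbidden_bigrams):
    """Return True iff no forbidden bigram occurs on the projected tier."""
    projected = [s for s in string if s in tier]
    bigrams = zip(projected, projected[1:])
    return not any(bg in forbidden_bigrams for bg in bigrams)

# Toy constraint: at most one wh-element per clause, no matter how far
# apart the elements are in the string. On the {wh} tier, two wh's become
# adjacent, so the forbidden bigram ("wh", "wh") rules them out.
sentence_ok  = ["wh", "C", "T", "v", "V"]        # one wh-phrase
sentence_bad = ["wh", "C", "T", "wh", "v", "V"]  # two wh-phrases

print(tsl_ok(sentence_ok,  {"wh"}, {("wh", "wh")}))   # True
print(tsl_ok(sentence_bad, {"wh"}, {("wh", "wh")}))   # False
```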

17 January 2024 at 18:30 CET (UTC+1)

Stefanie Höhl (University of Vienna)

Social Rhythms and Biobehavioral Synchrony in Early Human Development

Caregiver–infant interactions are characterized by interpersonal rhythms at different timescales, from nursery rhymes and interactive games to daily routines. These rhythms make the social environment more predictable for young children and facilitate interpersonal biobehavioral synchrony with their caregivers. In adults, the brain rhythms of interaction partners entrain to communicative rhythms, including speech, supporting mutual comprehension and communication. I will present recent evidence that this is also the case in the infant brain, especially when babies are addressed directly by their caregiver through infant-directed speech in naturalistic interactions. By using simultaneous measures of neural and physiological rhythms (e.g., dual-fNIRS and dual-ECG) from caregiver and infant during live face-to-face interactions, we can further deepen our understanding of early interactional dynamics and their reciprocal nature. I will present our recent research identifying factors that support the establishment of caregiver–infant neural synchrony, such as affectionate touch and vocal turn-taking. I will further discuss the functional links and dissociations between caregiver–infant synchrony on the neural and physiological levels. I will outline potential implications of this work and point out important future directions.

How to attend: Attend in person (OFAI, Freyung 6/6/7, 1010 Vienna), join via Zoom (meeting ID: 842 8244 2460; passcode: 678868), or dial in by phone.

You can add this event to your calendar.

31 January 2024 at 18:30 CET (UTC+1)

Clemens Heitzinger (TU Wien)

Reinforcement Learning and its Application in Medicine and Large Language Models

Reinforcement learning has been instrumental in many advances in AI in recent years. The most publicized is certainly the development of ChatGPT and large language models (LLMs) in general; the final and crucial training step of ChatGPT is reinforcement learning from human feedback (RLHF). Still, in order to fully solve learning problems, statements about the reliability of the results are necessary in addition to convergence results. For example, the reliability and trustworthiness of AI systems are of utmost importance in medicine and other safety-critical areas. In this talk, reinforcement-learning algorithms for training LLMs and for calculating optimal treatments for sepsis patients are described. The questions of convergence to an optimal policy and of reliability are addressed by PAC (probably approximately correct) estimates and other approaches to policy evaluation.
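
As a flavor of what a PAC-style reliability statement looks like, the sketch below computes a textbook Hoeffding confidence radius for policy evaluation: if a policy's value is estimated as the mean of n i.i.d. returns bounded in [0, 1], the estimate is within ε of the true value with probability at least 1 − δ. This is a generic illustration, not necessarily the specific estimates presented in the talk.

```python
import math

# Hoeffding's inequality for a mean of n i.i.d. samples in [0, 1]:
# P(|V_hat - V| >= eps) <= 2 * exp(-2 * n * eps^2).
# Solving 2 * exp(-2 * n * eps^2) = delta for eps gives the radius below.

def hoeffding_radius(n: int, delta: float) -> float:
    """Half-width of a (1 - delta) confidence interval for the value
    estimate of a policy evaluated over n independent episodes."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# Example: 10,000 evaluation episodes, 95% confidence.
eps = hoeffding_radius(10_000, 0.05)
print(f"|V_hat - V| <= {eps:.4f} with probability >= 0.95")
```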

How to attend: Attend in person (OFAI, Freyung 6/6/7, 1010 Vienna), join via Zoom (meeting ID: 842 8244 2460; passcode: 678868), or dial in by phone.

You can add this event to your calendar.

14 February 2024 at 18:30 CET (UTC+1)

Brigitte Krenn and Stephanie Gross (Austrian Research Institute for Artificial Intelligence)

Bias in Language Models Illustrated by the Example of Gender

Abstract TBA

How to attend: Attend in person (OFAI, Freyung 6/6/7, 1010 Vienna), join via Zoom (meeting ID: 842 8244 2460; passcode: 678868), or dial in by phone.

You can add this event to your calendar.

OFAI 2023 Fall Lecture Series poster