Glass Box Models: The Case for Interpretable Methods in Artificial Intelligence

Wednesday 13 March 2024, 12:00pm to 1:00pm

Venue

LT6, LUMS, Lancaster, United Kingdom

Open to

Postgraduates

Registration

Free to attend - registration required

Registration Info

Register via Microsoft Forms

Event Details

All staff and students are welcome to join this talk by Professor Mattias Wahde from Chalmers University of Technology.

Abstract: In this talk I will discuss interpretable (glass box) models in artificial intelligence (AI), contrasting them with the currently popular black box models such as deep neural networks (DNNs) and, especially, large language models (LLMs). While the advantages of DNNs and LLMs are well known, I will focus on some of the drawbacks of such models, particularly in high-stakes situations. I will also highlight the differences between interpretability and explainability. Finally, I will illustrate our ongoing work on interpretable AI by means of a few examples.

Bio: Mattias Wahde is Professor of Applied Artificial Intelligence at Chalmers University of Technology in Göteborg, Sweden. His research interests include natural language processing and conversational agents, stochastic optimization methods, robotics, and automated driving. The unifying theme of his research is the development and use of transparent, accountable, and safe methods in artificial intelligence. He is the author or co-author of numerous papers in these fields, as well as a course book on stochastic optimization. In addition to research and PhD supervision, he is heavily involved in teaching, serving as lecturer and examiner in several courses with a total of around 600 students per year. He currently leads a theme on Interpretable AI within the framework of Chalmers' AI Research Centre (CHAIR).

Contact Details

Name: Plamen Angelov

Email: p.angelov@lancaster.ac.uk