ARC@ORU: Mini-colloquium
17 October 2025, 10:00–11:30, Visual Lab, ARC

ARC@ORU hosts a mini-colloquium with Steven Schockaert, Professor at Cardiff University, and Scott Sanner, Professor at the University of Toronto.
Host: Pedro Zuidberg dos Martires, Associate Senior Lecturer in Computer Science
Unit / Research Area: Computer Science
Speakers: Steven Schockaert and Scott Sanner
Time:
10:00–10:45: Steven Schockaert
10:45–11:30: Scott Sanner
On October 17, we will host two talks: one by Professor Scott Sanner, University of Toronto, and one by Professor Steven Schockaert, Cardiff University. Both are members of the examining committee at Rishi Hazra’s doctoral defence in Computer Science (“Neurosymbolic Decision-Making with Large Language Models”) the same day.
You are welcome to listen to the talks and to discuss research questions together in a mini-colloquium on the morning of Friday, October 17, in the Visual Lab.
Reasoning with Region-Based Embeddings
Speaker: Steven Schockaert
Abstract: Most approaches to neuro-symbolic AI rely on a relatively loose coupling between learning and reasoning. To enable a tighter integration between these components, we need some kind of alignment between vector space representations and symbolic knowledge. In this talk, I will outline a strategy for achieving this, which builds on the idea that relations can be represented as convex regions in some vector space. Symbolic rules can then be encoded in terms of constraints on the spatial arrangement of these regions. To enable such representations to be learned effectively, in practice we can only consider regions that are sufficiently simple. At the same time, however, limiting the regions which are considered can dramatically impact the expressivity of the framework. I will give an overview of recent region-based models, with a particular focus on this trade-off between simplicity and expressivity.
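To make the core idea concrete, here is a minimal illustrative sketch (not the speaker's actual model): each relation is represented as an axis-aligned box, one of the simplest convex regions, and a symbolic rule such as "parent_of implies ancestor_of" is encoded as containment of one box inside the other. The relation names and coordinates are hypothetical.

```python
import numpy as np

class BoxRelation:
    """A relation represented as an axis-aligned box (a simple convex region)."""

    def __init__(self, low, high):
        self.low = np.asarray(low, dtype=float)
        self.high = np.asarray(high, dtype=float)

    def contains_point(self, x):
        # An entity-pair embedding satisfies the relation if it lies in the box.
        x = np.asarray(x, dtype=float)
        return bool(np.all(self.low <= x) and np.all(x <= self.high))

    def contains_box(self, other):
        # Box containment encodes the rule "other implies self".
        return bool(np.all(self.low <= other.low) and np.all(other.high <= self.high))

# Hypothetical relations and entity-pair embedding:
parent_of = BoxRelation([0.2, 0.2], [0.4, 0.4])
ancestor_of = BoxRelation([0.1, 0.1], [0.6, 0.6])
pair = [0.3, 0.25]

assert parent_of.contains_point(pair)       # the pair satisfies parent_of
assert ancestor_of.contains_box(parent_of)  # the rule holds geometrically
assert ancestor_of.contains_point(pair)     # hence the inference follows
```

The trade-off the abstract describes shows up directly here: boxes are easy to learn and check, but many spatial arrangements (and hence many rule patterns) cannot be expressed with axis-aligned boxes alone.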
Bio: Steven Schockaert is a professor at Cardiff University, working at the intersection of Natural Language Understanding and Knowledge Representation and Reasoning. He is editor-in-chief of the European Journal on Artificial Intelligence, was program chair of COLING 2025, and is a fellow of the Alan Turing Institute. He currently holds an EPSRC Open Fellowship on the topic of “Reasoning about Structured Story Representations”. He was the recipient of the ECCAI Doctoral Dissertation Award, the IBM Belgium Prize for Computer Science, and an ACL 2023 outstanding paper award, among others.
Verifiable, Debuggable, and Repairable Formal Reasoning with Large Language Models
Speaker: Scott Sanner
Abstract: Recent advances in Large Language Models (LLMs) have led to substantial interest in their application to commonsense reasoning tasks. Despite their potential, LLMs are susceptible to reasoning errors and hallucinations that may be harmful in use cases where accurate reasoning is critical. This challenge underscores the need for verifiable, debuggable, and repairable LLM reasoning. To address this need, we present LLM-TRes, a logical reasoning framework based on the notion of "theory resolution" that allows for seamless integration of the commonsense knowledge from LLMs with a verifiable logical reasoning framework that mitigates hallucinations and facilitates debugging and repair of the reasoning procedure. On diverse language-based reasoning tasks, we demonstrate the superior performance of LLM-TRes vs. state-of-the-art LLM-based reasoning methods in terms of both accuracy and reasoning correctness (i.e., non-hallucination).
Bio: Scott Sanner is a Professor in Industrial Engineering and cross-appointed in Computer Science at the University of Toronto. His research focuses on a broad range of AI topics spanning sequential decision-making, (conversational) recommender systems, and applications of machine/deep learning. Scott is currently an Associate Editor for ACM Transactions on Recommender Systems (TORS), the Machine Learning Journal (MLJ), and the Journal of AI Research (JAIR). Scott was a co-recipient of paper awards from the AI Journal (2014), the Transportation Research Board (2016), and CPAIOR (2018), a recipient of Google Faculty Research Awards (2011, 2020), and a Visiting Researcher at Google (UK) while on sabbatical (2022–23).