Michele Caprio: Imprecise Probabilistic Machine Learning: Being Precise About Imprecision


The Manchester Centre for AI Fundamentals is hosting a series of seminars featuring expert researchers working on the fundamentals of AI.
Title: Imprecise Probabilistic Machine Learning: Being Precise About Imprecision. Speaker: Michele Caprio, The University of Manchester (joining summer 2024).
Abstract: This talk is divided into two parts. I will first introduce the field of “Imprecise Probabilistic Machine Learning”, from its inception to modern-day research and open problems, including motivations and clarifying examples. In the second part, I will present some recent results on Credal Learning Theory that I derived together with colleagues at Oxford Brookes. Statistical Learning Theory is the foundation of machine learning, providing theoretical bounds on the risk of models learned from a (single) training set, assumed to be drawn from an unknown probability distribution. In actual deployment, however, the data distribution may (and often does) vary, causing domain adaptation/generalization issues. We laid the foundations for a credal theory of learning, using convex sets of probabilities (credal sets) to model the variability in the data-generating distribution. Such credal sets, we argued, may be inferred from a finite sample of training sets. We derived bounds for the case of finite hypothesis spaces (both with and without the realizability assumption), as well as for infinite model spaces, which directly generalize classical results. This talk is based on the following work: doi.org/10.48550/arXiv.2402.0...
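For context, the classical result being generalized can be stated as follows; the credal variant sketched underneath is only a schematic illustration (the upper risk over a credal set), written under my own simplifying assumptions, and is not the precise bound derived in the cited paper.

```latex
% Classical agnostic bound for a finite hypothesis class H (Hoeffding + union bound):
% with probability at least 1 - delta over an i.i.d. sample S of size n drawn from P,
\[
R_P(h) \;\le\; \widehat{R}_S(h) \;+\; \sqrt{\frac{\ln\lvert\mathcal{H}\rvert + \ln(2/\delta)}{2n}}
\qquad \text{for all } h \in \mathcal{H}.
\]
% Schematic credal analogue (illustrative assumption, not the paper's exact statement):
% the single distribution P is replaced by a credal set P_cal, and the quantity to
% control becomes the upper (worst-case) risk over that set,
\[
\overline{R}_{\mathcal{P}}(h) \;:=\; \sup_{P \in \mathcal{P}} R_P(h).
\]
```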
Michele is joining The University of Manchester and the Centre for AI Fundamentals in summer 2024.
