Out-of-distribution generalization for learning quantum dynamics

Science and technology

CQT Online Talks - Series: Computer Science Seminars
Speaker: Matthias C. Caro, Freie Universität Berlin
Abstract: Generalization bounds are a critical tool to assess the training data requirements of Quantum Machine Learning (QML). Recent work has established guarantees for in-distribution generalization of quantum neural networks (QNNs), where training and testing data are assumed to be drawn from the same data distribution. However, there are currently no results on out-of-distribution generalization in QML, where we require a trained model to perform well even on data drawn from a distribution different from the training distribution. In this talk, we first introduce a mathematical framework for formalizing questions about in-distribution and out-of-distribution generalization. Then, we prove out-of-distribution generalization for the task of learning an unknown unitary using a QNN and for a broad class of training and testing distributions, so-called locally scrambled distributions. In particular, our results show that one can learn the action of a unitary on entangled states using only product state training data. We also discuss some conceptual implications of these out-of-distribution generalization results and illustrate them with two numerical applications.
Based on arXiv:2204.10268.
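
Below is a minimal NumPy sketch (not taken from the talk or the paper) illustrating the scenario the abstract describes: a target 2-qubit unitary is fitted using only random product-state input/output pairs, and the learned unitary is then evaluated on Haar-random, generically entangled test states. The closed-form unitary Procrustes fit here is an assumed stand-in for variational QNN training, and all function names and parameters are illustrative.

# Toy sketch: learn a 2-qubit unitary from product-state data only,
# then test on entangled states (out-of-distribution test).
import numpy as np

rng = np.random.default_rng(0)
dim = 4  # two qubits

def random_unitary(d):
    # Haar-random unitary via QR decomposition of a complex Ginibre matrix.
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases

def random_product_state():
    # Tensor product of two Haar-random single-qubit states.
    a = rng.normal(size=2) + 1j * rng.normal(size=2)
    b = rng.normal(size=2) + 1j * rng.normal(size=2)
    return np.kron(a / np.linalg.norm(a), b / np.linalg.norm(b))

def random_entangled_state():
    # Haar-random state on the full 4-dimensional space (generically entangled).
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

U_target = random_unitary(dim)

# "Training": fit a unitary V to product-state input/output pairs only,
# via the unitary Procrustes solution V = W X^dagger from the SVD of the
# cross-correlation matrix (a stand-in for gradient-based QNN training).
n_train = 20
inputs = np.column_stack([random_product_state() for _ in range(n_train)])
outputs = U_target @ inputs
W, _, Xh = np.linalg.svd(outputs @ inputs.conj().T)
V = W @ Xh

# Out-of-distribution evaluation: fidelity on entangled states never seen in training.
fidelities = [
    np.abs(np.vdot(U_target @ psi, V @ psi)) ** 2
    for psi in (random_entangled_state() for _ in range(100))
]
print(f"mean fidelity on entangled test states: {np.mean(fidelities):.6f}")

In this toy setting the product-state data already determine the unitary's action on the whole Hilbert space, so the fitted V agrees with U_target on entangled inputs as well, mirroring the abstract's statement that the action of a unitary on entangled states can be learned from product-state training data alone.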

Comments: 1

  • @rajivkrishnakumar9925 (10 months ago)

    Thank you for the very clear presentation :) I found it very interesting and informative!
