AI & Machine Learning in Finance: The Virtue of Complexity in Financial Machine Learning

Science and Technology

#artificialintelligence #machinelearning #financeresearch
The use of AI and machine learning in asset pricing and asset management is booming. But are portfolios built on these richly parameterized models well understood?
In this video, Bryan Kelly, Professor of Finance at the Yale School of Management and Head of Machine Learning at AQR Capital Management, discusses how return prediction models behave in the high-complexity regime and the ability of highly complex models to predict recessions.
Stay up to date on financial research!
Sign up for the Swedish House of Finance #newsletter to take part in events, listen to interviews with leading experts, and stay informed on relevant policy issues: bit.ly/394CT4Z

Comments: 10

  • @Garrick645 · a month ago

    This video is just so engaging. I couldn't understand a few things, but it's great.

  • @Khari99 · 5 months ago

    This was a great lecture with very surprising results. I had always assumed that overfit large models would be worse to use since the training data is finite. I never thought the opposite could be the case. Great work.

  • @brhnkh · a month ago

    Am I missing something, or is it just "ridgeless" regression with an appropriate penalty (z) that turns out to be really good?
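
    For readers following along: the object being discussed here is plain ridge regression applied to return prediction. A sketch in generic notation (the symbols below are mine, not necessarily the video's), with $S$ the $T \times P$ matrix of predictive signals, $R$ the vector of next-period returns, and $z$ the ridge penalty:

    $$\hat{\beta}(z) = \left(z I_P + \tfrac{1}{T} S^{\top} S\right)^{-1} \tfrac{1}{T} S^{\top} R, \qquad \hat{\beta}(0^{+}) = \left(S^{\top} S\right)^{+} S^{\top} R,$$

    where $(\cdot)^{+}$ denotes the Moore-Penrose pseudoinverse. "Ridgeless" refers to the $z \to 0^{+}$ limit, which interpolates the training data once $P > T$.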

  • @Khari99 · a month ago

    @brhnkh From what I understand of the lecture, one of the key insights is that larger models should be able to perform better out of sample. The general problem with using ML on time series data is that models are easy to overfit because of the limited training set and feature representation. With a diverse enough feature set, however, larger models should generalize better, which is intuitive to me. The reason many experts recommend against bigger models is that they are much easier to overfit to the training data, but that may just be because they were trained on a small feature set without many predictive features. When that is the case, a model will learn the dataset itself in order to achieve the reward, rather than finding patterns that repeat in out-of-sample data.
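
    Neither comment includes code, so here is a minimal toy sketch of the setup being described: a handful of raw predictors expanded into many nonlinear random features, fit by ridge regression with more parameters than observations, and evaluated out of sample. The data, feature construction, sample sizes, and penalty value are all illustrative assumptions, not the paper's actual implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy monthly data: T observations of a few raw predictors and next-period returns.
    T, n_raw = 240, 5
    X_raw = rng.standard_normal((T, n_raw))
    y = 0.1 * X_raw @ rng.standard_normal(n_raw) + rng.standard_normal(T)

    def random_fourier_features(X, n_features, gamma=1.0, seed=0):
        """Expand the raw predictors into many nonlinear random features,
        so the parameter count P can exceed the sample size T."""
        r = np.random.default_rng(seed)
        W = gamma * r.standard_normal((X.shape[1], n_features))
        b = r.uniform(0.0, 2.0 * np.pi, n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    def ridge_predict(X_tr, y_tr, X_te, z):
        """Ridge regression solved in dual (T x T) form so P can be huge;
        as z -> 0 with P > T this approaches the 'ridgeless' interpolator."""
        n = len(y_tr)
        K = X_tr @ X_tr.T  # T x T Gram matrix
        alpha = np.linalg.solve(K + n * z * np.eye(n), y_tr)
        return X_te @ (X_tr.T @ alpha)

    # Compare out-of-sample fit as the number of features P grows past T.
    split = 180
    for P in (10, 100, 1_000, 10_000):
        Phi = random_fourier_features(X_raw, P)
        pred = ridge_predict(Phi[:split], y[:split], Phi[split:], z=1e-3)
        oos_r2 = 1.0 - np.mean((y[split:] - pred) ** 2) / np.var(y[split:])
        print(f"P={P:>6d}  out-of-sample R^2 = {oos_r2: .3f}")
    ```

    Solving the ridge system in its dual T x T form is a deliberate choice: it keeps the computation cheap even when P is in the tens of thousands, which is exactly the high-complexity regime the comments are discussing.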

  • @brhnkh · a month ago

    @Khari99 Right. The last sentence is what's novel, I think. The model gets better on out-of-sample data because it is appropriately penalized while being trained with a very large number of parameters. Apparently they worked this out using random matrix theory, so I guess that's where the heavy lifting is.

  • @Khari99 · a month ago

    @brhnkh The reward function is always the hardest part of ML. It took me a while to figure out how to write mine for my architecture. Simply using maximum profit is not enough (a model could learn to buy and hold forever, for instance), and neither is accuracy (high accuracy != profitability). You have to reward and penalize it much as you would a human, based on the metrics it achieves on a trade-by-trade and portfolio basis.
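
    The commenter does not share their actual reward function, so the following is only a hypothetical Python sketch of the general idea: reward per-trade profit, penalize degenerate behavior such as holding forever, and add a portfolio-level risk term. Every function name, weight, and threshold below is made up for illustration.

    ```python
    import numpy as np

    def trade_reward(trade_pnl, holding_days, max_holding_days=20):
        """Per-trade reward: realized PnL minus a penalty for overly long holds,
        which discourages the degenerate buy-and-hold-forever policy."""
        hold_penalty = max(0.0, holding_days - max_holding_days) * 0.01
        return trade_pnl - hold_penalty

    def portfolio_reward(daily_returns, risk_aversion=2.0):
        """Portfolio-level reward: mean return penalized by volatility and
        maximum drawdown, a crude Sharpe-plus-drawdown hybrid."""
        r = np.asarray(daily_returns)
        equity = np.cumprod(1.0 + r)
        drawdown = np.max(1.0 - equity / np.maximum.accumulate(equity))
        return r.mean() - risk_aversion * r.std() - drawdown

    # Example episode: sum the per-trade rewards and add the portfolio-level term,
    # so neither raw profit nor prediction accuracy alone drives learning.
    trades = [(0.02, 5), (-0.01, 3), (0.015, 40)]  # (PnL, holding days)
    episode_reward = sum(trade_reward(pnl, days) for pnl, days in trades)
    episode_reward += portfolio_reward([0.001, -0.002, 0.003, 0.0005])
    print(round(episode_reward, 4))
    ```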

  • @maximlamoureux4129 · a month ago

    @brhnkh I thought the definition of overfitting was that validation error starts to increase rapidly after reaching its minimum during training, while training error continues to decrease. So it is not clear to me why you would want a model to overfit at all, finance or not. I'm only 4 minutes into the video; perhaps he will explain it.
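
    For anyone who wants to see that textbook picture concretely, here is a small toy illustration with synthetic data (nothing from the video): training error keeps falling as model capacity grows, while validation error typically bottoms out and then rises again, which is exactly the definition stated above.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Noisy sine data, split alternately into training and validation sets.
    x = np.sort(rng.uniform(-1.0, 1.0, 60))
    y = np.sin(3.0 * x) + 0.3 * rng.standard_normal(60)
    x_tr, y_tr = x[::2], y[::2]
    x_va, y_va = x[1::2], y[1::2]

    # Training error falls monotonically with polynomial degree; validation
    # error typically reaches a minimum and then climbs (overfitting).
    for degree in (1, 3, 5, 9, 15):
        coef = np.polyfit(x_tr, y_tr, degree)
        tr_mse = np.mean((np.polyval(coef, x_tr) - y_tr) ** 2)
        va_mse = np.mean((np.polyval(coef, x_va) - y_va) ** 2)
        print(f"degree={degree:2d}  train MSE={tr_mse:.3f}  val MSE={va_mse:.3f}")
    ```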

  • @kilocesar · 4 months ago

    Quantitative Finance is really exhausting; many different authors, books, and articles contradict one another. Complexity is usually viewed with disapproval by the industry.

  • @brendanlydon5272 · 4 months ago

    My eyes wide open - if creed knew ML
