Managing Sources of Randomness When Training Deep Neural Networks

Science & Technology

REFERENCES:
1. Link to the code on GitHub: github.com/rasbt/MachineLearn...
2. Link to the book mentioned at the end of the video: nostarch.com/machine-learning...
DESCRIPTION:
In this video, we manage common sources of randomness when training deep neural networks, including model weight initialization, dataset sampling and shuffling, nondeterministic algorithms, runtime algorithm differences, hardware and driver variations, and generative AI sampling.
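
A minimal sketch of how several of these sources can be controlled in PyTorch (an illustration, not code from the video; the helper name set_all_seeds is hypothetical):

    import random
    import numpy as np
    import torch

    def set_all_seeds(seed=123):
        # Seed the Python, NumPy, and PyTorch RNGs so that weight
        # initialization and dataset shuffling become reproducible.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)

    set_all_seeds(123)

    # Optionally force deterministic (often slower) kernels to address
    # nondeterministic algorithms; on CUDA this may also require setting
    # the CUBLAS_WORKSPACE_CONFIG environment variable, and ops without
    # a deterministic implementation will raise an error.
    torch.use_deterministic_algorithms(True)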
OUTLINE:
00:00 - Introduction
01:14 - 1. Model Weight Initialization
04:28 - 2. Dataset Sampling and Shuffling
07:45 - 3. Nondeterministic Algorithms
11:13 - 4. Different Runtime Algorithms
14:30 - 5. Hardware and Drivers
15:39 - 6. Randomness and Generative AI
20:56 - Recap
22:34 - Surprise

Comments: 10

  • @bobrarity
    2 months ago

    Wow, wasn't expecting a new video. Love the way you explain things, bro, keep it up!

  • @SebastianRaschka
    2 months ago

    Glad this was useful!

  • @anne-marieroy8812
    2 months ago

    Excellent summary regarding randomness in NN training and generative AI. Very good illustration as well.

  • @SebastianRaschka
    2 months ago

    Thanks for the kind words!

  • @masonholcombe3327
    2 months ago

    Amazing new video recapping areas of randomness in deep neural nets. I do have a question regarding top-k sampling: why do we have to renormalize the top-k choices in the vocabulary? Can we not just choose randomly among the top-k choices?

  • @SebastianRaschka
    2 months ago

    Good question. This is more for interpretability purposes, but you are right, you can skip the normalization step.
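
    A minimal sketch of the two variants discussed above (an illustration assuming a PyTorch-style logits vector; sample_top_k is a hypothetical helper, not code from the video):

        import torch

        def sample_top_k(logits, k=5, renormalize=True):
            # Keep only the k highest-scoring vocabulary entries.
            top_logits, top_indices = torch.topk(logits, k)
            if renormalize:
                # Softmax over the top-k logits: probabilities sum to 1
                # and preserve the relative weighting of the candidates.
                probs = torch.softmax(top_logits, dim=-1)
            else:
                # Uniform choice among the top-k candidates, as the
                # question above suggests.
                probs = torch.full_like(top_logits, 1.0 / k)
            choice = torch.multinomial(probs, num_samples=1)
            return top_indices[choice]

        # Toy example: logits over a 10-token vocabulary.
        logits = torch.tensor([2.0, 1.0, 0.5, 0.1, -0.3,
                               -1.0, -1.2, -2.0, -2.5, -3.0])
        print(sample_top_k(logits, k=3))

    Skipping the renormalization makes all k candidates equally likely, so the model's preference ordering within the top k is discarded.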

  • @mahmoodmohajer1677
    2 months ago

    Amazing video. I'm only wondering: what if we start by sampling different seeds to initialize the weights and biases, run a single forward pass for each, and see which one results in the lowest loss? The seed samples could be a range of numbers, e.g., 1-100, or themselves a set of random numbers. Do you think this is useful in practice?

  • @SebastianRaschka
    2 months ago

    That's a good question. And yes, it can be useful. I actually use that for creating confidence intervals, for example. See section 4 here: github.com/rasbt/MachineLearning-QandAI-book/blob/main/supplementary/q25_confidence-intervals/1_four-methods.ipynb
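
    A minimal sketch of the seed scan described above (an illustration with a made-up model and data, not code from the video or the linked notebook):

        import torch
        import torch.nn as nn

        # Hypothetical setup: a tiny classifier and one batch of data.
        X = torch.randn(64, 20)          # 64 examples, 20 features
        y = torch.randint(0, 2, (64,))   # binary labels
        loss_fn = nn.CrossEntropyLoss()

        best_seed, best_loss = None, float("inf")
        for seed in range(1, 101):       # the 1-100 range suggested above
            torch.manual_seed(seed)      # the seed controls initialization
            model = nn.Sequential(
                nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2)
            )
            with torch.no_grad():        # one forward pass, no training
                loss = loss_fn(model(X), y).item()
            if loss < best_loss:
                best_seed, best_loss = seed, loss

        print(f"Seed {best_seed} has the lowest initial loss: {best_loss:.4f}")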

  • @nguyenhuuuc2311
    2 months ago

    Should we tune the seed for better results? 😂

  • @SebastianRaschka
    2 months ago

    Haha, believe it or not, I once reviewed a paper where the seed was a hyperparameter.
