Machine learning - Neural networks

Neural Networks
Slides available at: www.cs.ubc.ca/~nando/540-2013/...
Course taught in 2013 at UBC by Nando de Freitas

Comments: 12

  • @xbuchtak · 10 years ago

    I must agree, this is an excellent lecture and the easiest-to-understand explanation of backprop I've ever seen.

  • @DlVirgin · 11 years ago

    this is the best lecture on neural networks I have ever seen (I have seen many)...you very thoroughly explained every aspect of how ANNs work in a way that was easy to understand...

  • @6katei · 10 years ago

    I also agree.

  • @JaysonSunshine · 6 years ago

    There are errors at 1:01:58; the learning rate is missing from the batch equation, and in both cases it is more informative to switch the sign so it's clear we're moving opposite the gradient and the step size is positive.
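The correction the comment describes can be sketched as follows. This is a minimal illustration, not the lecture's own code; the function names, the quadratic loss, and the learning-rate value are all illustrative assumptions:

```python
import numpy as np

def batch_update(w, X, y, eta=0.1):
    """One batch gradient-descent step for linear least squares,
    with the learning rate eta included and the sign written so we
    take a positive step in the direction opposite the gradient."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of the mean squared error
    return w - eta * grad                    # w_new = w - eta * grad, eta > 0

# Usage: fit y = 2x with repeated batch updates.
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
w = np.zeros(1)
for _ in range(100):
    w = batch_update(w, X, y)
print(w)  # converges toward [2.]
```

Writing the update as `w - eta * grad` (rather than folding the sign into the gradient) keeps the step size positive and makes the descent direction explicit, which is the commenter's point.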

  • @JaysonSunshine · 6 years ago

    At 1:03:45, it is stated that the hyperbolic tangent function represents a solution to the vanishing gradient problem, but this is false according to Wikipedia (and other sources): en.wikipedia.org/wiki/Vanishing_gradient_problem. The ReLU activation function does help with this problem, though.

  • @qdcs524gmail · 9 years ago

    Sir, may I know the activation function used in the ANN 4-layer example with the canary where 4 output neurons (sing, move, etc.) are activated at the same time? Does each layer use the same activation function? Please advise. Thanks.

  • @lradhakrishnarao902 · 7 years ago

    The videos and lectures are amazing and have resolved a lot of my issues. However, I want to add something: where are the topics for SVM and HMM? Also, it would be nice if one or two complex equations were worked through, showing how to solve them.

  • @tobiaspahlberg1506 · 8 years ago

    Was there a reason why x_i1 and x_i2 were replaced by just x_i in the regression MLP example?

  • @chandreshmaurya8102 · 8 years ago

    x_i is vector with components x_i1 and x_i2. Shorthand notation.

  • @shekarforoush · 7 years ago

    Nope, if you pay attention to the x_i values in the table, you can see they are scalars; so in this example, instead of having 2 inputs, we only have one input feature x at each time i.

  • @im_sanjay · 6 years ago

    Can I get the slides?

  • @sehkmg · 6 years ago

    Just go to the course website and you'll find the slides: www.cs.ubc.ca/~nando/540-2013/lectures.html