Perceptrons: The Building Blocks of Neural Networks

This video presents the perceptron: a simple model of an individual neuron, and the simplest type of neural network. It shows how a perceptron defines a linear decision boundary, as well as the mechanics of the perceptron learning algorithm.
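As an editorial illustration (not code from the video), the perceptron described above, a weighted sum of inputs passed through a step activation and trained with the rule w <- w + alpha*(t - p)*x, might be sketched in Python like this:

```python
import numpy as np

def step(z):
    """Step activation: 1 if z >= 0, else 0."""
    return 1 if z >= 0 else 0

def train_perceptron(X, t, alpha=0.1, epochs=20):
    """Perceptron learning rule: w <- w + alpha * (target - prediction) * x.
    The bias is handled by appending a constant 1 to every input."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for x, target in zip(Xb, t):
            p = step(w @ x)
            w += alpha * (target - p) * x
    return w

# Logical AND is linearly separable, so a single perceptron can learn it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([0, 0, 0, 1])
w = train_perceptron(X, t)
preds = [step(w @ np.append(x, 1.0)) for x in X]
print(preds)  # [0, 0, 0, 1]
```

On linearly separable data such as AND, the loop settles on weights that classify every example correctly.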

Comments: 49

  • @yuckymoose6745 · 3 years ago

    I was so confused about the math and was looking for a solution all morning. Now I found it, and I understand clearly how it works. Thanks a lot!

  • @matattz · 5 years ago

    THIS IS JUST PHENOMENAL :) Thank you so much, that's what I have been searching for the whole day. Now I get it!!

  • @chevalharrichunder8953 · 4 years ago

    Thank you Jacob, no fancy presentation, just a brilliant explanation of the concept that I need for Machine Learning.

  • @devinvenable4587 · 4 years ago

    One of the best YouTube videos on this topic. Nicely done.

  • @mohamedelkayal8871 · 4 years ago

    Your videos have helped me on more than one occasion, and for that I humbly thank you for your effort.

  • @viciousJavad · 3 years ago

    Absolutely astonishing! This is the first time I understand without skipping!

  • @prvizpirizad4336 · a year ago

    The video I have been looking for! Thank you very much!

  • @kadeeraziz · 4 years ago

    Now I get how the perceptron works. Thank you!!!!

  • @nevzylka2589 · 5 years ago

    Thank you so much. This is extremely helpful!

  • @Onevideo378 · 5 years ago

    Very well explained. Thanks a lot!

  • @subramaniamsrivatsa2719 · 3 years ago

    Fluent explanation of complex mathematical concepts, without missing out on the details.

  • @cr4zyg3n36 · 4 years ago

    Please make more!!! Great videos!

  • @OriginalJoseyWales · 4 years ago

    You are very smart and knowledgeable.

  • @cr4zyg3n36 · 4 years ago

    Thanks for this clear explanation

  • @ohmakademi · 4 years ago

    Thank you very much. This is a very useful tutorial.

  • @pareshb6810 · 3 years ago

    Great work!

  • @cliffmathew · 5 years ago

    Thanks. Helpful.

  • @miche2105 · 4 years ago

    Very helpful, thanks.

  • @kaushikraghupathrunitechie · 4 years ago

    Loved it!

  • @PrakashSingh-bs2qv · 3 years ago

    Great explanation.

  • @RAJIBLOCHANDAS · 2 years ago

    Nice presentation.

  • @ahmedelsabagh6990 · 3 years ago

    Simple and helpful

  • @adeelahmad9875 · 4 years ago

    How do we choose the target, the weights (omegas), and the learning rate?

  • @Harish-ou4dy · 4 years ago

    Is there a theorem which says the weights and biases will eventually make correct predictions for small alpha and linearly separable data?
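    Editorial note: yes, this is the classic perceptron convergence theorem, which guarantees that on linearly separable data the perceptron learning rule makes only finitely many mistakes. A quick empirical check (a sketch with made-up data, not from the video):

```python
import numpy as np

def step(z):
    return 1 if z >= 0 else 0

rng = np.random.default_rng(0)

# Made-up linearly separable data: label is 1 iff x + y > 1.
# Points too close to the boundary are dropped to leave a clear margin.
X = rng.uniform(-2, 2, size=(60, 2))
X = X[np.abs(X.sum(axis=1) - 1) > 0.5]
t = (X.sum(axis=1) > 1).astype(int)

Xb = np.hstack([X, np.ones((len(X), 1))])  # constant 1 input for the bias
w = np.zeros(3)
alpha = 0.01
for _ in range(1000):
    mistakes = 0
    for x, target in zip(Xb, t):
        p = step(w @ x)
        if p != target:
            w += alpha * (target - p) * x
            mistakes += 1
    if mistakes == 0:  # a full pass with no errors: training has converged
        break

preds = np.array([step(w @ x) for x in Xb])
print((preds == t).all())  # True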

  • @sriramswaminathan1502 · 5 years ago

    Excellent explanation.

  • @paristonhill2752 · 4 years ago

    When you were cycling through the inputs to update the weights, only the third input was predicted correctly. Will the algorithm come back to the inputs that it couldn't predict correctly? If yes, then at what stage?

  • @ahmadadil1576 · 3 years ago

    Thanks, very helpful!

  • @diptanshude2525 · 3 years ago

    Great Videoooo!!!!!

  • @VikasSingh-tc2pe · 4 years ago

    How do we update the bias in the last example?

  • @aslaydnlar9663 · a year ago

    Amazing!

  • @lanfeima5167 · 2 years ago

    Best!

  • @anikethdas1 · 5 years ago

    Hey Jacob, I'm sorry if I got this wrong, but shouldn't the group of points on the top be getting the value 0 instead of 1, and the group below get 1 instead of 0 (at about 12:40)? But I guess you corrected it later.

  • @JacobSchrum · 4 years ago

    Consider the point (0, 1000). This is clearly above the line. What value would it have? 0*wx + 1000*wy + b = 1000*0.5 = 500. a(500) = 1 because 500 is positive, so 1 is the correct classification for points on top. It is possible to set the weights and biases in such a way that flips where 0 and 1 are, but this example is correct.
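    An editorial check of the arithmetic in this reply, assuming the weights it implies (wy = 0.5 and bias b = 0; wx never matters here because x = 0, so the value below is an arbitrary placeholder):

```python
def step(z):
    """Step activation: outputs 1 for non-negative input, 0 otherwise."""
    return 1 if z >= 0 else 0

# Weights implied by the reply: wy = 0.5, b = 0. wx is a placeholder,
# since the test point has x = 0 and wx contributes nothing.
wx, wy, b = 0.5, 0.5, 0.0

x, y = 0, 1000  # a point clearly above the line
activation = x * wx + y * wy + b
print(activation)        # 500.0
print(step(activation))  # 1, the class for points above the line
```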

  • @Bridgelessalex · 4 years ago

    Why?

  • @rebeccawalker839 · 3 years ago

    Thank you a lot!

  • @ayoublaouarem3454 · 3 years ago

    In the case of a multi-layer perceptron, do we use the same formula: alpha*(t - p(i))?

  • @hackein9435 · 3 years ago

    Good one ;)

  • @Pmarmagne · 4 years ago

    Can someone explain to me what the x and y axes represent, concretely?

  • @JacobSchrum · 3 years ago

    In this particular example, one of the perceptron inputs is x, and the other is y. The reason we are trying to draw a line (hyperplane) in this space is that we want to have a way of categorizing all possible inputs. The perceptron assigns a class to each possible set of inputs based on which side of the line you end up on. This can be a little bit confusing, but it is even worse in the kinds of high-dimensional spaces where neural networks are typically applied.
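    An editorial sketch of this side-of-the-line classification, with made-up weights and bias defining a hypothetical boundary:

```python
import numpy as np

# Made-up weights and bias defining the line w[0]*x + w[1]*y + b = 0
w = np.array([0.5, 0.5])
b = -1.0

def classify(point):
    """Assign a class based on which side of the hyperplane the point falls."""
    return 1 if np.dot(w, point) + b >= 0 else 0

print(classify((3.0, 3.0)))  # 1: above the line
print(classify((0.0, 0.0)))  # 0: below the line
```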

  • @OriginalJoseyWales · 3 years ago

    Why do we introduce a bias unit in the first place?

  • @128mtd128 · 2 years ago

    Can you make a math course on vectors, from basic calculations up to this stuff? I don't understand vectors and the e3.

  • @devenjainn · 4 years ago

    Here by watching @sakho kun

  • @takshkamlesh9914 · 4 years ago

    Great video. BTW you sound like Mark Zuckerberg.

  • @JacobSchrum · 3 years ago

    I don't think that's a compliment.

  • @magnuswootton6181 · a year ago

    You can't use a pure step function because you can't propagate the error backward through it!!!

  • @yodarocco · 4 years ago

    The volume of the voice is damned low

  • @izetassky · 5 years ago

    I think it's not clear what you did from @22:00.

  • @JacobSchrum · 4 years ago

    alpha*(t - p(i)) = 0.1*1. w = (0,0,0) and i = (1,1,1), so w + alpha*(t - p(i))*i = (0,0,0) + 0.1*(1,1,1) = (0,0,0) + (0.1,0.1,0.1) = (0.1,0.1,0.1).
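    The same update, reproduced step by step as an editorial sketch:

```python
import numpy as np

alpha = 0.1
t = 1              # target output in the example
p = 0              # the perceptron's (incorrect) prediction, so t - p = 1
w = np.array([0.0, 0.0, 0.0])
i = np.array([1.0, 1.0, 1.0])  # input vector, including the bias input

w_new = w + alpha * (t - p) * i
print(w_new)  # [0.1 0.1 0.1]
```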

  • @olatunjifelix2102 · 4 years ago

    After 21 minutes, everything becomes confusing.