Perceptrons: The Building Blocks of Neural Networks
This video presents the perceptron, a simple model of an individual neuron, and the simplest type of neural network. The manner in which perceptrons define a linear decision boundary is shown, as well as the mechanics of the perceptron learning algorithm.
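The learning algorithm the description mentions can be sketched in a few lines. This is a minimal illustration, not code from the video: it assumes a step activation, a bias folded into the weight vector as a leading input of 1, and the update rule w ← w + alpha*(t − p)*x; the AND dataset is just an example of linearly separable data.

```python
import numpy as np

def step(z):
    # Step activation: 1 if the weighted sum is positive, else 0
    return 1 if z > 0 else 0

def train_perceptron(X, t, alpha=0.1, epochs=20):
    # X includes a bias column of 1s; weights start at zero
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, t):
            p = step(w @ x)
            # Perceptron update rule: w <- w + alpha*(t - p)*x
            w = w + alpha * (target - p) * x
    return w

# AND function: each input row starts with a 1 for the bias
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
t = np.array([0, 0, 0, 1])
w = train_perceptron(X, t)
print([step(w @ x) for x in X])  # -> [0, 0, 0, 1], matching t
```

Because AND is linearly separable, the weights settle on a separating hyperplane after a few passes through the data.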
Comments: 49
I was so confused about the math and had been looking for a solution all morning. Now I found it and I clearly understand how it works. Thanks a lot!
THIS IS JUST PHENOMENAL :) Thank you so much, this is what I have been searching for the whole day. Now I get it!!
Thank you Jacob, no fancy presentation, etc. Just a brilliant explanation of the concept that I need for Machine Learning.
One of the best YouTube videos on this topic. Nicely done.
Your videos have helped me on more than one occasion and for that I humbly thank you for your effort.
Absolutely astonishing! This is the first time I understand without skipping!
the video I have been looking for! thank you very much!
Now I get how the perceptron works. Thank you!!!!
Thank you so much. This is extremely helpful!
Very well explained. Thanks a lot!
Fluent explanation of complex mathematical concepts, without missing out on the details.
Please make more!!! Great Videos!
You are very smart and knowledgeable.
Thanks for this clear explanation
Thank you very much. This is a very useful tutorial.
Great work!
Thanks. Helpful.
very helpful, thanks
Loved it!
Great explanation.
Nice presentation.
Simple and helpful
How do we choose the target, the weights (omegas), and the learning rate?
Is there a theorem which says the weights and biases will eventually make correct predictions for small alpha and linearly separable data?
excellent explanation
When you were cycling through the inputs to update the weights, only the third input was predicted correctly. Will the algorithm come back to the inputs that it couldn't predict correctly? If yes, then at what stage?
Thanks very helpful
Great Videoooo!!!!!
How do we update the bias in the last example?
amazing
Best!
Hey Jacob, I'm sorry if I got this wrong, but shouldn't the group of points on the top be getting the value 0 instead of 1, and the group below get 1 instead of 0 (at about 12:40)? But I guess you corrected it later.
@JacobSchrum
4 years ago
Consider the point (0,1000). This is clearly above the line. What value would it have? 0*wx + 1000*wy + b = 1000*0.5 = 500. a(500) = 1 because 500 is positive, so 1 is the correct classification for points on top. It is possible to set the weights and biases in such a way that flips where 0 and 1 are, but this example is correct.
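The arithmetic in this reply can be checked directly. A small sketch; note that wx = 0 and b = 0 are assumptions here, since the reply only works through the 1000*0.5 = 500 term:

```python
def step(z):
    # Step activation from the video: 1 for positive input, else 0
    return 1 if z > 0 else 0

# Weights implied by the reply's arithmetic (wy = 0.5; wx and b
# are assumed to be 0, which matches the stated total of 500)
wx, wy, b = 0.0, 0.5, 0.0
x, y = 0, 1000  # a point clearly above the line

z = wx * x + wy * y + b
print(z, step(z))  # 500.0 1 -> class 1 for points on top
```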
@Bridgelessalex
4 years ago
Why?
Thanks a lot!
In the case of a multi-layer perceptron, do we use the same formula: alpha*(t - p(i))?
Good one ;)
Can someone explain to me what the x and y axes represent, concretely?
@JacobSchrum
3 years ago
In this particular example, one of the perceptron inputs is x, and the other is y. The reason we are trying to draw a line (hyperplane) in this space is that we want to have a way of categorizing all possible inputs. The perceptron assigns a class to each possible set of inputs based on which side of the line you end up on. This can be a little bit confusing, but it is even worse in the kinds of high-dimensional spaces where neural networks are typically applied.
Why do we introduce a bias unit in the first place?
Can you make a math course on vectors, from basic calculations up to this stuff? I don't understand vectors and the e3.
Here by watching @sakho kun
Great video. BTW you sound like Mark Zuckerberg.
@JacobSchrum
3 years ago
I don't think that's a compliment.
You can't use a pure step function because you can't propagate the error backward through it!!!
The volume of the voice is really low.
I think it's not clear what you did from @22:00.
@JacobSchrum
4 years ago
alpha*(t - p(i)) = 0.1*1. w = (0,0,0) and i = (1,1,1), so w + alpha*(t - p(i))*i = (0,0,0) + 0.1*(1,1,1) = (0,0,0) + (0.1,0.1,0.1) = (0.1,0.1,0.1).
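The single update step worked through in this reply can be transcribed directly into code. A sketch, using the same numbers as the reply:

```python
import numpy as np

# One perceptron update step: w <- w + alpha*(t - p(i))*i
alpha = 0.1
t, p = 1, 0         # target 1, prediction 0, so t - p(i) = 1
w = np.zeros(3)     # w = (0, 0, 0)
i = np.ones(3)      # input i = (1, 1, 1)

w = w + alpha * (t - p) * i
print(w)  # [0.1 0.1 0.1]
```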
After 21 minutes, everything becomes confusing