My name is Xander Steenbrugge, and I read a ton of papers on Machine Learning and AI.
But papers can be a bit dry & take a while to read. And we are lazy, right?
In this channel I try to summarize my core take-aways from a technical point of view while making them accessible to a broader audience.
If you love technical breakdowns on ML & AI but you are often lazy like me, then this channel is for you!
Comments
Shocking, shaking~~
Thank you so much for this video, helped a lot
You clearly have mastered what you explain. However, you speak too fast for other people to follow, and it's even harder to understand when English is not your native language.
I think without knowing the math you'll be diving into the deep end.
You should watch the section on dangers and politics now, six years later. I’d be curious to know your opinions now. 😂
This is brilliant. Thank you.
very good explanations
One must stress what you say at the end of the video at 28:20: although AlphaFold 2.0 can predict the native conformation of an amino acid sequence, there are other contributing factors, and the algorithm isn't able to answer why, nor how proteins find their native state out of the vast combinatorial complexity of possible conformations. Levinthal's Paradox.
After going through most of the KZread videos on this topic. This one was one of the best out of all. Very clear and crisp explanation. Thank you ❤
4:00
subscribed
1:00
Amazing
Great breakdown and links for additional resources
Bro! you were soo ahead of your time! Like Scooby Doo
This is so good. Thank you!
GOAT
12:28 what did you use to connect the machine learning to a 3d model?
This is gold!!
Great work
6 years ago and I now use this video as a guidance to understanding StableDiffusion
Can you help me out as well? I have so many questions but no one to answer them.
This is cool, but after the third random jumpscare sound I couldn't pay attention to what you were saying--all I could think about was when the next one would be. Gave up halfway through since it was stressing me out
this guy is too handsome, Italian hands
Rest in peace Tishby
This is a marvel. I read a book with similar content, and it was a marvel to behold. "The Art of Saying No: Mastering Boundaries for a Fulfilling Life" by Samuel Dawn
I feel like beta should be decreased as training progresses, along with the learning rate. Sounds like hyperparameter tuning though.
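A minimal sketch of what such a linked decay schedule might look like (the exponential form, the decay rate, and the starting values are purely illustrative assumptions, not from the video):

```python
# Hypothetical schedule that decays beta and the learning rate together.
# The 0.99 per-step decay factor is an illustrative assumption.
def decayed(initial_value, step, decay_rate=0.99):
    """Exponentially decay a hyperparameter over training steps."""
    return initial_value * decay_rate ** step

beta0, lr0 = 1e-3, 1e-2  # assumed starting values
for step in (0, 100, 500):
    beta, lr = decayed(beta0, step), decayed(lr0, step)
```

Tying both to the same decay keeps their ratio constant, which is one simple way to co-tune them.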
Figuratively exploded*
Five years later and RL is still a pipe dream. Nothing was really solved in the real world. I think there are more practical areas of AI than this.
Great video, and the algorithm is finally recognizing it! Come back and produce more videos?
This kind of well-articulated explanation of research is a real service to the ML community. Thanks for sharing this.
Very good video
If you don't understand this explanation, the fault is on you.
Lmao this must be a joke. Anyone who supports this theory has no understanding of the exponential nature of how AI learns.
Excellent video
Very interesting. It looks like Nature is alive -very much alive.
Glad to see that the human biological neural network is still much more efficient than machines with artificial neural networks.
Excellent educational video on artificial and deep neural network learning.
Excellent educational video on bio-molecular technology.
Another amazing video ... thanks ... any chance of some new videos coming out on recent papers?
5:03 "to test the presence and influence of different kinds of human priors" ... this is pretty cool ...
3:12 This reminds me of Chomsky's critique of AI and LLMs. Any comments?
Thanks for sharing this! I may be misunderstanding something, but it seems like there might be a mistake in the explanation. Specifically, the claim at 12:50 that "this is the only region where the unclipped part... has a lower value than the clipped version". I think this claim might be wrong, because there is another case where the unclipped version would be selected: for example, if the ratio is e.g. 0.5 (and we assume epsilon is 0.2), the unclipped term is smaller than the clipped version (which would be 0.8), and it would be selected. Is that not the case?
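For what it's worth, the commenter's case can be checked numerically with a minimal sketch of the PPO clipped surrogate objective (epsilon = 0.2 as in the example; the function name is mine):

```python
def ppo_objective(ratio, advantage, eps=0.2):
    """PPO surrogate: the minimum of the unclipped and clipped terms."""
    unclipped = ratio * advantage
    clipped = max(1 - eps, min(ratio, 1 + eps)) * advantage  # clip ratio to [1-eps, 1+eps]
    return min(unclipped, clipped)

# With a positive advantage and ratio = 0.5, the unclipped term (0.5 * A)
# is smaller than the clipped term (0.8 * A), so the min selects it --
# exactly the extra region the comment describes.
```

So for ratio = 0.5, advantage = 1.0, the objective evaluates to 0.5, not 0.8, which supports the comment's point.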
Great Video!! I just watched 4 hours worth of lectures, in which nothing really became clear to me, and while watching this video everything clicked! Will definitely be checking out your other work
I didn't forget to subscribe, but you seem to have forgotten to keep updating.
Why have you stopped making these wonderful tutorials? I wish you had continued your channel.
Extremely amazing, thanks for creating this incredible video.
The term "activation" in the context of neural networks generally refers to the output of a neuron, regardless of whether the network is recognizing a specific pattern. The activation is indeed a numerical value that represents the result of applying the neuron's activation function to the weighted sum of its inputs. Just posting here what ChatGPT told me, because the definition of "activation" in this video confused me
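As a concrete illustration of that definition (the sigmoid choice and the function name are mine, not from the video), a single neuron's activation can be sketched as:

```python
import math

def neuron_activation(inputs, weights, bias):
    """Activation = activation function applied to the weighted sum of inputs."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum
    return 1.0 / (1.0 + math.exp(-z))                       # sigmoid activation

# The output is a numerical value for every input, whether or not the
# neuron is "recognizing" anything; a zero weighted sum gives 0.5.
```

Any activation function (ReLU, tanh, etc.) fits the same definition; sigmoid is used here only because it maps any weighted sum to a value in (0, 1).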
Thank you! This was comprehensive and comprehensible.
Very very good video, thank you
thank you sir! appreciate the effort that went into this video