Neural Networks Explained from Scratch using Python

When I started learning about neural networks a few years ago, it did not occur to me to simply look at some Python code. I found it quite hard to understand all the concepts behind neural networks (e.g. bias, backpropagation, ...). Now I know that it all looks much more complicated written mathematically than it does in code. In this video, I try to give you an intuitive understanding through Python code and detailed animations. I hope it helps you :)
Code:
github.com/Bot-Academy/Neural...
Find me on:
Patreon: / botacademy
Discord: / discord
Twitter: / bot_academy
Instagram: / therealbotacademy
Citation:
[1] www.datasciencecentral.com/m/...
Additional Notes:
1. You might’ve noticed that the variable e is never actually used. There are two things worth knowing here. First, normally we would use it to calculate ‘delta_o’, but thanks to a simplification of the gradient that step is not needed in this code. Second, it is still worth computing, because printing the average error during training is a handy way to check that it decreases (see the sketch after these notes).
2. To see how the network performs on images not seen during training, you could train on only the first 50,000 images and then evaluate on the remaining 10,000 samples (see the sketch after these notes). I haven’t done that in this video for simplicity. The accuracy, however, shouldn’t change much.
3. It seems like some people have a hard time understanding the shape lines [e.g. x.shape += (1,)]. So let me try to explain:
To create a 1-tuple in Python, we have to write x = (1,). If we just wrote x = (1), Python would treat it as the integer 1.
NumPy exposes the shape attribute for arrays. Because the shape of a matrix is represented by a tuple like (2, 5) or (2, 4, 7), a vector’s shape is represented as a 1-tuple instead of a plain integer for consistency. So it is (X,).
If we want to use such a vector in matrix multiplications, it is best to turn it into a proper column matrix: NumPy can multiply a matrix with a 1-D vector, but the result is again 1-D, and the bias addition and the outer products in backpropagation then no longer line up. So we add this 'invisible' second dimension of size 1. The line simply appends (1,) to the (X,) shape, which turns the vector into a matrix of shape (X, 1). That's also why it doesn't work with (2,): that would require more values. For example, (5,) and (5, 1) both contain 5 values, while (5, 2) would contain 10.
I should’ve shown the shapes in the shape information box as (X,) instead of just X; I think that added to the confusion. The sketch below shows the trick in action.
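To make the three notes above concrete, here is a short, runnable sketch. All values are random stand-ins, and the variable names follow the video's code where possible (e, o, l); everything else is just for illustration and is not taken from the repo:

    import numpy as np

    # Note 1: the error e vs. the simplified gradient delta_o
    o = np.random.rand(10, 1)                      # stand-in network output for one sample
    l = np.zeros((10, 1)); l[3] = 1                # stand-in one-hot label
    e = 1 / len(o) * np.sum((o - l) ** 2, axis=0)  # mean squared error, useful for printing
    delta_o = o - l                                # what backpropagation actually uses here
    print("error:", e.item())

    # Note 2: train on the first 50,000 samples, evaluate on the held-out 10,000
    images = np.random.rand(60000, 784)            # stand-in for the flattened MNIST images
    labels = np.eye(10)[np.random.randint(0, 10, 60000)]
    train_images, train_labels = images[:50000], labels[:50000]
    test_images, test_labels = images[50000:], labels[50000:]
    print(train_images.shape, test_images.shape)   # (50000, 784) (10000, 784)

    # Note 3: the shape trick, turning a (784,) vector into a (784, 1) column matrix
    x = images[0].copy()                           # one flattened image: shape (784,)
    x.shape += (1,)                                # same 784 values, now shape (784, 1)
    w = np.random.rand(20, 784)                    # stand-in hidden-layer weights (20 neurons)
    b = np.random.rand(20, 1)                      # stand-in hidden-layer bias
    h = b + w @ x                                  # (20, 784) @ (784, 1) -> (20, 1)
    print(x.shape, h.shape)                        # (784, 1) (20, 1)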
Credits:
17:08 - End
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Music: Ansia Orchestra - Hack The Planet
Link: • Ansia Orchestra - Hack...
Music provided by: MFY - No Copyright
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
The animations are created with a Python library called manim. Manim was first created by Grant Sanderson, also known as 3Blue1Brown (YouTube), and is now actively developed by the manim community. Special thanks to everyone involved in developing the library!
Github: github.com/manimcommunity/manim
Contact: smarter.code.yt@gmail.com
Chapters:
00:00 Basics
02:55 Bias
04:00 Dataset
05:25 One-Hot Label Encoding
06:57 Training Loops
08:15 Forward Propagation
10:22 Cost/Error Calculation
12:00 Backpropagation
15:30 Running the Neural Network
16:55 Where to find What
17:17 Outro

Comments: 204

  • @BotAcademyYT
    @BotAcademyYT 3 years ago

    Please share this video if you know somebody whom it might help. Thanks :) edit: Some people correctly identified the 3Blue1Brown style of the video. That is because I am using the python library manim (created by 3Blue1Brown) for the animations. Link and more information in the description. Huge thanks for all the likes and comments so far. You guys are awesome!

  • @walidbezoui
    @walidbezoui 1 year ago

    WOW, first time getting to know how 3Blue1Brown works. Awesome!

  • @jonathanrigby1186
    @jonathanrigby1186 1 year ago

    Can you please help me with this... I want a chess AI to teach me what it learnt: kzread.info/dash/bejne/gZOCyc6SobPbZMY.html

  • @spendyala
    @spendyala 10 months ago

    Can you share your video manim code?

  • @twanwolthaus
    @twanwolthaus 9 months ago

    Incredible video. Not because of your insight, but because of how you use visuals to make the information as digestible as possible.

  • @hepengye4239
    @hepengye4239 3 years ago

    As an ML beginner, I know how much effort and time is needed for such visualization of a program. I would like to give you a huge thumb! Thank you for the video.

  • @xcessiveO_o
    @xcessiveO_o 3 years ago

    a thumbs up you mean?

  • @SamHydeAddict420
    @SamHydeAddict420 2 months ago

    His thumb is now massive

  • @blzahz7633
    @blzahz7633 1 year ago

    I can't say anything that hasn't been said already: this video is golden. The visualization, the explanation, everything is just so well done. Phenomenal work. I'm basically commenting just for the algo bump this video rightfully deserves.

  • @ejkitchen
    @ejkitchen 3 years ago

    FANTASTIC video. Doing Stanford's Coursera Deep Learning Specialization and they should be using your video to teach week 4. Much clearer and far better visualized. Clearly, you put great effort into this. And kudos to using 3Blue1Brown's manim lib. Excellent idea. I am going to put your video link in the course chat room.

  • @magitobaetanto5534
    @magitobaetanto5534 3 years ago

    You've just explained very clearly in a single video what others try to vaguely explain in series of dozens of videos. Thank you. Fantastic job! Looking forward to more great videos from you.

  • @craftydoeseverything9718
    @craftydoeseverything9718 10 months ago

    I know I'm watching this 2 years after it was released but I really can't stress enough how helpful this is. I've seen heaps of videos explaining the math and heaps of videos explaining the code but this video really helped me to link the two together and demystify what is actually happening in both.

  • @Transc3nder
    @Transc3nder 3 years ago

    This is so interesting. I always wondered how a neural net works... but it's also good to remind ourselves that we're not as clever as we thought. I feel humbled knowing that there are some fierce minds out there working on these complicated problems.

  • @ElNachoMacho
    @ElNachoMacho 1 year ago

    This is the kind of video that I was looking for to get beyond the basics of ML and start gaining a better and deeper understanding. Thank you for putting the effort into making this great video.

  • @mrmotion7942
    @mrmotion7942 3 years ago

    Love this so much. So organised and was really helpful. So glad you put the effort into the animation. Keep up the great work!

  • @mici432
    @mici432 3 years ago

    Saw your post on Reddit. Thank you very much for the work you put in your videos. New subscriber.

  • @angelo9915
    @angelo9915 3 years ago

    Amazing video! The explanation was very clear and I understood everything. Really hope you're gonna be posting more videos on neural networks.

  • @bdhaliwal24
    @bdhaliwal24 1 year ago

    Fantastic job with your explanation and especially the animations. All of this really helped to connect the dots.

  • @michaelbarry755
    @michaelbarry755 1 year ago

    Amazing video. Especially the matrix effect on the code in the first second. Love it.

  • @pisoiorfan
    @pisoiorfan 6 months ago

    That's it! Comprehensive training code loop for a 1 hidden layer NN in just 20 lines. Thank you sir!

  • @vxqr2788
    @vxqr2788 3 years ago

    Subscribed. We need more channels like this!

  • @chrisogonas
    @chrisogonas 1 year ago

    Superbly illustrated! Thanks for sharing.

  • @brijeshlakhani4155
    @brijeshlakhani4155 3 years ago

    This is really helpful for beginners!! Great work always appreciated bro!!

  • @photorealm
    @photorealm 1 month ago

    Excellent video and accompanying code. I just keep staring at the code, it's art. And the naming convention with the legend is insightful; the comments tell the story like a first-class narrator. Thank you for sharing this.

  • @mateborkesz7278
    @mateborkesz7278 4 months ago

    Such an awesome video! Helped me a lot to understand neural networks. Thanks a bunch!

  • @eldattackkrossa9886
    @eldattackkrossa9886 3 years ago

    oh hell yeah :) just got yourself a new subscriber, support your small channels folks

  • @v4dl45
    @v4dl45 7 months ago

    Thank you for this amazing video. I understand the huge effort in the animations and I am so grateful. I believe this is THE video for anyone trying to get into machine learning.

  • @doomcrest8941
    @doomcrest8941 3 years ago

    awesome video :) i did not know that you could use that trick for the mse 👍

  • @asfandiyar5829
    @asfandiyar5829 2 years ago

    You create some amazing content. Really well explained.

  • @jonnythrive
    @jonnythrive 2 years ago

    This was actually very good! Subscribed.

  • @Lambertusjan
    @Lambertusjan 1 year ago

    Thanks for a very clear explanation. I was doing the same from scratch in Python, but got stuck at dimensioning the weight matrices correctly, especially in this case with the 784-neuron input. Now I can check if this helps me complete my own three-layer implementation. 😅

  • @AVOWIRENEWS
    @AVOWIRENEWS 3 months ago

    It's great to see content that helps demystify complex topics like neural networks, especially using a versatile language like Python! Understanding neural networks is so vital in today's tech-driven world, and Python is a fantastic tool for hands-on learning. It's amazing how such concepts, once considered highly specialized, are now accessible to a wider audience. This kind of knowledge-sharing really empowers more people to dive into the fascinating world of AI and machine learning! 🌟🐍💻

  • @BlackSheeeper
    @BlackSheeeper 3 years ago

    Glad to have you back :D

  • @the_euro_hunter
    @the_euro_hunter 2 months ago

    This is a great video even for those who are not into this field. Great voice and explanation of how neural networks work.

  • @kenilbhikadiya8073
    @kenilbhikadiya8073 3 days ago

    Great explanation, and hats off to your efforts on these visualisations!!! 🎉❤

  • @gustavgotthelf7117
    @gustavgotthelf7117 1 month ago

    Best video on this kind of topic on the whole market. Very well done! 😀

  • @saidhougga2023
    @saidhougga2023 2 years ago

    Amazing visualized explanation

  • @kallattil
    @kallattil 4 months ago

    Excellent content and illustration 🎉

  • @Lukas-qy2on
    @Lukas-qy2on 10 months ago

    This video is pretty great. Although I had to pause, sketch along, and keep referring to the code you showed, it definitely helped me understand better how to do it.

  • @GaithTalahmeh
    @GaithTalahmeh 3 years ago

    Welcome back dude! I have been waiting for your comeback for so long. Please don't go away this long next time :) Great editing and audio quality btw, reminds me of 3b1b.

  • @BotAcademyYT
    @BotAcademyYT 3 years ago

    Thanks! I'll try uploading more consistently now that I've finished my Thesis :)

  • @EnglishRain
    @EnglishRain 3 years ago

    Great content, subscribed!

  • @Maxou
    @Maxou 3 months ago

    Really nice video, keep doing those!!

  • @raiden9753
    @raiden9753 3 years ago

    This is one of the best-explained videos I've seen for this. Great job! Hope this comment helps :)

  • @susakshamjain1926
    @susakshamjain1926 1 day ago

    Best ML video I have seen so far.

  • @cactus9277
    @cactus9277 3 years ago

    For those actually implementing something: note that at 12:08 the values in the hidden layer change back to how they were before the sigmoid was applied.

  • @BotAcademyYT
    @BotAcademyYT 3 years ago

    good point! Must have missed it when creating the video.

  • @robertplavka6194
    @robertplavka6194 1 year ago

    Yes, but wasn't the value before the sigmoid in the last cell 9? To be precise, I got something like 8.998. If I missed something, please explain; I want to know why that is.

  • @w0w893
    @w0w893 3 years ago

    Thank you for this. Fantastic video.

  • @dormetulo
    @dormetulo 3 years ago

    Amazing video really helpful!

  • @OrigamiCreeper
    @OrigamiCreeper 3 years ago

    Nice job with the explanation!!! I felt like I was watching a 3blue1brown video! A few notes: 1.) You should run through examples more often, because that is one of the best ways to understand a concept. For example, you should have run through the algorithm for the cost function so people understand it intuitively. 2.) It would be nice if you went more in depth on backpropagation and why it works. Things you did well: 1.) Nice job with the animations and how you simplified them for learning purposes; the diagrams would be much harder to understand if there were actually 784 input nodes. 2.) I love the way you dissect the code line by line! I can't wait to see more videos by you. I think this channel could get really big!

  • @BotAcademyYT
    @BotAcademyYT 3 years ago

    Thank you very much for the great feedback!

  • @EhrenLoudermilk
    @EhrenLoudermilk 5 months ago

    "does some magic." Great explanation. Thanks.

  • @malamals
    @malamals 3 years ago

    Very well explained. I really liked it. Making noise for you. Please make a similar video explaining NLP in the same intuitive way. Thank you :)

  • @itzblinkzy1728
    @itzblinkzy1728 3 years ago

    Amazing video I hope this gets more views.

  • @eirikd1682
    @eirikd1682 1 year ago

    Great video! However, you say that mean squared error is used as the loss function, and you also calculate it, yet "o - l" (seemingly the derivative of the loss function) isn't the derivative of MSE. It's the derivative of categorical cross-entropy ( -np.sum(Y * np.log(output)), with softmax before it). Anyway, keep up the great work :)
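    For anyone comparing the two, here is a tiny NumPy sketch of both gradients with respect to the output layer's pre-activation (o and l are made-up stand-ins for the network output and the one-hot label; constant factors are omitted):

        import numpy as np

        o = np.array([[0.1], [0.7], [0.2]])  # made-up output after the final activation
        l = np.array([[0.0], [1.0], [0.0]])  # one-hot label

        # MSE with a sigmoid output keeps the sigmoid derivative o * (1 - o) in the chain rule
        delta_mse = (o - l) * o * (1 - o)
        # softmax + categorical cross-entropy simplifies to exactly o - l
        delta_ce = o - l
        print(delta_mse.ravel(), delta_ce.ravel())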

  • @wariogang1252
    @wariogang1252 3 years ago

    Great video, really interesting!

  • @gonecoastaltoo
    @gonecoastaltoo 3 years ago

    Such a great video -- high quality and easy to follow. Thanks. One typo in the Additional Notes: (X,) + (1,) == (X, 1) -- this is shown correctly in the video, but in the Notes you show the result as (1, X).

  • @BotAcademyYT
    @BotAcademyYT 3 years ago

    Thank you very much for pointing out the inconsistency. You're right, it is wrong in the description. I just corrected it.

  • @jordyvandertang2411
    @jordyvandertang2411 3 years ago

    Hey, this was a great intro! It gave me a good playground to experiment with: increasing the nodes of the hidden layer, changing the activation function, and even adding an additional hidden layer to evaluate the effects/effectiveness! With more epochs I could get it above 99% accuracy (on the training set, so it might be overfitted, but hey).

  • @neliodiassantos
    @neliodiassantos 3 years ago

    Great work! Thanks for the explanation.

  • @devadethan9234
    @devadethan9234 7 months ago

    Yes, finally I have found the golden channel. Thanks, bud!

  • @andrewfetterolf7042
    @andrewfetterolf7042 1 year ago

    Well done, I couldn't ask for a better video. Germans make the best and most detailed educational videos here on YouTube. The pupils of the world say thank you.

  • @napomokoetle
    @napomokoetle 9 months ago

    Wow! Thank you so much. You rock. Now looking forward to "Transformers Explained from Scratch using Python" ;)

  • @pythonbibye
    @pythonbibye 3 years ago

    I can tell you put a lot of work into this. You deserve more views! (also commenting for algorithm)

  • @miguelhernandez3730
    @miguelhernandez3730 3 years ago

    Excellent video

  • @ThePaintingpeter
    @ThePaintingpeter 1 year ago

    Fantastic video. I really appreciate the effort YouTubers put into great videos like this one.

  • @neuralworknet
    @neuralworknet 10 months ago

    12:40 Why don't we use the derivative of the activation function for delta_o, but we do use it for delta_h? Any answers???

  • @hidoxy1
    @hidoxy1 5 months ago

    I was confused about the same thing, did you figure it out?

  • @jimbauer9508
    @jimbauer9508 3 years ago

    Great explanation - Thank you for making this!

  • @danielniels22
    @danielniels22 3 years ago

    Hello, will you do one with cross-entropy as the loss function? Or do you know any video for reference? I get confused reading a book or paper :(

  • @NikoKun
    @NikoKun 1 year ago

    What are you referring to when you talk about "defining the matrix from the right-layer to the left-layer" @ 2:35 ? I'm sure I'm just missing something obvious, but I can't seem to figure out what that's referring to in the code..

  • @dexterroy
    @dexterroy 2 months ago

    Listen to the man, listen well. He is giving accurate and incredibly valuable knowledge and information that took me years to learn.

  • @HanzoHasashi-bv7rm
    @HanzoHasashi-bv7rm 7 months ago

    Video Level: Overpowered!

  • @0xxi1
    @0xxi1 11 months ago

    you are the man! My respect goes out to you

  • @hoot999
    @hoot999 4 months ago

    great video, thanks!

  • @cryptoknightatheaume6462
    @cryptoknightatheaume6462 2 years ago

    Awesome, man. Could you please tell me how you created this neural animation? It's really nice.

  • @nomnom8127
    @nomnom8127 3 years ago

    Great video

  • @enriquefernandezaraujo3943
    @enriquefernandezaraujo3943 1 year ago

    TKU for this excellent video👌

  • @tanvir-tonoy-programmer
    @tanvir-tonoy-programmer 3 months ago

    Hey, do you use manim? I was curious: should I use manim or After Effects to visualise math concepts like those?

  • @hchattaway
    @hchattaway 9 months ago

    Excellent video and explanation of this classic intro to CV... However, when I clone the repo, install poetry, and run poetry install, it throws a ton of errors. Is there just a requirements.txt file that could be used instead? I am using Ubuntu 23.04 and Python 3.11.3.

  • @VereskM
    @VereskM 3 years ago

    Excellent video. Best of the best :) I want to see more, and more slowly, about the backpropagation algorithm. Those are the most interesting moments... maybe it would be better to make step-by-step slides?

  • @2wen98
    @2wen98 1 year ago

    how could i split the data into training and testing data?

  • @heckyes
    @heckyes 1 year ago

    Do these initial layer numbers have to be between 0 and 1? Can't they just be any number if the activation function will clamp them down to be between 0 and 1?

  • @curtezyt1984
    @curtezyt1984 10 months ago

    you got a subscriber ❤

  • @2wen98
    @2wen98 1 year ago

    how did you make the visualisations?

  • @johannesvartdal624
    @johannesvartdal624 4 months ago

    This video feels like a 3Blue1Brown video, and I like it.

  • @payola5000
    @payola5000 3 years ago

    I really loved your video, it's so clearly explained. I have a kind of big question: what if you had a data frame where all the columns are related to each other, but there are different functions for certain parts of it? I'm trying to make a neural network that is meant to understand the functional parts of proteins, in order to create new proteins.

  • @BotAcademyYT
    @BotAcademyYT 3 years ago

    Thanks! That's a really hard one :D If there is some temporal structure in the data, you'd need a recurrent NN like an LSTM. But I think that's not the case for proteins. So if they are related to each other, I guess you'd flatten the data frame and use it as input. If the input dimension is too large, I think you need some other feature extraction technique before applying a NN. But I am just guessing here, tbh. There might be better approaches directly for proteins (there are surely some good papers out there, because it's a topic with quite some research behind it).

  • @DavidCVdev
    @DavidCVdev 3 years ago

    Amazing video

  • @wakeupamerica2824
    @wakeupamerica2824 3 years ago

    Making noise for you, good luck!

  • @quant-prep2843
    @quant-prep2843 2 years ago

    Most intuitive video on the whole planet. Likewise, can you come up with a brief explanation of the NEAT algorithm as well?

  • @BotAcademyYT
    @BotAcademyYT 2 years ago

    Thanks! I‘ll add it to my list. If more people request it or if I‘m out of video ideas, I‘ll do it :-)

  • @quant-prep2843
    @quant-prep2843 2 years ago

    @@BotAcademyYT Nooo, we can't wait... I shared this video across all Discord servers, and most of them asked whether this guy could make a video like this on NEAT or HyperNEAT, because there aren't many resources out there. Hope you will make it!

  • @lexflow2319
    @lexflow2319 1 year ago

    What software are you using to animate?

  • @rverm1000
    @rverm1000 3 months ago

    Thanks. I wonder if I could train it for other pictures?

  • @yakubumshelia1668
    @yakubumshelia1668 1 year ago

    Excellent

  • @RamiSlicer
    @RamiSlicer 3 years ago

    I love it!

  • @cocoarecords
    @cocoarecords 3 years ago

    Wow amazing

  • @jnaneswar1
    @jnaneswar1 1 year ago

    extremely thankful

  • @boozflooz6255
    @boozflooz6255 11 months ago

    Clarification: is this the delta rule? If not, what method did you use for backpropagation?

  • @Michael-ty2uo
    @Michael-ty2uo 3 months ago

    The first minute of this video got me asking who this dude is and whether he makes more videos explaining complicated topics in a simple way. Please do more.

  • @himanshusethi8246
    @himanshusethi8246 3 years ago

    Thanks a lot sir

  • @oliverb.2083
    @oliverb.2083 3 years ago

    For running the code on Ubuntu 20.04 you need to do this:
        git clone github.com/Bot-Academy/NeuralNetworkFromScratch.git
        cd NeuralNetworkFromScratch
        sudo apt-get install python3 python-is-python3 python3-tk -y
        pip install --user poetry
        ~/.local/bin/poetry install
        ~/.local/bin/poetry run python nn.py

  • @thomasklemmer4861
    @thomasklemmer4861 3 years ago

    Outstanding!!

  • @MomSpaghetti
    @MomSpaghetti 22 days ago

    Thank you so much 💯💯🙏

  • @roghibashfahani15
    @roghibashfahani15 1 year ago

    Hello sir, what if I want to replace the digits with letters?

  • @jassi9022
    @jassi9022 3 years ago

    brilliant

  • @viktorvegh7842
    @viktorvegh7842 2 months ago

    11:32 Why are you checking for the highest value? I don't understand why, when the highest is 0.67, it's classified as 0. Can you please explain? Like, what does this number have to be, for example, for the input to be classified as 1?

  • @yoctometric
    @yoctometric 3 years ago

    Algy comment right here, thanks for the wonderful video!

  • @Ach_4x
    @Ach_4x 12 days ago

    Hey guys, can someone help me? I have a project where I need to define an automaton for handwritten digit recognition, and I still don't know how to define the states and transitions for my automaton.

  • @OK-dy8tr
    @OK-dy8tr 3 years ago

    Lucid explanation !!

  • @bonbonpony
    @bonbonpony 1 year ago

    What if the shape in the input can shift all around the place? It's still the same shape (e.g. a hand-written digit), but one time it is more to the left, another time it is more to the right and a little closer to the bottom, etc. Let's say that my canvas is 800×800 pixels, and I need to detect this 28×28 digit no matter where it appears on this canvas.

  • @Ragul_SL
    @Ragul_SL 3 months ago

    How is the hidden layer size set to 20? How is it decided?

  • @hynesie11
    @hynesie11 4 months ago

    For the first node in the hidden layer you added the bias node of 1; for the rest of the nodes in the hidden layer you multiplied by the bias node of 1??