Loss Functions in Deep Learning | Deep Learning | CampusX
In this video, we'll understand the concept of Loss Functions and their role in training neural networks. Join me for a straightforward explanation to grasp how these functions impact model performance.
============================
Do you want to learn from me?
Check out my affordable mentorship program at: learnwith.campusx.in
============================
📱 Grow with us:
CampusX's LinkedIn: / campusx-official
CampusX on Instagram for daily tips: / campusx.official
My LinkedIn: / nitish-singh-03412789
Discord: / discord
👍If you find this video helpful, consider giving it a thumbs up and subscribing for more educational videos on data science!
💭Share your thoughts, experiences, or questions in the comments below. I love hearing from you!
✨ Hashtags✨
#DeepLearning #LossFunctions #NeuralNetworks #MachineLearning #AI #LearningBasics #SimplifiedLearning #modeltraining
⌚Time Stamps⌚
00:00 - Intro
01:09 - What is loss function?
11:08 - Loss functions in deep learning
14:20 - Loss function vs cost function
24:35 - Advantages/Disadvantages
59:13 - Outro
Comments: 117
When will you cover RNN, encoder-decoder & transformers? Also, if you could make mini projects on these topics, it would be great. Keep doing this great work of knowledge sharing, hope your tribe grows more. 👍
My morning begins with CampusX...
@santoshpal8612
5 months ago
Gentleman, you are on the right track.
Your every word and every minute of sayings are worth a lot!
The most underrated channel I have ever seen on YouTube! Best content seen so far. Thanks a lot.
Fantastic Explanation Sir ! Absolutely brilliant ! Way to go Sir ! Thank you so much for the crystal clear explanation
Good Content, great explanation and an exceptionally gifted teacher. Learning is truly made enjoyable by your videos. Thank you for your hard work and clear teaching Nitish Sir.
Great content for me....now everything about loss function is clear .......thank you
This is the best explanation of the whole basics of losses; all my doubts are cleared. Thank you so much for this video.
These loss functions are the same as those taught in machine learning; the differences are in the Huber, binary, and categorical loss functions.
Your content delivery is truly outstanding, sir. Although the numbers don't do justice to your teaching talent, let me tell you I came here after seeing many paid courses and became fond of your teaching method. So please don't stop making such fabulous videos. I am pretty sure this channel will soon be among the top channels for ML and data science!!
Sir, you are really amazing. I have learned a lot of things from your YouTube channel.
Please continue the "100 Days of Deep Learning" series, sir; it's a humble request. This playlist and this channel are the best on all of YouTube for machine learners ❤❤❤❤
One day this channel will become the most popular for deep learning ❤️❤️
I was able to understand each and every word and concept just because of you, sir. Your teaching has brought me to a place where I can understand such concepts easily. Thank you very much, sir. Really appreciate your hard work and passion. ❣🌼🌟
It was a great Explanation . Thank you so much for such amazing videos.
Such wonderful learning experience
Great work sir. Amazing 😍
I wanted this video and got it. Thank you.
With all respect....thank you very much ❤
thank you for your hard work
Thank you so much sir for another amazing lecture ❤😊
Great video sir as expected
Amazing as always, Sir!
You have excellent teaching skills, Sir! It's like a college senior explaining a concept to me in the hostel room.
Amazing sir 🙏🏻
Beautiful explanation
Awesome sir!
Thanks for the timestamps It's really helpful
Very well explained, Thanks
Mindboggling !!!!!!!!!!!!!!!!!!
Thank you so much sir, clear explanation.
great content
Thank you!!!
Great content!
How beautiful this is 🥰
Great lecture as usual. Just one small clarification: binary cross-entropy is convex (though it has no closed-form solution), hence it has only a single global minimum and no local minima. This can be proved with simple calculus by checking that the second derivative is always greater than zero. So the statement that there are multiple local minima is not right. But thanks for your comprehensive material, which is helping us learn such complex topics with ease!
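The commenter's convexity claim can be checked numerically. Below is a sketch (the 1-parameter logistic model, toy data, and tolerance are all made up for illustration): for a convex loss curve, every second finite difference is non-negative, so the curve has no second dip.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(w, x, y):
    """Mean binary cross-entropy of a 1-parameter logistic model y_hat = sigmoid(w*x)."""
    p = sigmoid(w * x)
    eps = 1e-12  # avoid log(0)
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Toy data: positive x tends to label 1, negative to label 0
x = np.array([-2.0, -1.0, 0.5, 1.5, 2.5])
y = np.array([0, 0, 1, 1, 1])

# Sweep the single weight over a range and record the loss curve
ws = np.linspace(-5, 5, 201)
losses = np.array([bce_loss(w, x, y) for w in ws])

# Second finite difference: non-negative everywhere for a convex curve
second_diff = losses[2:] - 2 * losses[1:-1] + losses[:-2]
print(second_diff.min() >= -1e-9)  # True: no second local minimum
```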
awesome man just amazing ... ! ! !
Nowadays my mornings begin and my nights end with your lectures, sir 😅. Thanks for putting in so much effort.
Very well explained
nice explanation sir thank you so much
This is so very important
Great work
Thank you
Thank You Sir.
this playlist is a 💎💎💎💎💎
Welcome Back Sir 🤟
Amazing
amazing lectureeeeeeee
44:52 Binary cross-entropy loss is a convex function, so it will have only one minimum, which is the global minimum.
Thanxs sir
Thank you sir 😁😊
At timestamp 44:40, sir, you said that binary cross-entropy may have multiple minima, but binary cross-entropy is a convex function, so it won't have multiple minima, I think.
thanks sir
Thanks Sir
Hi, I think the Huber loss example plot at 36:59 is for a classification example rather than a regression example. The regression line should pass through the data points instead of separating them.
Great explanation. Can you tell me why we need bias in a neural network, and how it is useful?
🦸♂Thank you, brother...
Sir, please carry on with this series.
Awesome
One disadvantage of MSE that I can figure out: if there are multiple local minima, the MSE loss function can lead to a local minimum instead of the global minimum.
22:25 The error units are squared (unit²).
I am enjoying your video like a web series sir
If the difference (yᵢ - ŷᵢ) is less than 1 (a decimal), then squaring diminishes the loss value instead of magnifying it, so maybe a refinement would take this into account.
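A quick numerical check of the point above (the error values are made up): squaring shrinks errors below 1 and magnifies errors above 1.

```python
# Squaring an error below 1 makes it smaller; above 1, larger
errors = [0.3, 0.5, 1.0, 2.0, 5.0]
for e in errors:
    print(e, "->", e ** 2)
# 0.3 squared is 0.09 (diminished), 5.0 squared is 25.0 (magnified)
```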
Sir, which tool are you using for the explanations in this video?
Can we use the step function as the activation function for the last layer / prediction node in a classification problem using binary cross-entropy, for 0 and 1 outputs?
easyy thankssss
Excellent teaching skills. Sir, please provide the notes as a PDF.
Great, concise video. Loved it. A small question 💡: sometimes we do drop='first' to remove the redundant first column during one-hot encoding. Does that make a difference when using either of these categorical losses?
@pratikghute2343
1 year ago
I think this might be happening automatically, or it's not needed, because that way we could not get the loss for that category.
@AmitUtkarsh99
9 months ago
Yes, it affects the model, because you should keep the number of parameters as small as possible for an optimised model. But we don't always; it depends on the variables or inputs. For example, 2 inputs can be represented by just one variable (2^1). 3 categories require at least 2 variables, but 2^2 is 4, so we can drop one column.
At 36:27, shouldn't the line be nearly perpendicular to what you drew? Seems like a case of Simpson's paradox.
best
Wouldn't categorical and sparse cross-entropy give the same result? After one-hot encoding, all log terms become zero except the true class's, which gives the same result as the sparse version.
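The equivalence described above can be checked numerically. A sketch (the probability values are made up for illustration): with one-hot targets, every log term except the true class's is multiplied by zero, so both forms compute the same number.

```python
import numpy as np

probs = np.array([0.3, 0.6, 0.1])    # softmax output for one sample
true_class = 1                       # integer label (sparse form)
one_hot = np.array([0.0, 1.0, 0.0])  # same label, one-hot (categorical form)

categorical_ce = -np.sum(one_hot * np.log(probs))  # full dot product with one-hot
sparse_ce = -np.log(probs[true_class])             # just index the true class

print(np.isclose(categorical_ce, sparse_ce))  # True: identical losses
```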
Can you please create videos for the remaining loss functions, for autoencoders, GANs, and Transformers as well? Thanks.
At 21:06 [Mean Squared Error]: when calculating the total error with (y - ŷ), some values may be negative and can reduce the error (which we don't want); that is why we square after subtracting, as you said. My doubt is: can we just make the negative values positive? Then there would be no need to square. Please explain this. Thank you. :)
Respect
Superb video, sir! Can you tell me which stylus you are using, and the name of the drawing/writing pad you use? I want to buy one too.
@campusx-official
2 years ago
Galaxy tab s7+
please share the white board @CampusX
Learning DL and Hindi together, respect from Afghanistan Sir!
That was great fun!
Sir, the ML MICE sklearn video is still pending; please make that video. The other playlists are also very helpful. Thanks for all the content.
❤
Great
finished watching
What is the difference between: 1) updating the weights and bias on each row, for all epochs, and 2) updating per batch (all rows together), for all epochs? Can you describe scenarios where one is better than the other?
@shashankshekharsingh9336
2 months ago
+1
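The two schemes the question above asks about can be sketched with a tiny linear model trained under MSE (the data, learning rate, and epoch count here are made up): stochastic GD updates after every row, batch GD once per epoch from the mean gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)  # true slope is 3.0

lr, epochs = 0.1, 20

# 1) Stochastic: update after every row -> noisy but frequent steps
w_sgd = 0.0
for _ in range(epochs):
    for xi, yi in zip(X[:, 0], y):
        grad = 2 * (w_sgd * xi - yi) * xi  # gradient of one sample's squared error
        w_sgd -= lr * grad

# 2) Batch: one update per epoch using the mean gradient -> smooth but infrequent
w_batch = 0.0
for _ in range(epochs):
    grad = 2 * np.mean((w_batch * X[:, 0] - y) * X[:, 0])
    w_batch -= lr * grad

print(w_sgd, w_batch)  # both approach the true slope 3.0
```

Roughly: per-row updates make faster early progress and can escape shallow traps thanks to noise, while per-batch updates are stable and vectorise well; mini-batches are the usual compromise.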
thank your sir for this great content. 13/05/24
43:32 cost function = (1/n) ∑ (loss function)
Can someone explain to me how 0.3, 0.6, 0.1 are obtained at 52:37? I want to know how to get these values and which formula is used.
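(A likely answer, sketched below: such values come from the softmax function applied to the network's raw outputs, the logits. The logits here are made up, chosen so the output matches the numbers asked about.)

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)  # shift logits for numerical stability; result unchanged
    e = np.exp(z)
    return e / e.sum()

logits = np.log(np.array([0.3, 0.6, 0.1]))  # hypothetical raw network outputs
probs = softmax(logits)
print(probs.round(2))  # [0.3 0.6 0.1] -- positive and summing to 1
```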
Isn't logloss convex?
please put timestamp for each topic in this video.
Revising my concepts. August 04, 2023 😅
Thank you sir for resuming
Please take care of background noises
Hi sir, I want a complete end-to-end project video. Please share one.
The blackboard was better.
First viewer
Why you stopped posting videos in this Playlist?
@campusx-official
2 years ago
Creating the next one right now... Backpropagation
@8791692532
2 years ago
@@campusx-official Please upload at least one video every 3-4 days to maintain continuity. By the way, this playlist is going to be a game changer for most learners, because comprehensive video content for deep learning is not available on YouTube! Your method of teaching is very simple and understandable. Thank you for providing credible content!
As usual crystal clear explanation Sir ji❤❤🙌 @CampusX
If you explain like this, then of course I have to hit like...
Bird sounds are coming through in the background.
Time series in detail, please 😓
@geekyprogrammer4831
2 years ago
Let him finish this series first. Why force him like this???
@namanmodi7536
2 years ago
@@geekyprogrammer4831 true brother
Please avoid speaking Hindi in the video.