Loss Functions in Deep Learning | Deep Learning | CampusX

In this video, we'll understand the concept of Loss Functions and their role in training neural networks. Join me for a straightforward explanation to grasp how these functions impact model performance.
============================
Do you want to learn from me?
Check out my affordable mentorship program at: learnwith.campusx.in
============================
📱 Grow with us:
CampusX's LinkedIn: / campusx-official
CampusX on Instagram for daily tips: / campusx.official
My LinkedIn: / nitish-singh-03412789
Discord: / discord
👍If you find this video helpful, consider giving it a thumbs up and subscribing for more educational videos on data science!
💭Share your thoughts, experiences, or questions in the comments below. I love hearing from you!
✨ Hashtags✨
#DeepLearning #LossFunctions #NeuralNetworks #MachineLearning #AI #LearningBasics #SimplifiedLearning #modeltraining
⌚Time Stamps⌚
00:00 - Intro
01:09 - What is a loss function?
11:08 - Loss functions in deep learning
14:20 - Loss function vs cost function
24:35 - Advantages/Disadvantages
59:13 - Outro

Comments: 117

  • @kumarabhishek1064 · 2 years ago

    When will you cover RNN, encoder-decoder & transformers? Also, if you could make mini projects on these topics, it would be great. Keep doing this great work of knowledge sharing, hope your tribe grows more. 👍

  • @paragvachhani4643 · 1 year ago

    My morning begins with CampusX...

  • @santoshpal8612 · 5 months ago

    Gentleman, you are on the right track.

  • @AhmedAli-uj2js · 1 year ago

    Every word and every minute of what you say is worth a lot!

  • @pratikghute2343 · 1 year ago

    The most underrated channel I have ever seen on YouTube! Best content seen so far... thanks a lot.

  • @anuradhabalasubramanian9845 · 1 year ago

    Fantastic explanation, sir! Absolutely brilliant! Way to go, sir! Thank you so much for the crystal-clear explanation.

  • @anitaga469 · 1 year ago

    Good content, great explanation, and an exceptionally gifted teacher. Learning is truly made enjoyable by your videos. Thank you for your hard work and clear teaching, Nitish sir.

  • @abhaykumaramanofficial · 1 year ago

    Great content for me... now everything about loss functions is clear... thank you.

  • @HirokiKudoGT · 1 year ago

    This is the best explanation of the whole basics of losses; all doubts are cleared. Thank you so much for this video.

  • @tanmayshinde7853 · 2 years ago

    These loss functions are the same as those taught in machine learning; the differences are in the Huber, binary, and categorical loss functions.

  • @HARSHKUMAR-cm7ek · 8 months ago

    Your content delivery is truly outstanding, sir. The numbers don't do justice to your teaching talent, but let me tell you, I came here after seeing many paid courses and became fond of your teaching method. So please don't stop making such fabulous videos. I am pretty sure this channel will soon be among the top channels for ML and data science!!

  • @AlAmin-xy5ff · 1 year ago

    Sir, you are really amazing. I have learned a lot of things from your YouTube channel.

  • @FaizKhan-zu2kp · 1 year ago

    Please continue the "100 Days of Deep Learning" series, sir; it's a humble request. This playlist and this channel are the best on all of YouTube for machine learners ❤❤❤❤

  • @abchpatel4745 · 5 months ago

    One day this channel will become the most popular one for deep learning ❤️❤️

  • @practicemail3227 · 1 year ago

    I was able to understand each and every word and concept just because of you, sir. Your teaching has brought me to a place where I can understand such concepts easily. Thank you very much, sir. I really appreciate your hard work and passion. ❣🌼🌟

  • @palmangal · 10 months ago

    It was a great explanation. Thank you so much for such amazing videos.

  • @barunkaushik7015 · 1 year ago

    Such a wonderful learning experience.

  • @RanjitSingh-rq1qx · 1 year ago

    Great work sir. Amazing 😍

  • @talibdaryabi9434 · 1 year ago

    I wanted this video and got it. Thank you.

  • @paruParu-rc1bu · 1 year ago

    With all respect... thank you very much ❤

  • @farhadkhan3893 · 1 year ago

    Thank you for your hard work.

  • @amLife07 · 14 days ago

    Thank you so much sir for another amazing lecture ❤😊

  • @sambitmohanty1758 · 2 years ago

    Great video sir, as expected.

  • @GamerBoy-ii4jc · 2 years ago

    Amazing as always, sir!

  • @parikshitshahane6799 · 4 months ago

    You have excellent teaching skills, sir! It's like a college senior explaining a concept to me in the hostel room.

  • @True_Laughfing · 1 year ago

    Amazing sir 🙏🏻

  • @SumanPokhrel0 · 1 year ago

    Beautiful explanation

  • @IRFANSAMS · 2 years ago

    Awesome sir!

  • @Shisuiii69 · 4 months ago

    Thanks for the timestamps, they're really helpful.

  • @narendraparmar1631 · 5 months ago

    Very well explained, thanks.

  • @rb4754 · 1 month ago

    Mind-boggling!!

  • @rashidsiddiqui4502 · 3 months ago

    Thank you so much sir, clear explanation.

  • @jayantsharma2267 · 2 years ago

    Great content.

  • @Sara-fp1zw · 2 years ago

    Thank you!!!

  • @uzairrehman5765 · 8 months ago

    Great content!

  • @safiullah353 · 1 year ago

    How beautiful this is 🥰

  • @Tusharchitrakar · 3 months ago

    Great lecture as usual. Just one small clarification: binary cross-entropy is convex (though it has no closed-form solution), hence it has only a single global minimum and no local minima. This can be proved with simple calculus by checking that the second derivative is always greater than 0. So the statement that there are multiple local minima is not right. But thanks for your comprehensive material, which is helping us learn such complex topics with ease!
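    A quick numerical check of the claim above (a sketch, not from the video, assuming a logistic-regression setup where the prediction is sigmoid(w*x); with hidden layers the loss surface is generally non-convex):

        import numpy as np

        # Binary cross-entropy of one example under a logistic model:
        # yhat = sigmoid(w * x), L(w) = -[y*log(yhat) + (1-y)*log(1-yhat)]
        def bce_loss(w, x=2.0, y=1.0):
            yhat = 1.0 / (1.0 + np.exp(-w * x))
            return -(y * np.log(yhat) + (1 - y) * np.log(1 - yhat))

        # Numerical second derivative over a range of weights:
        # convexity means the curvature should never be negative.
        ws = np.linspace(-5.0, 5.0, 201)
        h = 1e-3
        curv = (bce_loss(ws + h) - 2 * bce_loss(ws) + bce_loss(ws - h)) / h**2
        print(curv.min())  # ~1.8e-4 > 0: no negative curvature, one global minimum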

  • @rohansingh6329 · 5 months ago

    Awesome man, just amazing!!!

  • @amitmishra1303 · 5 months ago

    Nowadays my mornings and nights end with your lectures, sir 😅. Thanks for putting in so much effort.

  • @faheemfbr9156 · 10 months ago

    Very well explained

  • @uddhavsangle2219 · 11 months ago

    Nice explanation sir, thank you so much.

  • @narendersingh6492 · 2 months ago

    This is so very important

  • @manashkumarbhadra6208 · 1 month ago

    Great work

  • @pavangoyal6840 · 2 years ago

    Thank you

  • @ParthivShah · 2 months ago

    Thank You Sir.

  • @ShubhamSharma-gs9pt · 2 years ago

    this playlist is a 💎💎💎💎💎

  • @RajaKumar-yx1uj · 2 years ago

    Welcome Back Sir 🤟

  • @mrityunjayupadhyay7332 · 11 months ago

    Amazing

  • @kindaeasy9797 · 22 days ago

    Amazing lecture!

  • @stoic_sapien1 · 26 days ago

    44:52 Binary cross-entropy loss is a convex function; it has only a single minimum, which is the global minimum.

  • @bhojpuridance3715 · 9 months ago

    Thanks sir.

  • @hey.Sourin · 3 months ago

    Thank you sir 😁😊

  • @ANKUSHKUMAR-jr1pf · 1 year ago

    At timestamp 44:40, sir, you said that binary cross-entropy may have multiple minima, but binary cross-entropy is a convex function, so I think it won't have multiple minima.

  • @sanchitdeepsingh9663 · 8 months ago

    thanks sir

  • @zkhan2023 · 2 years ago

    Thanks Sir

  • @OguriRavindra · 4 months ago

    Hi, I think the Huber loss example plot @ 36:59 is for a classification example rather than a regression one. A regression line should pass through the data points instead of separating them.

  • @shantanuekhande4788 · 2 years ago

    Great explanation. Can you tell me why we need bias in a neural network, and how it is useful?
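    A quick sketch of what the bias does (an illustration, not from the video): just like the intercept b in y = wx + b, it lets a neuron shift its activation threshold away from the origin, which weights alone cannot do:

        import numpy as np

        def sigmoid(z):
            return 1 / (1 + np.exp(-z))

        x = np.array([-2.0, 0.0, 2.0])
        w = 1.0

        print(sigmoid(w * x))        # no bias: outputs are centered around x = 0
        print(sigmoid(w * x - 2.0))  # bias of -2 shifts the threshold to x = 2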

  • @sumitprasad035 · 1 year ago

    🦸‍♂Thank you Bhaiya ...

  • @partharora6023 · 2 years ago

    Sir, carry on with this series.

  • @ManasNandMohan · 1 month ago

    Awesome

  • @lakshya8532 · 11 months ago

    One disadvantage of MSE that I can figure out: if the loss surface has multiple local minima, the MSE loss function can lead the optimizer to a local minimum instead of the global minimum.

  • @kindaeasy9797 · 23 days ago

    22:25 unit^2

  • @nxlamik1245 · 5 months ago

    I am enjoying your videos like a web series, sir.

  • @ariondas7415 · 23 days ago

    If the difference (yᵢ - ŷᵢ) is a decimal below 1, then squaring diminishes the loss value rather than magnifying it, so maybe a refinement would take this into account.
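    A quick numeric illustration of that point:

        import numpy as np

        residuals = np.array([0.5, 2.0])
        print(residuals ** 2)  # [0.25 4.]: errors below 1 shrink, errors above 1 grow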

  • @suriyab8143 · 1 year ago

    Sir, which tool are you using for the explanations in this video?

  • @aakiffpanjwani1089 · 3 months ago

    Can we use a step function as the activation for the last layer/prediction node in a classification problem using binary cross-entropy, for 0 and 1 outputs?
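    For context, a sketch of why a hard 0/1 output breaks binary cross-entropy (an illustration, not from the video): BCE needs a probability strictly between 0 and 1, because log(0) is undefined, and the step function also has zero gradient almost everywhere, so no learning signal can backpropagate. That is why a sigmoid is used instead:

        import numpy as np

        def bce(y, yhat, eps=1e-12):
            yhat = np.clip(yhat, eps, 1 - eps)  # clip because log(0) is undefined
            return -(y * np.log(yhat) + (1 - y) * np.log(1 - yhat))

        print(bce(1.0, 1.0))   # step output "1": loss ~ 0, but gradient is also 0
        print(bce(1.0, 0.0))   # step output "0": ~27.6, unbounded without the clip
        print(bce(1.0, 0.73))  # sigmoid output: finite loss with a usable gradient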

  • @kindaeasy9797 · 22 days ago

    Easy, thanks!

  • @Avsjagannath · 6 months ago

    Excellent teaching skills. Sir, please provide a notes PDF.

  • @techsavy5669 · 2 years ago

    Great concise video, loved it. A small question 💡: sometimes we do drop='first' to remove the redundant first column during one-hot encoding. Does that make a difference when using either of these categorical losses?

  • @pratikghute2343 · 1 year ago

    I think this might happen automatically, or it isn't needed, because that way we could not get the loss for that category.

  • @AmitUtkarsh99 · 9 months ago

    Yes, it affects the model, because you should keep the number of parameters as small as possible for an optimized model. But we don't always; it depends on the variables or input. For example, 2 categories can be encoded by just one variable (2^1 = 2), while 3 categories require at least 2 variables, and since 2^2 = 4 we can drop one column.
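    A small sketch of the usual convention behind this (not from the video): drop='first' is applied to one-hot-encoded input features to remove redundancy, while the target of categorical cross-entropy keeps the full one-hot vector, one probability slot per class, so nothing is dropped there:

        import numpy as np

        # Full one-hot target for 3 classes: categorical cross-entropy needs
        # one slot per class so every class can carry its own log term.
        y_true = np.array([0.0, 1.0, 0.0])       # true class is index 1 of 3
        y_pred = np.array([0.2, 0.7, 0.1])       # softmax output over all 3 classes
        print(-np.sum(y_true * np.log(y_pred)))  # ~0.357: only the true class counts

        # Dropping the first column of y_true/y_pred would leave class 0 without
        # a probability slot, which is why drop='first' is reserved for inputs.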

  • @ashwinjain5566 · 10 months ago

    At 36:27, shouldn't the line be nearly perpendicular to the one you drew? Seems like a case of Simpson's paradox.

  • @vinayakchhabra7208 · 1 year ago

    best

  • @tejassrivastava6971 · 1 year ago

    Wouldn't categorical and sparse cross-entropy become the same? After OHE, all log terms become zero except the current one, which gives the same result as sparse.
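    They do compute the same value; sparse cross-entropy just takes the integer class index directly instead of a one-hot vector. A quick check (a sketch, assuming TensorFlow/Keras is installed):

        import numpy as np
        import tensorflow as tf

        y_pred = np.array([[0.2, 0.7, 0.1]])  # softmax output for one example

        cat = tf.keras.losses.CategoricalCrossentropy()(
            np.array([[0.0, 1.0, 0.0]]), y_pred)   # one-hot label
        sparse = tf.keras.losses.SparseCategoricalCrossentropy()(
            np.array([1]), y_pred)                 # integer label

        print(float(cat), float(sparse))  # identical: both ~0.357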

  • @74kumarrohit · 4 months ago

    Can you please create videos for the remaining loss functions, for autoencoders, GANs, and transformers as well? Thanks.

  • @user-fr9fg3rf2h · 4 months ago

    At 21:06 [mean squared error]: when calculating the total error with (y - ŷ), some values may be negative and cancel out the error (which we don't want); that is why we square after subtracting, as you said. My doubt is: can we just make the negative values positive? Then there would be no need to square. Please explain this. Thank you. :)
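    Making the negative values positive does work: that is exactly the mean absolute error (MAE). The usual reasons to square instead: x² is smoothly differentiable everywhere while |x| has a kink at 0, and squaring penalizes large errors more heavily. A small comparison:

        import numpy as np

        errors = np.array([1.0, -1.0, 3.0])  # (y - yhat) residuals with mixed signs

        print(np.mean(np.abs(errors)))  # MAE ~ 1.67: absolute value fixes the sign
        print(np.mean(errors ** 2))     # MSE ~ 3.67: also fixes it, magnifies big errors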

  • @alastormoody1282 · 3 months ago

    Respect

  • @AkashBhandwalkar · 2 years ago

    Superb video, sir! Can you tell me which stylus you're using, and the name of the drawing/writing pad? I want to buy one too.

  • @campusx-official · 2 years ago

    Galaxy Tab S7+

  • @abhisheksinghyadav4970 · 1 year ago

    Please share the whiteboard, @CampusX.

  • @wahabmamond4368 · 7 months ago

    Learning DL and Hindi together; respect from Afghanistan, sir!

  • @vinayakchhabra7208 · 1 year ago

    That was really enjoyable!

  • @lonehawk4096 · 2 years ago

    Sir, the ML MICE sklearn video is still pending; please make that video. The other playlists are also very helpful. Thanks for all the content.

  • @alokmishra5367 · 7 months ago

  • @user-xw1eu7jx7n · 4 months ago

    Great

  • @sandipansarkar9211 · 1 year ago

    finished watching

  • @KiyotakaAyanokoji1 · 10 months ago

    What is the difference between: 1) updating the weights and bias on each row, for all epochs, and 2) updating per batch (all rows together), for all epochs? Can you tell scenarios where one is better than the other? (A sketch comparing the two follows below this thread.)

  • @shashankshekharsingh9336 · 2 months ago

    +1
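    On the question above: a minimal sketch of the two update schemes on a one-weight linear model with MSE (an illustration, not from the video). Per-row updates are stochastic gradient descent: noisy but cheap per step, and often faster on large or redundant data. Per-batch updates are smoother, but each step costs a full pass over the data:

        import numpy as np

        X = np.array([1.0, 2.0, 3.0, 4.0])
        y = 2.0 * X                              # true weight is 2.0

        def grad(w, xb, yb):                     # d/dw of MSE on a batch
            return np.mean(2 * (w * xb - yb) * xb)

        w_row, w_batch, lr = 0.0, 0.0, 0.05
        for epoch in range(20):
            for i in range(len(X)):              # scheme 1: update on every row
                w_row -= lr * grad(w_row, X[i:i+1], y[i:i+1])
            w_batch -= lr * grad(w_batch, X, y)  # scheme 2: one update per epoch

        print(w_row, w_batch)                    # both approach 2.0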

  • @shashankshekharsingh9336 · 2 months ago

    Thank you sir for this great content. 13/05/24

  • @vishalpatil228 · 7 months ago

    43:32 cost function = (1/n) ∑ (loss function)
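    In code, that relationship is just an average (illustrative values):

        import numpy as np

        per_row_loss = np.array([0.5, 0.1, 0.9, 0.3])  # loss on each training row
        print(per_row_loss.mean())                     # cost = (1/n) * sum = 0.45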

  • @KaranGupta-kv6wq · 3 months ago

    Can someone explain to me how 0.3, 0.6, 0.1 are coming @ 52:37? I want to know how I can get these values and which formula is used.
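    Those values come from the softmax formula: each raw score from the last layer is exponentiated and divided by the sum of the exponentials, so the outputs are positive and sum to 1. A sketch with made-up logits that yield roughly those numbers:

        import numpy as np

        def softmax(z):
            e = np.exp(z - z.max())  # subtract the max for numerical stability
            return e / e.sum()

        logits = np.array([1.10, 1.79, 0.0])  # hypothetical raw scores (logits)
        print(softmax(logits))                # ~[0.3, 0.6, 0.1]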

  • @spyzvarun5478 · 1 year ago

    Isn't log loss convex?

  • @anishkhatiwada2502 · 6 months ago

    Please put timestamps for each topic in this video.

  • @ahmadtalhaansari4456 · 11 months ago

    Revising my concepts. August 04, 2023 😅

  • @sam-mv6vj · 2 years ago

    Thank you sir for resuming.

  • @amitbaderia6385 · 3 months ago

    Please take care of the background noise.

  • @praveendeena1493 · 2 years ago

    Hi sir, I want a complete end-to-end project video. Please share one.

  • @vikeshdas3909 · 2 years ago

    The blackboard was good.

  • @vikeshdas3909 · 2 years ago

    First viewer

  • @8791692532 · 2 years ago

    Why did you stop posting videos in this playlist?

  • @campusx-official · 2 years ago

    Creating the next one right now... Backpropagation

  • @8791692532 · 2 years ago

    @campusx-official Please upload at least one video every 3-4 days to maintain continuity. By the way, this playlist is going to be a game changer for most learners, because comprehensive video content for deep learning is not available on YouTube! Your method of teaching is very simple and understandable. Thank you for providing credible content!

  • @Sandesh.Deshmukh · 2 years ago

    As usual, a crystal-clear explanation, sir ji ❤❤🙌 @CampusX

  • @Pipython · 1 month ago

    If you explain it like this, I'll have to hit like, won't I?...

  • @assetss · 1 year ago

    Bird sounds are audible in the background.

  • @Lucifer-wd7gh · 2 years ago

    Time series in detail, please 😓

  • @geekyprogrammer4831 · 2 years ago

    Let him finish this series first. Why force him like this???

  • @namanmodi7536 · 2 years ago

    @geekyprogrammer4831 True, brother.

  • @mrarul1 · 4 months ago

    Please avoid speaking Hindi in the video.