Shapley Values: Data Science Concepts

Interpret ANY machine learning model using this awesome method!
Partial Dependence Plots video: • Partial Dependence Plo...
My Patreon: www.patreon.com/user?u=49277905

Comments: 117

  • @adityanjsg99 · 2 years ago

    No fancy tools, yet you are so effective!! You must know that you provide deeper insights that even the standard books do not.

  • @ritvikmath · 2 years ago

    Appreciated!

  • @rbpict5282 · 2 years ago

    I prefer the marker pen style. Here, my complete focus is on the paper and not on the surrounding region.

  • @ritvikmath · 2 years ago

    Thanks for the feedback!!

  • @whoopeedoopee251 · 2 years ago

    Great explanation!! Love how you managed to explain the concept so simply! ❤️

  • @ritvikmath · 2 years ago

    Thanks!

  • @reginaphalange2563 · 2 years ago

    Thank you for the drawing and the intuitive explanation, which really helped me understand Shapley values.

  • @kokkoplamo · 2 years ago

    Wonderful explanation! You explained a very difficult concept simply and concisely! Thanks

  • @niks4u93 · 2 years ago

    One of the easiest + most thorough explanations, thank you

  • @xxshogunflames · 2 years ago

    Awesome video, I don't have a preference on paper or whiteboard, just keep the vids coming! First time I've learned about Shapley values, thank you for that

  • @SESHUNITR · 1 year ago

    very crisp explanation. liked it

  • @djonatandranka4690 · 1 year ago

    what a great video! such a simple and effective explanation. Thank you very much for that

  • @lythien390 · 2 years ago

    Thank you for a very well-explained video on Shapley values :D. It helped me.

  • @amrittiwary080689 · 1 year ago

    Hats off to you. Understood most of the explainability techniques

  • @ritvikmath · 1 year ago

    Glad to hear that

  • @Mar10001 · 1 year ago

    This explanation was beautiful 🥲

  • @yulinliu850 · 2 years ago

    Nicely explained. Thanks!

  • @ritvikmath · 2 years ago

    Thanks!

  • @shre.yas.n · 1 year ago

    Beautifully Explained!

  • @ritvikmath · 1 year ago

    Thanks!

  • @Aditya_Pareek · 1 year ago

    Great video, simple and easily comprehensible

  • @ritvikmath · 1 year ago

    Thanks!

  • @000000000000479 · 1 year ago

    This format is great

  • @ritvikmath · 1 year ago

    Thanks!

  • @PabloSanchez-ih2ko · 4 months ago

    Great explanation! Thanks a lot

  • @ericafontana4020 · 1 year ago

    nice explanation! loved it!

  • @JorgeGomez-kt3oq · 3 months ago

    Most underrated channel ever

  • @kanakorn.h · 1 year ago

    Excellent explanation, thanks.

  • @nature_through_my_lens · 2 years ago

    Beautiful Explanation.

  • @ritvikmath · 2 years ago

    Thanks!

  • @mahesh1234m · 2 years ago

    Hi Ritvik, really a nice video. Please cover advanced concepts like the fast gradient sign method. Your way of explaining those concepts would be really helpful for everyone.

  • @Ali-ts6po · 1 year ago

    simply awesome!

  • @MatiasRojas-xc5ol · 2 years ago

    Great video. The whiteboard is better because of all the non-verbal communication: facial expressions, gestures, ...

  • @suryaashishece · 2 years ago

    +1

  • @niknoor4044 · 2 years ago

    Definitely the marker pen style!

  • @cgmiguel · 2 years ago

    I enjoy both!

  • @kancherlapruthvi · 2 years ago

    amazing video

  • @tamar767 · 2 years ago

    Yes, this is the best!

  • @alphar85 · 2 years ago

    Hey Ritvikmath, grateful for your content. Wanted to ask: how many data science / machine learning methods does someone need to know to start a career in data science? I know the more the better lol

  • @oliverlee2819 · 5 months ago

    This is a very clear explanation, better than most of the articles that I could find online, thanks! I have one question though: when getting the global Shapley value (averaging across all the instances), why do we sum up the absolute values of the Shapley values of all the instances? Is that how we keep the desirable properties of the Shapley value? Is there any meaning in summing up the plain Shapley values (e.g., positive and negative will now cancel each other off)? Another question: when you said the expected value of the difference, is it just an arithmetic average of all the differences from all those permutations? I remember seeing that the Shapley value is actually the "weighted" average of the differences, which is related to the ordering of those features. Is step 1 already taking this into consideration, such that we only need the arithmetic average to get the final Shapley value for that instance?
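
    A minimal sketch of the global aggregation being asked about, assuming a precomputed (n_samples, n_features) array of per-instance SHAP values (the numbers below are made up for illustration). Taking the mean of absolute values keeps positive and negative local effects from cancelling out:

    ```python
    import numpy as np

    # Per-instance SHAP values: rows = instances, columns = features.
    # Illustrative numbers only, not from the video.
    shap_values = np.array([[ 0.4, -0.1],
                            [-0.3,  0.2],
                            [ 0.5, -0.4]])

    # Global importance: mean |SHAP| per feature. Without the absolute value,
    # a feature that pushes some predictions up and others down would look
    # unimportant even though it matters a lot locally.
    global_importance = np.abs(shap_values).mean(axis=0)
    print(global_importance)  # -> [0.4, 0.2333...]
    ```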

  • @koftu · 2 years ago

    How well do Shapley values align with the composition of various Principal Components? Is there a mathematical relationship between the two, or is it just wholly dependent on the features of the dataset?

  • @pravirsinha5012 · 2 years ago

    Very interesting video, Ritvik. Also very curious about your tattoo.

  • @daunchoi8679 · 2 years ago

    Thank you very much for the intuitive and clear explanation! One question: are Steps 1-5 basically the classic Shapley value, and is Step 6 SHAP (SHapley Additive exPlanations)?

  • @florianhetzel9157 · 7 months ago

    Thank you for the video, really appreciate it! I have a question about Step 3: is it necessary to 'undo' the permutation after creating the Frankenstein samples and before feeding them into the model, since the model expects Temp to be in the first position from training? Thank you very much for the clarification

  • @songjiangliu · 8 months ago

    cool man!

  • @preritchaudhary2587 · 2 years ago

    Could you create a video on Gain and Lift Charts? That would be really helpful.

  • @anmolchandrasingh2179 · 2 years ago

    Hey Ritvikmath, great video as always. I have a doubt: in step 5, the contributions of each of the features add up to the difference between the actual and predicted values. Will they always add up perfectly?

  • @Yantrakaar · 2 years ago

    I have the same question! I don't think they do. We randomly create the Frankenstein samples and take the difference in their outputs, then do this many, many times and find the average difference. This gives the Shapley value of just one feature for that sample. Because of the random nature of this process, and because this is done for each feature separately from the other features, I don't think the sum of the Shapley values for each feature necessarily adds up to the difference between the expected output and the sample output.

  • @juanorozco5139 · 2 years ago

    Please note that this method approximates the Shapley values, so I'd not expect the efficiency property to hold exactly. If you were to compute the Shapley values exactly, their sum would certainly amount to the difference between the predicted value and the average response. However, the exact computation involves power sets (which grow exponentially with the number of features), so we have to settle for approximations.
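
    For anyone who wants to see the sampling procedure in code, here is a hedged sketch of the Monte Carlo approximation discussed above, assuming a fitted model with a scikit-learn-style .predict and NumPy feature arrays (all names are illustrative, not from the video):

    ```python
    import numpy as np

    def approx_shap(model, X, x, j, n_iter=1000, seed=None):
        """Monte Carlo estimate of the Shapley value of feature j for instance x."""
        rng = np.random.default_rng(seed)
        n_features = X.shape[1]
        diffs = np.empty(n_iter)
        for m in range(n_iter):
            z = X[rng.integers(len(X))]          # random donor row from the data
            order = rng.permutation(n_features)  # random feature ordering
            pos = int(np.where(order == j)[0][0])
            x_with, x_without = x.copy(), x.copy()
            # Features after j in the ordering take the donor's values
            # (the "Frankenstein" samples from the video).
            x_with[order[pos + 1:]] = z[order[pos + 1:]]
            # x_without additionally replaces feature j itself.
            x_without[order[pos:]] = z[order[pos:]]
            diffs[m] = (model.predict(x_with[None, :])[0]
                        - model.predict(x_without[None, :])[0])
        # The "expected difference" is estimated by a plain arithmetic mean.
        return diffs.mean()
    ```

    Because this is a sampled estimate, the per-feature values only approximately satisfy the efficiency (additivity) property discussed above, and the approximation tightens as n_iter grows.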

  • @geoffreyanderson4719 · 2 years ago

    Shapley values were also taught in the AI for Medicine specialization online. There, they were intended for use with individual patients as opposed to groups or aggregates of patients. You would use Shapley values to make individualized prognoses for patients, like what is the best course of treatment for this specific individual patient. Clearly valuable information; however, it was super computationally expensive, requiring a different model to be trained for every permutation. Therefore only the simplest of models was used, particularly linear regression. I have not yet watched Ritvikmath's video, and I'm curious how different his material is from the AI for Medicine courses.

  • @geoffreyanderson4719 · 2 years ago

    In this video there was only one model trained. Inference (prediction) was re-run as many times as needed with different inputs to the same trained model. Very interesting. Much more efficient, but I'm wondering about the correctness and whether it's solving a slightly different problem than in the AI for Med course --- not sure.

  • @juanete69 · 1 year ago

    Hello. In a linear regression model, are SHAP values equivalent to the partial R^2 for a given variable? Don't they take the variance into account, as the p-values do?

  • @starkest · 2 years ago

    liked and subscribed

  • @DivijPawar · 2 years ago

    Funny, I was part of a project which dealt with this exact thing!

  • @ritvikmath · 2 years ago

    Cool!

  • @juanete69 · 1 year ago

    I like both the whiteboard and the paper. But I think it's even better to use something like PowerPoint, because it lets you reveal only the important information at that moment, hiding future information that can distract you.

  • @prateekyadav9811 · 18 days ago

    Bro, I haven't finished this video but I am sure it's going to be informative like all of your DS videos that I have watched. Just curious, why have you tattooed Mumbai's coordinates on your arm? :D

  • @ghostinshell100 · 2 years ago

    NICE!

  • @ritvikmath · 2 years ago

    Thanks!!

  • @JK-co3du · 1 year ago

    The SHAP explainer function expects a data set input called "background data". Is this the data set used to create the "Frankenstein" vectors explained in the video?
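
    As far as I can tell, yes: the background data is the reference dataset that replacement feature values are drawn from, i.e., the donor rows for the "Frankenstein" vectors. A hedged, self-contained sketch of how it is typically passed to the shap library (toy data and model, for illustration only):

    ```python
    import numpy as np
    import shap
    from sklearn.linear_model import LinearRegression

    # Toy data: 200 rows, 4 features, linear target.
    X = np.random.default_rng(0).normal(size=(200, 4))
    y = 3 * X[:, 0] - 2 * X[:, 2]
    model = LinearRegression().fit(X, y)

    background = shap.sample(X, 50)  # donor rows for replacement feature values
    explainer = shap.KernelExplainer(model.predict, background)
    shap_values = explainer.shap_values(X[:5])  # local explanations for 5 rows
    ```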

  • @yesitisme3434 · 2 years ago

    Great video as always! Would prefer more pen style

  • @chakib2378 · 1 year ago

    Thank you for your explanation, but with the SHAP library one only gives the trained model, without the training set. How can the sampling from the original dataset be done with only the trained model?

  • @beautyisinmind2163 · 2 years ago

    What is the difference between the work done by Shapley values and feature selection techniques (filter, wrapper, and embedded methods)? Aren't both of them trying to find the best features?

  • @sawmill035 · 2 years ago

    Excellent explanation! The only question I have is that, sure, in practice you can (and probably should) calculate all of these through random sampling of feature interactions (the random permutations from step 1), because as the number of features increases you would have an exponentially increasing number of feature interactions to handle, rendering random sampling of features the only viable method. My question is: wouldn't you have to iterate through all possible feature interactions, and all data set points for each, in order to calculate exact Shapley values? In other words, is the method you proposed just an approximation of the correct values?

  • @justfacts4523 · 1 year ago

    I know it's late, but this is my understanding of it in case someone else has the same question. Yes, we are getting an approximation of the correct values. But if the sample is large enough, and considering that we are taking the expected value, then by the law of large numbers we can be pretty confident of getting an appropriate estimate.

  • @johanrodriguez241 · 1 year ago

    Great. How do you think we can apply it to stacking, where we create a stacked network of multiple layers with multiple models, and to big data problems, since this approach is based on Monte Carlo to "approximate" the Shapley values?

  • @saratbhargavachinni5544 · 1 year ago

    In the Idea 1 slide: aren't we getting a composite effect instead of an isolated effect? If the feature is correlated, the second-order interactions with other features are also lost by randomly sampling along this dimension.

  • @jacobmoore8734 · 1 year ago

    So, if you had x features, say 50 instead of 4, would you randomly subset 25 (half) of them and create x1...x25? And in each of these x1...x25, the differences will be that features 1:i are conditioned on the random vector whereas feature [i+n] is not conditioned on the random vector? Trying to visualize what happens when more than 4 features are available.

  • @ghostinshell100 · 2 years ago

    Can you put out similar content for other interpretability techniques like PDP, ICE, etc.?

  • @ritvikmath · 2 years ago

    Good suggestion! As a start, you can check out my PDP video linked in the description of this video!

  • @sachinrathi7814 · 5 months ago

    Thank you for the great explanation, but I have one doubt: how do we get 200 there for temperature? You said it is the expected difference, so say we run the sampling 100 times and each time we get some difference; how did that 200 come out of those 100 differences? Did we take the average, or what math was applied there? Any response would be highly appreciated.

  • @junkbingo4482 · 2 years ago

    I would say that this video points out the fact that most ML tools are black boxes; but now, people want 'black boxes' to be explained! It's a problem you don't have when you use statistics and/or econometrics. To me it's rather curious to calculate an average value in models that are supposed to be nonlinear; well, in ANNs there is sensitivity analysis (based on the gradient), which can be a good start of course, but one has to be cautious.

  • @ritvikmath · 2 years ago

    Thanks for your notes!

  • @KetchupWithAI · 1 month ago

    13:59 I did not fully understand how the values in the chart give you the contribution of variables to the difference between the given and average prediction. I think what you were doing all along was taking the difference in predictions between two vectors (x1 and x2) you generated from an OG vector and a randomly chosen vector from the data. How does this give you the difference in prediction between the OG vector and the mean cones sold (which is what you started with)?

  • @nikhilnanda5922 · 2 years ago

    Can anyone recommend any good books for Data science in general and for such concepts and beyond? Thanks in advance!

  • @aelloro · 1 year ago

    Hello, Ritvik! Thank you for the video! The marker style works great! I'm curious: how do we deal with the situation when a feature can have great importance, but we lack observations? Following the ice-cream example, let's add a feature for the time of day (ToD). And let's assume, for some reason, that from 03:00AM-04:00AM there is a line of airport workers and passengers willing to buy. If we operate the shop at that time, we could sell 5000 cones in one hour regardless of the other feature values. But our observations only cover working hours (9AM-5PM), so the importance of this feature comes out quite low. It may sound like an imaginary problem, but in the medical field, for rare diseases, that's the case.

  • @justfacts4523 · 1 year ago

    These are my two cents. You can't use data that are outside of your training data, mainly because the prediction would not be reliable and, as a consequence, your explanation won't be reliable either. Let's remember that one of the assumptions of any machine learning model is that the production data must come from the same distribution as the training data. Hence, using data for which you have no observations whatsoever would be dangerous. The case is different when you have very little data but still have something; in that case I think you can still solve the problem.

  • @aelloro · 1 year ago

    @justfacts4523 Thank you very much! Your content is the best!

  • @geoffreyanderson4719 · 2 years ago

    Question: Which of the following two questions is the shown algorithm really answering: "How much does Temp=80 contribute to the prediction FOR THIS PARTICULAR EXAMPLE vs mean prediction?" versus "How much does Temp=80 contribute to the prediction FOR ALL REALISTIC EXAMPLES vs mean prediction?" Is there a link to the source reference used by Ritvikmath here? Thanks!

  • @mauriciotorob · 2 years ago

    Hi, great explanation. Can you please explain how Shapley values are calculated for classification problems?

  • @justfacts4523 · 1 year ago

    Hi, I know it's late for you, but I want to give my understanding in case someone else has the same question. Instead of considering the class as the output, we can use the exact same concept by taking the probabilities generated by the last softmax layer (in the case of a NN or any probabilistic-like model). Alternatively, I think we can compute that probability by checking how many times that class has been output.
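
    A hedged sketch of that idea with the shap library, assuming a scikit-learn classifier (toy dataset, for illustration only): explain the predicted class probabilities rather than the hard labels.

    ```python
    import shap
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_iris(return_X_y=True)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Pass predict_proba, not predict: each class's probability gets its
    # own set of SHAP values (the exact container varies by shap version).
    explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X, 50))
    shap_values = explainer.shap_values(X[:3])
    ```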

  • @simranshetye4694 · 2 years ago

    Hello Ritvik, I love your videos. I was wondering if there is a way to contact you. I had a couple questions about learning data science. Hope to hear from you soon, thank you.

  • @juanete69 · 1 year ago

    What does it mean in your example that SHAP is a "local" explanation?

  • @mohitdwivedi4588 · 2 years ago

    We stored the differences in an array or list after step 3 (there must be many values). How can SHAP at T=80 be a single value (200) in your example? Did we take the average of those? Basically, how can this E(diff) be a single value?

  • @dustuidea · 2 years ago

    What is the difference between adjusted R^2 and Shapley values?

  • @apargarg9914 · 2 years ago

    Hey Ritvik! May I know how to do this process for a multi-class classification problem? You have taken a regression problem as an example.

  • @thomassimancik1559 · 2 years ago

    I would assume that for a classification problem the approach remains the same. The only thing that differs is that you would choose and observe the prediction for a single class value.

  • @michellemichelle3557 · 2 years ago

    Hello, I guess it should be combinations instead of permutations, according to the coalitional game theory from which the SHAP method originates.
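
    For reference, the exact Shapley value from coalitional game theory is a sum over coalitions (subsets) S, and its factorial weights are exactly equivalent to averaging the marginal contribution over all orderings of the players, so the subset and permutation views agree:

    $$\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl[v(S \cup \{i\}) - v(S)\bigr]$$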

  • @bal1916 · 2 years ago

    Thanks for the informative video. I just have one issue: I thought Shapley values measure the impact of a feature's absence. Is this correct? If so, how was this realized here?

  • @justfacts4523 · 1 year ago

    Hi, I know it's late for you, but I want to give my understanding in case someone else has the same question. We realize this by taking different samples: the feature of interest is replaced by random values, so it won't provide any meaningful information. I'm not 100% sure of this though.

  • @bal1916 · 1 year ago

    @justfacts4523 Thanks for your reply

  • @juanete69 · 1 year ago

    I haven't understood how you decide which variables to keep fixed and which to change. Imagine you get the permutation [F,T,D,H] or [F,H,D,T].

  • @aaronzhang932 · 2 years ago

    8:16 I don't get Step 2. It seems you're lucky to get H = 8. What if the second sample is [200, 5, 70, 7]?

  • @offchan · 2 years ago

    Why is H=8 a lucky thing? H can be anything. The original H is 4. The new H is 8. Just the fact that it changes is what's important.

  • @harshavardhanachyuta2055 · 1 year ago

    @offchan So the H value for the formed vectors is from the random sample?

  • @offchan · 1 year ago

    @harshavardhanachyuta2055 Yes

  • @juanete69 · 1 year ago

    OK, SHAP is better than PDP but... what are the advantages of SHAP vs. LIME (Local Interpretable Model-agnostic Explanations) and ALE (Accumulated Local Effects)?

  • @lilrun7741 · 2 years ago

    I prefer the marker pen style too!

  • @ritvikmath · 2 years ago

    Thanks for the feedback! Much appreciated

  • @kisholoymukherjee · 1 year ago

    Great video but I do prefer the whiteboard style

  • @abrahamowos · 1 year ago

    I didn't get the part about how he got the 2000, ĉ.

  • @hassanshahzad3922 · 2 years ago

    The white board is the best

  • @baqirhusain5652 · 7 months ago

    I still do not understand how this would be applied to text

  • @offchan · 2 years ago

    Let me try to put it into my own words. To make it easy to understand, I have to simplify by lying first. So here's a soft-lie version: you have a sample with temperature 80, and you replace it with a temperature from a random sample. So if the random sample has a temperature of 70, you replace 80 with 70. Then you ask the question, "If I convert this 70 back to 80, what will be the predicted difference?" If the difference is positive, it means the temperature of 80 is increasing the prediction value. If it's negative, it's decreasing the prediction value. And this difference is called the SHAP value. We call a feature with a large absolute SHAP value important.

    Now let's fix the lie a little bit: instead of replacing only the temperature, we also replace a few other features from the random sample into the original sample. But we still only try to convert back the temperature. Then we average the SHAP value over many random samplings to reduce variance. To go even further, we calculate the SHAP value for every sample; then you have a global SHAP value instead of a local SHAP value for a specific sample. So this is pretty much an intense iterative process. And that's it, done.

  • @oliesting4921 · 2 years ago

    Pen and paper is better. It would be awesome if you could share the notes. Thank you.

  • @ritvikmath · 2 years ago

    Thanks for the feedback!

  • @tariqkhasawneh4536 · 1 year ago

    Monginis Cake Shop?

  • @taiwoowoseni9364 · 2 years ago

    Not Fahrenheit 😁

  • @rahulprasad2318 · 2 years ago

    Pen and paper is better.

  • @ritvikmath · 2 years ago

    Appreciate the feedback!

  • @sorsdeus · 2 years ago

    Whiteboard better :)

  • @ritvikmath · 2 years ago

    Noted!

  • @jawadmehmood6364 · 2 years ago

    Whiteboard

  • @dof0x88 · 2 years ago

    For noobs like me trying to learn about new things, your handwriting makes me miss lots of things; I'm not getting anything.

  • @vivekcp9582 · 2 years ago

    Marker-pen style does help with focus. But the tattoo on your hand doesn't. :P I aborted the video midway and went on a Google Maps hunt. :/

  • @a00954926 · 2 years ago

    You made this so simple to understand that I will get into Python and do this ASAP!! Thank you @ritvikmath