Probability Calibration: Data Science Concepts

The probabilities you get back from your models are ... usually very wrong. How do we fix that?
My Patreon: www.patreon.com/user?u=49277905
Link to Code: github.com/ritvikmath/KZread...

Comments: 53

  • @a00954926
    2 years ago

    This is super amazing!! It's such an important concept that, like you said, doesn't get all the credit it deserves. And sometimes we forget this step.

  • @zijiali8349
    a year ago

    I got asked about this in an interview. Thank you so much for posting this!!!

  • @chineloemeiacoding
    2 years ago

    Awesome video!! I was trying to figure out how this concept works from the scikit-learn documentation, but I found that material too theoretical. Your video puts things in a much friendlier way!! Many thanks :)

  • @danielwiczew
    2 years ago

    Great video. It's a very interesting concept that I had never heard about, but mathematically it makes sense. It's also interesting that a linear model was able to correct the error so profoundly. Still, isn't this a kind of meta-learning? Also, I think you shouldn't use the name "test" set for training the calibration model, but rather e.g. a meta-dataset; the test set is reserved only for the final, cross-validated model.

  • @aparnamahalingam1595
    a year ago

    This was FABULOUS, thank you.

  • @ritvikmath
    a year ago

    Glad you enjoyed it!

  • @nishadseeraj7034
    2 years ago

    Great material as usual!! I always look forward to learning from you. Question: are you planning on doing any material covering XGBoost in the future?

  • @abhishek50393
    2 years ago

    Great video, keep it up!

  • @hameddadgour
    a year ago

    Great video!

  • @IgorKuts
    a month ago

    Thank you! Brilliant video on such an important applied-ML topic. Though I haven't seen any mention of isotonic regression in the top comments (it is also available in the scikit-learn package). More often than not it performs better on this task than logistic regression, thanks to its inherent monotonicity constraint and piecewise nature. Personally, I found the part about using different sets (test/val) for calibration and calibration validation the most useful. Right now I am developing a production classification model, and I think I made the mistake of calibrating on the training set. Oops.
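
    For anyone who wants to try the isotonic approach mentioned above, here is a minimal, self-contained scikit-learn sketch on synthetic data; the splits and variable names are placeholders, not anything from the video's notebook.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.isotonic import IsotonicRegression
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in data; in practice use your own train / calibration / validation splits.
        X, y = make_classification(n_samples=5000, random_state=0)
        X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
        X_cal, X_val, y_cal, y_val = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

        clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

        # Fit the isotonic map on the calibration split, using the uncalibrated
        # probability as the single input; "clip" keeps outputs inside [0, 1].
        iso = IsotonicRegression(out_of_bounds="clip")
        iso.fit(clf.predict_proba(X_cal)[:, 1], y_cal)

        # Calibrated probabilities on the validation split.
        p_calibrated = iso.predict(clf.predict_proba(X_val)[:, 1])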

  • @mohammadrahmaty521
    2 years ago

    Thank you!

  • @accountname1047
    2 years ago

    Does it generalize, or is it just overfitting with more steps?

  • @buramjakajil5232
    10 months ago

    I also have some trouble with the second, calibration phase; I'm curious what happens to out-of-sample performance after calibration. I don't claim to understand the background here, but I easily get the feeling that "the model fit did not produce probabilities that match the observed distribution, so let's wrap the random forest in a logistic function and fit it to the empirical distribution". Naturally this would perform better, but does the out-of-sample performance also improve? Sorry for my confusion; this is a pretty new concept for me as well.

  • @davidwang8971
    a year ago

    awesome!

  • @tusharmadaan5480
    10 months ago

    This is such an important concept. I feel guilty about deploying models without a calibration layer.

  • @MuhammadAlfiansyah
    2 years ago

    If I already use log loss as the loss function, do I need to calibrate again? Thank you

  • @yangwang9688
    2 years ago

    I thought we don't touch the test dataset until we have decided which model we are going to use?

  • @yogevsharaby45
    a year ago

    Hey, thanks for the great video! I have a question about the predicted probability versus empirical probability plot. I'm a bit confused because, if I understand correctly, the empirical observations are either 0 or 1 (or, in this plot, are you grouping multiple observations together to obtain empirical observations that represent a probability?). Could you clarify this to help me understand it better? Thanks very much again :)
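
    The usual way to build such a plot is indeed to group predictions into bins: each point shows the mean predicted probability in a bin against the observed fraction of positives in that bin. A minimal sketch, assuming y_true is a NumPy array of 0/1 labels and p_pred the corresponding predicted probabilities (both placeholders):

        import numpy as np
        from sklearn.calibration import calibration_curve

        # One point per bin: fraction of positives (y-axis) vs. mean predicted probability (x-axis).
        frac_pos, mean_pred = calibration_curve(y_true, p_pred, n_bins=10)

        # The same thing by hand: assign each prediction to a bin, then average the labels.
        edges = np.linspace(0.0, 1.0, 11)
        bin_ids = np.digitize(p_pred, edges[1:-1])
        frac_pos_manual = [y_true[bin_ids == b].mean() for b in range(10) if np.any(bin_ids == b)]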

  • @rohanchess8332
    10 months ago

    Wow, that is an amazing video. I might be wrong, but don't we generally use the validation set for calibration and keep the test set for unseen data? That's how it works in hyperparameter tuning, so I assumed it should be the same here. Correct me if I'm wrong.

  • @user-or7ji5hv8y
    2 years ago

    I think I know how you computed the empirical probability, but it would have helped to see an explicit calculation, just to be sure.

  • @raise7935
    a year ago

    thanks

  • @The_Jarico1
    6 months ago

    You're right, I've seen this exact phenomenon happen in the wild, and the model needed adjustment accordingly. Does anyone know why this happens?

  • @junhanouyang6593
    2 years ago

    How do you calculate the empirical probability if every data point in the dataset is unique? In that case the empirical probability will be either 0 or 1.

  • @FahadAlkarshmi
    a year ago

    I like the explanation; it is very clear. But one thing I've noticed is data snooping. In the training setup you proposed, why not train both the classifier and the calibrator on the training set and optimise them using a validation set, since we may not (and should not) have access to the test set? Thanks.

  • @mohsenvazirizade6334
    5 months ago

    Thank you very much for such an amazing video. I like how your videos explain the reasons behind something and then show the math. Could you please do the same for probability calibration? It is not clear to me why this happens, and whether changing the loss function in the classifier would change anything.

  • @hemalmunbodh1007
    a year ago

    I'm late to the party, but surely since the random forest is not performing optimally in your example, you should tweak its hyperparameters (tweak the data, tune the model) to fit a better curve. What if you create a badly performing model and try to calibrate it further with logistic regression when you could have gotten a better-performing model just using the random forest?

  • @laxmanbisht2638
    2 years ago

    Thanks. So calibration is basically done to reduce error, right?

  • @duynguyen4154
    2 years ago

    Very good tutorial. I have one question: is this concept based on any background theory/algorithm? If so, could you please give the specific name? Thanks

  • @jacklandhyde
    2 years ago

    It is called Platt scaling.
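
    For reference, Platt scaling amounts to fitting a one-feature logistic regression on the model's uncalibrated outputs. A minimal sketch, where clf is assumed to be an already-fitted classifier and X_hold, y_hold, X_new are placeholder held-out / new data:

        from sklearn.linear_model import LogisticRegression

        # Uncalibrated probabilities from the base model on a held-out split,
        # reshaped into a single-feature column.
        p_hold = clf.predict_proba(X_hold)[:, 1].reshape(-1, 1)

        # Fit the sigmoid: input = raw probability, target = true 0/1 label.
        platt = LogisticRegression()
        platt.fit(p_hold, y_hold)

        # Calibrated probabilities for new predictions.
        p_new = clf.predict_proba(X_new)[:, 1].reshape(-1, 1)
        p_calibrated = platt.predict_proba(p_new)[:, 1]

    (Classic Platt scaling fits the sigmoid to the classifier's raw scores rather than to probabilities, but the idea is the same.)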

  • @felixmorales3713
    9 months ago

    You could solve the calibration issue more easily by tuning hyperparameters. Specifically, tune them to optimize a cost function that is a "proper scoring rule", such as logistic loss / cross-entropy (the cost function of logistic regression, actually). At least in my RF implementations, that has resulted in calibrated probabilities right off the bat, without any post-processing. That being said, you'll probably notice that scikit-learn's LogisticRegression() class doesn't return calibrated probabilities all of the time. You can blame that on the class using regularization by default. Just turn it off, and you'll likely get calibrated probabilities again :)
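
    A sketch of those two points, with placeholder data variables (the parameter names are standard scikit-learn):

        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import GridSearchCV

        # Tune the forest against a proper scoring rule (log loss) instead of accuracy.
        grid = GridSearchCV(
            RandomForestClassifier(random_state=0),
            param_grid={"max_depth": [3, 5, None], "min_samples_leaf": [1, 10, 50]},
            scoring="neg_log_loss",
            cv=5,
        )
        grid.fit(X_train, y_train)

        # Logistic regression without the default L2 penalty
        # (penalty=None needs scikit-learn >= 1.2; older versions take penalty="none").
        lr = LogisticRegression(penalty=None, max_iter=1000).fit(X_train, y_train)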

  • @Ad_bazinga
    a month ago

    Can you do a video on calibrating scorecards, e.g. doubling of odds?

  • @tompease95
    6 months ago

    The notebook section of this video is quite misleading - it is basically just plotting a line of best fit on a calibration curve. To actually calibrate the predictions, the trained logistic regression model should make predictions on a set of model outputs, and those 'calibrated' outputs can then be used to plot a newly calibrated calibration curve.
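
    In other words, something along these lines, where platt is a fitted logistic-regression calibrator and p_test, y_test are the uncalibrated probabilities and labels on a held-out split (all placeholder names):

        from sklearn.calibration import calibration_curve

        # Pass the uncalibrated probabilities through the fitted calibrator ...
        p_calibrated = platt.predict_proba(p_test.reshape(-1, 1))[:, 1]

        # ... then plot the calibration curve of the calibrated outputs.
        frac_pos, mean_pred = calibration_curve(y_test, p_calibrated, n_bins=10)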

  • @nihirpriram69
    8 months ago

    I get that it works, but ultimately I can't help but feel this is a band-aid fix for a deeper underlying issue, namely that something is fundamentally wrong with the model (in this case the random forest). It feels like throwing in a fudge factor and hoping for the best.

  • @martinkunev9911
    11 months ago

    Isn't it weird that the empirical probability is not monotonically increasing as a function of the uncalibrated probability? This would mean that the calibration model needs to learn to transform, e.g. 0.4 to 0.3 but 0.5 to 0.2.

  • @thechen6985
    4 months ago

    If you calibrate it on the test set, wouldn't that introduce bias? Shouldn't it be the validation set?

  • @Corpsecreate
    2 years ago

    Why do you assume the blue line is not correct?

  • @houyao2147
    2 years ago

    It looks to me like it's already calibrated during the training phase, because we minimize the error between the predicted and empirical probabilities. I don't quite understand why calibration is necessary.

  • @aparnamahalingam1595
    a year ago

    Is this the same way we implement calibration for a multi-class problem?
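
    One common route for the multi-class case is scikit-learn's CalibratedClassifierCV, which calibrates each class one-vs-rest and renormalizes the probabilities. A minimal sketch with placeholder data (y_train may contain more than two classes):

        from sklearn.calibration import CalibratedClassifierCV
        from sklearn.ensemble import RandomForestClassifier

        # Sigmoid (Platt-style) calibration fitted with 5-fold cross-validation.
        calibrated = CalibratedClassifierCV(RandomForestClassifier(random_state=0),
                                            method="sigmoid", cv=5)
        calibrated.fit(X_train, y_train)

        probs = calibrated.predict_proba(X_test)  # rows sum to 1 across the classes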

  • @mattsamelson4975
    2 years ago

    You linked the code but not the data. Please add that link.

  • @petroskoulouris3225
    2 years ago

    Great vid. I can't find the data on your GitHub account.

  • @payam-bagheri
    9 months ago

    Some people are wondering whether the initial calibration shouldn't be done on the calibration set rather than the test set. I'd say the presenter has the right concepts, but he's calling what's usually called the validation set the test set, and vice versa. Usually, the set held out for the final test of model performance is called the test set, and the validation set is used before that for whatever adjustments and tuning we want to do.

  • @Ziemecki
    a year ago

    Thank you for this video! I didn't understand why we introduce bias if we train the calibration on the training set instead of the test set. Could you give us an example please? +Subscribe

  • @Ziemecki
    a year ago

    I know you gave an example later in the notebook, but what if the data were the other way around? I mean, if the training set were the test set and the test set were the training set, would we still see this behavior?

  • @MsgrTeves
    a year ago

    I am confused about why you train the logistic regression with the predicted probabilities as input and the targets themselves as output. It seems you would train it with the predicted probabilities as input and the empirical probabilities as output. The probabilities should have nothing to do with the actual targets, only with how likely the prediction is to match the actual target, which is what we compute when we calculate the empirical probabilities. What am I missing?

  • @ramanadeepsingh
    2 days ago

    Shouldn't we first do min-max scaling on the original probabilities we get from the models? Let's say I have three models and I run them on the same training data, getting the following distributions of predicted probabilities:
    1) Naive Bayes: all predicted values between 0.1 and 0.8
    2) Random Forest: all predicted values between 0.2 and 0.7
    3) XGBoost: all predicted values between 0.1 and 0.9
    If I want to take an average prediction, I am giving an undue advantage to XGBoost, so we should scale all of them to be between 0 and 1. The second step would then be to feed these scaled probabilities to the logistic regression model to get the calibrated probabilities.

  • @jasdeepsinghgrover2470
    2 years ago

    But I find it difficult to understand why non-probabilistic models aren't calibrated by default... The probability is derived from the dataset itself... So if the dataset is large enough, it should already be calibrated.

  • @user-or7ji5hv8y
    2 years ago

    On the surface, it looks like you are using ML twice, with the second iteration correcting the error from the first run. I can't see why that second iteration is a legitimate step. It's like you made a bad prediction, and now we give you another chance and coach you to adjust your prediction to arrive at a more accurate one. I know you used test data, but I still can't see how you won't be overfitting.

  • @buramjakajil5232
    10 months ago

    Exactly my thoughts.

  • @bonnyphilip8022
    2 years ago

    Despite the looks, you simply are a great teacher... (By looks I mean your attitude and appearance are more like a freaky artist's than a studious person's.) :D:D

  • @EdiPrifti
    3 months ago

    Thank you. This makes sense for a regression task. How about a binary classification task? What would be the real empirical probability used to fit the calibration?