7.7 Stacking (L07: Ensemble Methods)
This video explains Wolpert's stacking algorithm (stacked generalization) and shows how to use stacking classifiers in mlxtend and scikit-learn.
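The scikit-learn side of what the video covers can be sketched as follows. This is a minimal example, not the video's exact code; the dataset (Iris) and the particular base/meta classifiers are placeholders chosen for illustration.

```python
# Minimal stacking sketch with scikit-learn's StackingClassifier.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)

# First-level (base) classifiers
estimators = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=1)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
]

# Second-level meta-classifier; StackingClassifier trains it on
# out-of-fold predictions generated via internal cross-validation.
stack = StackingClassifier(
    estimators=estimators,
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X_train, y_train)
print(f"Test accuracy: {stack.score(X_test, y_test):.3f}")
```

Note that `cv=5` controls only how the level-one training features for the meta-classifier are produced; the base estimators are refit on the full training set afterwards.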
-------
This video is part of my Introduction to Machine Learning course.
Next video: • 8.1 Intro to overfitti...
The complete playlist: • Intro to Machine Learn...
A handy overview page with links to the materials: sebastianraschka.com/blog/202...
-------
If you want to be notified about future videos, please consider subscribing to my channel: / sebastianraschka
Comments: 39
Hi Sebastian, thank you very much for making and sharing these. You are an excellent teacher, and I hope your content reaches many other students.
An easy-to-follow explanation of a very difficult concept. Well done for the good work.
Thanks for adding this video. I didn't find it in your book, but I can see why this approach is popular in machine learning competitions.
this is very useful for my research, thank you for the thorough and understandable explanation!
Thank you so much for explaining everything so clearly, and I really like the examples :)
Thank you for your videos and open source code !! 👍🏾🧑🏽💻🙏🏾
I really enjoyed this lesson and now I know a new package that can help me with ML :p
Very nice explanation, thank you
Thank you for the nice and simple explanation.
Doctor, for the stacking regression problem, can we use the same approach as the classification algorithm that you explain in the video?
Thank you for these videos, Dr. Sebastian. They are real gold mines, especially for those who love to understand the underlying mathematics and reasoning under the hood of these algorithms. You've managed to answer most of the questions I had when I first started the video, but one question remains about the implementation. I see that you divided the dataset into train, validation, and test sets, and using the validation set to validate the first-layer classifiers is a crucial step, no? However, I don't see you using the validation set.
I had trouble understanding how to integrate k-fold validation and the stacking method together, but with your clear explanation, now I can figure it out. Thank you so much!!! Good work!!!
Thank you so much, this really simplified it.
I watched several videos on YouTube on this topic, and I can only say this one is the most comprehensive and detailed video on how to feed your stacking model with the dataset. Thanks!
Thanks for your simple explanation. Just one question: I have 4 datasets (with different shapes and features, but similar target columns) that I would need to use for training my model. I would like to implement stacking, probably training each dataset on a different classification model, then creating a new dataset of predictions and passing that dataset to a meta-classifier to obtain a more accurate model. Would that be feasible and effective even though the datasets have different features?
Hi Prof. Sebastian, thank you very much.
Really enjoyed your lecture, but I have one big question mark. Let's assume you have 3 base learners (an RF, a neural net, and an XGBoost model) and you apply 5-fold cross-validation. For the first base learner, the RF, you generate predictions for every training data entry; to avoid overfitting, you use the predictions from cross-validation. But doesn't that mean you effectively used RF predictions from 5 distinct RF models?
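The mechanism this comment asks about can be sketched with scikit-learn's `cross_val_predict` (the dataset here is an illustrative stand-in). With 5-fold CV, each training example's level-one feature indeed comes from a model fit on the other 4 folds, so 5 distinct RF models contribute predictions; the final base RF used at inference time is then typically refit on the full training set.

```python
# Out-of-fold predictions for stacking: every training row is predicted
# by a model that did not see that row during fitting.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

X, y = load_breast_cancer(return_X_y=True)
rf = RandomForestClassifier(n_estimators=50, random_state=1)

# Class-probability predictions for every training row, produced by the
# 5 fold-specific RF models; these become one column block of the
# meta-classifier's training input.
oof_proba = cross_val_predict(rf, X, y, cv=5, method="predict_proba")
print(oof_proba.shape)  # (569, 2): one row per training example
```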
Hi Sebastian, can we use an ANN, an SVM, and logistic regression as the base classifiers and an ANN as the meta-classifier?
Hi Prof, how can we use time-series cross-validation (tscv) for (sequential) financial time-series data? It seems mlxtend's application is limited to non-time-series data. Could you please shed some light on this?