Bias/Variance (C2W1L02)
Take the Deep Learning Specialization: bit.ly/3amgU4n
Check out all our courses: www.deeplearning.ai
Subscribe to The Batch, our weekly newsletter: www.deeplearning.ai/thebatch
Follow us:
Twitter: / deeplearningai_
Facebook: / deeplearninghq
LinkedIn: / deeplearningai
Comments: 22
High bias -> underfitting -> higher training set error
High variance -> overfitting -> higher dev set error
Thank you very much for making these concepts so easy to understand.
In the deep learning era, we can largely overcome the bias-variance trade-off.

**Bias-variance trade-off**: the property of a set of predictive models whereby models with lower bias in parameter estimation tend to have higher variance, and vice versa.

**High variance**: there is a large gap between training set error and validation error. Train: 1% error, dev/test: 11% error.

**High bias**: even the training set error is poor, i.e. the model doesn't classify the training data well (perhaps one class dominates the predictions, producing false positives). Train: 15%, dev: 16%.

**High bias and high variance**: the training set error is poor AND there is a large gap between training and validation error. Train: 15%, dev: 30%.

**Low bias and low variance**: both training and validation error are low. Train: 1%, dev: 2%.
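The four cases above can be sketched as a small diagnostic helper. This is my own illustration, not code from the course: the `diagnose` function and the 5-percentage-point threshold are assumptions chosen to reproduce the comment's examples, with an assumed ~0% Bayes (optimal) error.

```python
def diagnose(train_err, dev_err, bayes_err=0.0):
    """Label a classifier's regime from its train/dev error rates (fractions).

    The 0.05 thresholds are illustrative, not canonical.
    """
    high_bias = (train_err - bayes_err) > 0.05      # poor fit to the training set
    high_variance = (dev_err - train_err) > 0.05    # large generalization gap
    if high_bias and high_variance:
        return "high bias and high variance"
    if high_bias:
        return "high bias"
    if high_variance:
        return "high variance"
    return "low bias and low variance"

print(diagnose(0.01, 0.11))  # high variance
print(diagnose(0.15, 0.16))  # high bias
print(diagnose(0.15, 0.30))  # high bias and high variance
print(diagnose(0.01, 0.02))  # low bias and low variance
```

Note that the baseline matters: if human-level error were, say, 14%, a 15% training error would no longer indicate high bias.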
I think it would be important to explain why the terms “bias” and “variance” are used to describe these phenomena. Without explaining the context that our overall training algorithm is sampling specific outcome models from a distribution over all possible models that our algorithm might train, it’s not very clear what insight these terms add beyond the simpler concepts of overfitting and underfitting.
@MrCmon113
4 years ago
Yeah. What's central to understanding this is imagining other possible training sets.
@bpc1570
3 years ago
What you are describing is related to the notion of empirical risk minimization, which is explained in his CS229 class (lecture 9, I believe), also searchable from here.
The best and easiest explanation of bias variance tradeoff ❤❤
The whole discussion around the "bias-variance trade-off" was invented by statisticians who were baffled when trying to fit (classic) machine learning model training/testing into the old statistical paradigm. Some interviewers then found it effective at baffling (and disqualifying) candidates applying for ML/data scientist jobs, which further popularized and abused the concept. The whole issue can be explained and addressed directly without bringing in these two out-of-place and thus confusing terms. Thanks, Andrew, for bringing clarity to this.
Thank you so much for your content, sir.
Thank you for the video. Can you help me prove whether the estimate in this question is unbiased? Question: compare the average height of employees at Google with the average height in the United States. Do you think this is an unbiased estimate? If not, how can you prove that it is not?
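One way to build intuition for the question above is a quick simulation. This is my own sketch, not from the video: the population mean, subgroup shift, and standard deviation are all made-up numbers. An estimator is unbiased only if its expected value equals the population parameter, which fails when the sample is drawn from a systematically different subpopulation.

```python
import random

random.seed(0)
population_mean = 170.0  # assumed US mean height in cm (illustrative)
subgroup_shift = 5.0     # assume the subgroup (one company) differs systematically

def sample_mean(mean, n=1000):
    # heights drawn from a normal distribution centered at `mean`
    return sum(random.gauss(mean, 10.0) for _ in range(n)) / n

unbiased = sample_mean(population_mean)                  # random sample of the whole population
biased = sample_mean(population_mean + subgroup_shift)   # sample from the subgroup only

print(abs(unbiased - population_mean))  # small: only random sampling noise
print(abs(biased - population_mean))    # roughly 5 cm off: systematic bias
```

The formal argument is the same: E[sample mean] equals the subgroup's mean, not the population's, so unless the subgroup's mean equals the population mean, the estimator is biased.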
Very nice explanation. Need to watch it again.
wow. clear explanation
Please, is it possible to calculate bias from the actual and predicted values?
@saanvisharma2081
5 years ago
Yes, we can! I know how to do that for linear regression, but it still has to be related to multiple regression and more complex algorithms.
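One common way to compute bias from actual and predicted values (an illustration, not a definition from the course) is the mean signed error. The helper name `prediction_bias` is my own; the approach works for any regression model, not just linear regression.

```python
def prediction_bias(actual, predicted):
    """Mean of (predicted - actual): nonzero means the model
    systematically over- or under-predicts."""
    return sum(p - a for a, p in zip(actual, predicted)) / len(actual)

actual = [3.0, 5.0, 7.0, 9.0]
predicted = [3.5, 5.5, 7.5, 9.5]
print(prediction_bias(actual, predicted))  # 0.5: predictions run high on average
```

Note this measures a model's systematic error on data, which is related to but not the same as the statistical bias of an estimator discussed in the video.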
Is the dev set the same as validation set?
@raghavgupta2794
6 years ago
Yes, it's just another name for the validation set.
@paradise_relaxation
4 years ago
yes
@MrCmon113
4 years ago
Yes, this applies equally to the test set or some completely external population, though.
are you GM Eric Hansen's brother or something?
The audio hurts my ears.
You are God
What is this mess?! You actually charge people for this crap?!!!