#11 Machine Learning Specialization [Course 1, Week 1, Lesson 3]

Entertainment

The Machine Learning Specialization is a foundational online program created in collaboration between DeepLearning.AI and Stanford Online. This beginner-friendly program will teach you the fundamentals of machine learning and how to use these techniques to build real-world AI applications.
This Specialization is taught by Andrew Ng, an AI visionary who has led critical research at Stanford University and groundbreaking work at Google Brain, Baidu, and Landing.AI to advance the AI field.
This video is from Course 1 (Supervised Machine Learning: Regression and Classification), Week 1 (Introduction to Machine Learning), Lesson 3 (Regression Model), Video 3 (Cost function formula).
To learn more and access the full course videos and assignments, enroll in the Machine Learning Specialization here: bit.ly/3ERmTAq
Download the course slides: bit.ly/3AVNHwS
Check out all our courses: bit.ly/3TTc2KA
Subscribe to The Batch, our weekly newsletter: bit.ly/3TZUzju
Follow us:
Facebook: / deeplearningaihq
LinkedIn: / deeplearningai
Twitter: / deeplearningai_
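
For reference, the squared-error cost function covered in this video can be sketched in a few lines of NumPy. This is an illustrative sketch, not the course's notebook code; the variable names (w, b, x_train, y_train) and the toy data are assumptions:

```python
import numpy as np

def compute_cost(x, y, w, b):
    """Squared-error cost: J(w, b) = (1 / (2m)) * sum((f_wb(x_i) - y_i)^2)."""
    m = x.shape[0]
    f_wb = w * x + b  # linear model prediction for every training example
    return np.sum((f_wb - y) ** 2) / (2 * m)

# Toy training set with two examples
x_train = np.array([1.0, 2.0])
y_train = np.array([300.0, 500.0])

# A line that passes through both points exactly has zero cost
print(compute_cost(x_train, y_train, w=200.0, b=100.0))  # 0.0
```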

Comments: 9

  • @himanshusinghchandel9972
    4 months ago

    For anyone wondering why Ng used the mean squared error rather than the mean absolute error:
    - Squaring: squaring the errors makes larger errors contribute more to the overall loss, penalizing larger deviations from the true values more heavily, which can be desirable depending on the context.
    - Differentiability: MSE is differentiable everywhere, making it well suited to optimization algorithms like gradient descent; gradients can be computed efficiently, enabling faster convergence during training.
    - Non-negativity: squaring ensures the loss is always positive or zero, which simplifies mathematical analysis and optimization.
    - Symmetry: squaring removes the sign, so positive and negative errors contribute equally; this is desirable when the direction of an error matters less than its magnitude.
    The mean absolute error (MAE) has its own advantages, such as being more robust to outliers since it doesn't square the errors, but MSE is often preferred in machine learning because of these mathematical properties and its alignment with optimization objectives. The choice between MSE and MAE ultimately depends on the specific problem and the characteristics of the data.

  • @chamirngandjia1198
    3 months ago

    These are excellent points to mention!!!!
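
To make the MSE-vs-MAE comparison in the thread above concrete, here is a small NumPy sketch (illustrative, not from the course) showing how squaring amplifies a single large error:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([3.5, 4.5, 7.0, 14.0])  # one large error (+5.0)

errors = y_pred - y_true
mse = np.mean(errors ** 2)     # squaring: the 5.0 error dominates the loss
mae = np.mean(np.abs(errors))  # each error counts only by its magnitude

print(f"MSE = {mse:.3f}")  # 6.375 -> (0.25 + 0.25 + 0 + 25) / 4
print(f"MAE = {mae:.3f}")  # 1.500 -> (0.5 + 0.5 + 0 + 5) / 4
```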

  • @zhiyingwang1234
    9 months ago

    Thanks for making the effort to explain every detail about function notation and abbreviations. In hindsight, I realize this will lay a solid foundation for learning more advanced functions.

  • @jeremynx
    2 months ago

    great explanation, thanks

  • @ahmadshahmirkhail6559
    1 day ago

    I think in the second example, when b is zero the prediction is zero, not when x is zero the prediction is zero.
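
For readers checking the comment above against the lesson, a quick sketch of the linear model f(x) = w*x + b (function and variable names assumed for illustration) shows both cases side by side:

```python
def predict(x, w, b):
    """Linear model from the lesson: f_wb(x) = w*x + b."""
    return w * x + b

# With b = 0 the line passes through the origin, so f(0) = 0,
# but f(x) is nonzero for any nonzero x (here w = 2):
print(predict(0.0, w=2.0, b=0.0))  # 0.0
print(predict(3.0, w=2.0, b=0.0))  # 6.0

# With x = 0 the prediction equals b, whatever w is:
print(predict(0.0, w=2.0, b=5.0))  # 5.0
```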

  • @thegbfolks
    10 months ago

    I understand we are considering a straight line to predict the y values. But I don't understand on what basis we fit a straight line to the data. Is it just like drawing a straight line through the training examples, or are there rules for fitting a straight line through them?

  • @divyakarlapudi
    8 months ago

    It must pass through all the training set points, or we consider the line that is close to most of the training set points.

  • @himanshusinghchandel9972
    4 months ago

    @thegbfolks There are many straight lines that could be drawn through these training examples, but in the end we choose the one with the least cost/error (mean squared error). As the algorithm trains, it tries to fit the line with the least error.
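
A minimal sketch of the idea in this thread (illustrative names and toy data): evaluate the squared-error cost for each candidate line and keep the one with the lowest cost. A toy grid search stands in here for the gradient descent the course introduces later:

```python
import numpy as np

def cost(x, y, w, b):
    """Squared-error cost: average of squared residuals (with the usual 1/2 factor)."""
    return np.mean((w * x + b - y) ** 2) / 2

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.1, 3.9, 6.2])

# Try a few candidate (w, b) lines and keep the one with the least cost.
candidates = [(1.0, 1.0), (2.0, 0.0), (1.5, 0.5)]
best = min(candidates, key=lambda wb: cost(x, y, *wb))
print(best, cost(x, y, *best))  # (2.0, 0.0) wins with cost 0.01
```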
