What is Least Squares?
Science & Technology
A quick introduction to Least Squares, a method for fitting a model, curve, or function to a set of data.
TRANSCRIPT
Hello, and welcome to Introduction to Optimization. This video provides a basic answer to the question, what is Least Squares?
Least squares is a technique for fitting an equation, line, curve, function, or model to a set of data. This simple technique has applications in many fields, from medicine to finance to chemistry to astronomy. Least squares helps us represent the real world using mathematical models.
Let’s take a closer look at how this works. Imagine that you have a set of data points in x and y, and you want to find the line that best fits the data. This is also called regression. For a given x, you have the y value from your data and the y value predicted by the line. The difference between these values is called the error, or the residual.
This residual is calculated between each of the data points and the line. To give only positive error values, each residual is usually squared. Next, the squared residuals are summed to give the total error between the data and the line: the sum of squared errors.
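The residual-and-sum-of-squares calculation described above can be sketched in a few lines of Python. The data points and the candidate line here are made up purely for illustration:

```python
# Illustrative data points (x, y) -- made up for this sketch
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.2, 6.8]

# A candidate line: y = slope * x + intercept
slope, intercept = 2.0, 1.0

# Residual for each point: observed y minus the line's prediction
residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]

# Squaring makes every error positive; summing gives the total error
sse = sum(r * r for r in residuals)
print(sse)
```

An optimizer's job is then to pick the slope and intercept that make this `sse` value as small as possible.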
One way to think of least squares is as an optimization problem. The sum of squared errors is the objective function, and the optimizer is trying to find the slope and intercept of the line that minimize the error.
Here we’re trying to fit a line, which makes this a linear least-squares problem. Linear least squares has a closed-form solution, which can be found by solving a system of linear equations. It’s worth noting that other equations, such as parabolas and higher-order polynomials, can also be fit using linear least squares, as long as the model is linear in the parameters being optimized.
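For a straight line, the closed-form solution mentioned above reduces to two textbook formulas: the slope is the covariance of x and y divided by the variance of x, and the intercept follows from the means. A minimal sketch, with made-up data:

```python
# Closed-form linear least squares for a line y = m*x + b.
# The data points are made up for illustration.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.1, 4.9, 7.2, 8.8]

n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n

# Slope: sum of co-deviations over sum of squared x-deviations
m_num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
m_den = sum((x - x_mean) ** 2 for x in xs)
m = m_num / m_den

# Intercept: the best-fit line always passes through (x_mean, y_mean)
b = y_mean - m * x_mean
print(m, b)
```

No iteration is needed: the optimal slope and intercept drop straight out of these sums, which is exactly what "closed-form solution" means.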
Least squares can also be used to fit nonlinear curves and functions to more complex data. In this case, the problem can be solved using a dedicated nonlinear least-squares solver or a general-purpose optimization solver.
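When the model is nonlinear in its parameter, there is generally no closed form, but the objective is still the same sum of squared errors. As a sketch, here is a deliberately crude stand-in for a general-purpose solver: a brute-force grid search over a single made-up parameter b in the model y = sin(b·x):

```python
import math

# Synthetic, noise-free data generated from y = sin(1.5 * x),
# purely for illustration
xs = [0.1 * i for i in range(20)]
ys = [math.sin(1.5 * x) for x in xs]

def sse(b):
    """Sum of squared errors between the data and the model y = sin(b*x)."""
    return sum((y - math.sin(b * x)) ** 2 for x, y in zip(xs, ys))

# Crude general-purpose search: try many candidate values of b
# and keep the one with the smallest sum of squared errors.
candidates = [0.001 * i for i in range(3001)]  # b in [0, 3]
best_b = min(candidates, key=sse)
print(best_b)
```

A real nonlinear least-squares solver would use gradients rather than exhaustive search, but the objective it minimizes is the same `sse` function shown here.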
To summarize, least squares is a method often used for fitting a model to data. Residuals express the error between the current model fit and the data. The objective of least squares is to minimize the sum of squared errors across all the data points to find the best fit for a given model.
Comments: 29
Visual learning makes things so much better.
excellent animation and explanation, simple to follow and understand!
compact and thorough at the same time. thanks !
Omg, your explanation is better than other youtube videos and my teacher because I'm a visual learner.
@DiditCoding
A year ago
Noice
Simple yet extremely informative👍
Excellent video and also quite easy to understand
Great video and helpful channel! Khan academy and the organic chemistry guy are getting old and less helpful as school curriculums develop. Super grateful for these simple, direct explanations
lovely brooo, such good animation, now i have the concept in my head.
Thanks a lot, I have the curse of being a visual learner and this was amazing.
Thank you for simplifying this
This is a great little video !
Excellent! Thank you.
clear and brief idea
great explanation!
All videos are excellent
Just perfect. Thanks
Thank you... ❤
in 2 mins you just explained everything
But residual != error?
fit a function f(x) to data with normal noise, f(x) can be a line, or a polynomial, etc, includes outlier handling, least squares is very sus
@Jkauppa
2 years ago
line with normal noise is a better answer than just a line
@Jkauppa
2 years ago
constant std additive normal noise assumed
@Jkauppa
2 years ago
think like you are removing a base line function from the data points, either linear (like pca, f(x)=kx+c) or polynome (nonlinear pca, f(x)=...+ax^2+bx+c), then checking if the noise is from a normal distribution, ie, trying to make the noise after removing the base line as normal as possible, if you do linear, the noise might not be normal, so you get only a partial pca component fit, kinda
Why use the squares instead of the absolute values?
@laraelnourr
A year ago
because they are easier to compute and deal with mathematically. But we can use absolute values too!
@faheemrasheed9967
A year ago
because it gives a clearer picture: if we have an error of 0.1 and we square it, it gives 0.01, which is kind of scaled.
@AchiragChiragg
6 months ago
@faheemrasheed9967 Actually it's the other way around. It's better to use absolute values instead of squares, as squaring can amplify outliers and influence the final fit.
no example, pretty useless