Great video. Can you do some multivariate analysis of time series data with different business case scenarios?
@somethingness • 21 hours ago
What an excellent teacher you are! Thank you for all your videos.
@mukeshkumaryadav350 • a day ago
Great Explanation
@yuxiang3147 • a day ago
How do you decide whether to accept a sample proposed by g(x)? For example, if f(s) / (M·g(s)) = 0.4, do you accept it or not?
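A minimal sketch of the accept/reject rule being asked about (illustrative, not code from the video): draw u ~ Uniform(0, 1) and accept the candidate s exactly when u < f(s) / (M·g(s)). So a ratio of 0.4 means the candidate is accepted with probability 0.4, not deterministically. The triangular target and uniform proposal below are toy choices.

```python
import random

def rejection_sample(f, g_sample, g_pdf, M, rng=random.Random(0)):
    """Draw one sample from target density f using proposal g.

    A candidate s is accepted with probability f(s) / (M * g_pdf(s)):
    a ratio of 0.4 means the candidate is kept 40% of the time.
    """
    while True:
        s = g_sample(rng)
        u = rng.random()                # u ~ Uniform(0, 1)
        if u < f(s) / (M * g_pdf(s)):   # accept with prob. f(s) / (M g(s))
            return s

# Toy example: target f(x) = 2x on [0, 1], proposal g = Uniform(0, 1).
f = lambda x: 2 * x           # triangular density on [0, 1]
g_pdf = lambda x: 1.0         # Uniform(0, 1) density
g_sample = lambda rng: rng.random()
M = 2.0                       # f(x) <= M * g(x) everywhere on [0, 1]

samples = [rejection_sample(f, g_sample, g_pdf, M) for _ in range(5000)]
```

The sample mean should be close to 2/3, the mean of the triangular density, even though every draw came from the uniform proposal.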
@reemaalamoudi4959 • a day ago
Amazing
@andreapanuccio295 • a day ago
I would add that it's not experimental but quasi-experimental, because it doesn't really guarantee that the effect is due to our treatment. It might be due to a common cause, or be causally biased by one, so we are also assuming causal sufficiency and sufficient knowledge of the underlying process. We still have no way to rule out that some unobserved cause occurred in that time window. Good and clear video as usual btw, keep going bro °u°
@TechnologyBudda • a day ago
gender pay gap is fake
@michelarruda7634 • a day ago
Great content! Thanks for posting it
@ib2002-o6d • a day ago
Stationarity conditions: mean = c, var = c, and no seasonality or periodic repetitions.
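A rough numerical illustration of the first two conditions (an editorial sketch on simulated data, not from the video): for a stationary series, the mean and variance of the first and second halves should roughly agree; a trending series fails the constant-mean check.

```python
import numpy as np

rng = np.random.default_rng(0)

def halves_stats(x):
    """Rough stationarity check: compare mean and variance of the
    first and second halves of the series."""
    a, b = np.array_split(np.asarray(x), 2)
    return (a.mean(), b.mean()), (a.var(), b.var())

stationary = rng.normal(0, 1, 1000)                          # constant mean and variance
trending = np.arange(1000) * 0.01 + rng.normal(0, 1, 1000)   # drifting mean

(m1, m2), _ = halves_stats(stationary)
(t1, t2), _ = halves_stats(trending)
# The stationary series has similar half-means; the trending one does not.
```

In practice a formal test (e.g. an augmented Dickey-Fuller test) would replace this eyeball comparison, but the intuition is the same.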
@ib2002-o6d • a day ago
ACF takes into account all possible lags t−k that can affect t; it is calculated using the Pearson coefficient and captures both direct and indirect effects. PACF takes into account only the lag t−k and how it affects t; a regression line is fitted for this, and the coefficients obtained explain the effect of t−k on t: direct effects only.
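The direct-vs-indirect distinction can be checked numerically (an illustrative sketch on a simulated AR(1) series, not code from the video): for AR(1) with coefficient 0.8, the ACF decays geometrically (0.8, 0.64, ...) because lag 2 acts indirectly through lag 1, while the lag-2 regression coefficient (the PACF idea) is near zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1): x_t = 0.8 * x_{t-1} + noise
n, phi = 5000, 0.8
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

def acf(x, k):
    """Lag-k autocorrelation: Pearson correlation of x_t with x_{t-k}."""
    return np.corrcoef(x[k:], x[:-k])[0, 1]

# PACF at lag 2: the lag-2 coefficient when regressing x_t on x_{t-1} and x_{t-2}.
X = np.column_stack([np.ones(n - 2), x[1:-1], x[:-2]])
beta = np.linalg.lstsq(X, x[2:], rcond=None)[0]
pacf2 = beta[2]
# ACF at lag 2 is about 0.64 (indirect effect through lag 1); pacf2 is near 0.
```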
@ronit6235 • 2 days ago
Dear Ritvik, why does the determinant in the first explanation come out to be zero? (I did try changing some conditions, like a single path to every next point, but the value still comes out to be zero.) What does that indicate?
@ib2002-o6d • 2 days ago
Notes for the future: time series is an extrapolation problem, and the error keeps increasing as we move away from known data. Regression is an interpolation problem, and the error is more or less constant, since predictions are usually made within the range of available data.
@blairt8101 • 2 days ago
saved my life again!
@samuelrojas3766 • 2 days ago
I am still confused about how you developed the kernels in the first place. I know what they do but don't know how to obtain them without using the transformed space.
@aftermath__7060 • 2 days ago
Hi R, one question: in what sequence should we watch these lectures? The playlist seems to be jumbled, or does it serve a purpose?
@robharwood3538 • 2 days ago
Hi Ritvik! The way you've set up the linear model(s), the intercept parameters B0 and B2 will represent the intercept at Tt = 0, rather than at the particular time the interruption happens. So, if you get a large positive change in the B3 slope parameter, you'll probably get a negative change in the B2 parameter (since a steeper line at positive Tt will intercept the yt axis *below* where the original line did!).

Wouldn't it make more sense to do a shift/translation of the Tt variable so that it works as if the interruption happens at the yt axis? For example, use a translated variable Ut = (Tt - t). So, if the interruption happens at t = 75, then when Tt = 75, Ut = (Tt - t) = (75 - 75) = 0. And your model can then be: yt = B0 + B1⋅Ut + Dt⋅(B2 + B3⋅Ut)

At least this way, the B0 and B2 params will have a much more interpretable meaning. B0 will be the value of yt at Tt = t in the main linear model, and B2 will be the *initial change* in yt at Tt = t. In other words, how much 'immediate effect' did the interruption have; how much of a vertical 'jump' up or down.

Granted, to calculate yt at any particular Tt, you'll have to first convert to Ut, but that's not so bad, just a simple shift. And if you really need to find the linear params in terms of Tt, it's fairly easy to just plug in Ut = (Tt - t) and expand out to find the transformed linear params for Tt.
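The shifted-variable model described above can be sketched on simulated data (an illustrative example; the parameter values 10, 0.2, 3, 0.5 and the interruption time 75 are made up). With Ut = Tt - t, the OLS coefficient on Dt directly estimates the immediate jump at the interruption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an interrupted time series with the interruption at t0 = 75.
n, t0 = 150, 75
T = np.arange(n)
D = (T >= t0).astype(float)        # treatment indicator D_t
U = T - t0                         # shifted time: U = 0 at the interruption
y = 10 + 0.2 * U + D * (3 + 0.5 * U) + rng.normal(0, 1, n)

# Fit y = B0 + B1*U + B2*D + B3*(D*U) by ordinary least squares.
X = np.column_stack([np.ones(n), U, D, D * U])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]
# b0 estimates the pre-interruption level at t = t0 (about 10),
# b2 the immediate jump (about 3), b3 the slope change (about 0.5).
```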
@davidheilbron • 2 days ago
Thank you so much
@wjrasmussen666 • 3 days ago
How can we take advantage of the GPU?
@marcfruchtman9473 • 3 days ago
Thanks for this tutorial on ITS. I have to admit, I was a bit confused by this example, and at first I wasn't sure why. Basically, it is because the "data" is "too simple". Looking at it, anyone can see that pre-advertising is not as good as the post-advertising treatment. So why would I need this technique? Why not just count sales per month and say, yup, we boosted total sales per month? There's nothing inherent in this data sample that confounds the analysis or makes it difficult or non-obvious.

On the other hand, you could show an example where the sales data points are harder to track, so that it is not obvious from the graph that the trend after treatment is more positive, but ITS shows that while it visually seems better, it is in fact not as good after treatment. And perhaps a second example where the data seemingly looks more negative after treatment, but ITS proves it is actually better. I think those would give more credence to this method, because as far as I can tell, the graph makes it far too easy to see the trend, so why bother with ITS unless it can show something that isn't obvious? And, again, thanks for this explanation of Interrupted Time Series.
@vanshr.sachan264 • 3 days ago
I still don't get the part about measuring the effect without the experiment... If I don't have the time series data for my ice cream shop where I decided to advertise, since right now I'm in the middle of making this decision, then how can I fit a model that measures its effects?
@Petershd138 • 3 days ago
I confess, I didn't understand a single word of my MIT professor's explanations of the perceptron. However, after I saw this video, I understood clearly what the idea is. Thanks.
@cornagojar • 3 days ago
Thanks! May I ask what you do for a living?
@polgonzalez978 • 3 days ago
The equation of the hyperplane is w·x + b = 0, isn't it? The video says w·x − b = 0 instead of + b = 0.
@jfndfiunskj5299 • 3 days ago
Great vid. Please explain how to compute the confidence intervals.
@liltarnation2641 • 3 days ago
Absolute legend
@MrEo89 • 4 days ago
Man, your channel would blow up spectacularly if you invested the time in learning how to make really nice visuals. The whole poorly hand-drawn example thing is really 2005 and screams laziness and/or amateurishness.
@junal27 • 4 days ago
Excellent video and content, thank you. I am not a trader or a financial person, but the training window size vs. the holding window size may be part of the problem, unless the chosen entry time happens to be an outlier by chance. The holding size may play a huge role in certain market dynamics, with periods of recovery much longer than the chosen holding window. When the market goes down, correlations tend to one, and in many cases they revert from negative to positive in calm conditions. Best
@siddharthabhakta3261 • 4 days ago
I thought U is m×m, V is n×n, and Σ is m×n
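Those are indeed the shapes in the full SVD, which is easy to confirm numerically (a quick illustrative check, not code from the video). Note that NumPy returns the singular values as a 1-D array of length min(m, n) rather than the full m×n Σ matrix:

```python
import numpy as np

A = np.arange(12, dtype=float).reshape(3, 4)   # m = 3, n = 4

# With full_matrices=True, U is m x m and Vt is n x n;
# s holds the min(m, n) singular values.
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Rebuild the m x n Sigma from the singular values.
Sigma = np.zeros((3, 4))
Sigma[:3, :3] = np.diag(s)
# U @ Sigma @ Vt reconstructs A.
```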
@TheKyprosGaming • 4 days ago
Can you show us the code for this, especially for VAR?
@ritvikmath • 4 days ago
There's some pretty basic code here: scikit-learn.org/stable/auto_examples/gaussian_process/plot_gpr_noisy_targets.html#sphx-glr-auto-examples-gaussian-process-plot-gpr-noisy-targets-py
@andrelu3561 • 4 days ago
This means that apples are far better than bananas, right? So when I've been told that you can't compare apples and bananas... that's a lie...
@meanreversion1083 • 4 days ago
Thank you for the video; it was nicely explained. There are a lot of simplifications. Could you also talk about how to best select sigma and l? Is it all done empirically? Also, do you have any example of an implementation?
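For what the sigma and l hyperparameters do, here is a minimal sketch of the squared-exponential (RBF) kernel (an illustration under the usual GP conventions, not the video's code; the grid and hyperparameter values are arbitrary). In practice sigma and l are typically chosen by maximizing the marginal likelihood rather than purely by hand.

```python
import numpy as np

def rbf_kernel(x1, x2, sigma=1.0, ell=1.0):
    """Squared-exponential kernel k(x, x') = sigma^2 * exp(-(x - x')^2 / (2 l^2)).

    sigma sets the output scale (k(x, x) = sigma^2);
    ell sets the length scale, i.e. how quickly correlation decays with distance.
    """
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return sigma**2 * np.exp(-d2 / (2 * ell**2))

x = np.linspace(0, 1, 5)
K = rbf_kernel(x, x, sigma=2.0, ell=0.5)
# Diagonal entries equal sigma^2; nearby points correlate more than distant ones.
```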
@hoatanky3926 • 4 days ago
Please recommend a book for data science and machine learning
@jessicas2978 • 4 days ago
Very clear and helpful. Thank you so much!
@ritvikmath • 4 days ago
You're very welcome!
@TheRealHassan789 • 4 days ago
So simple… but so powerful!
@ritvikmath • 4 days ago
glad you think so!
@thejll • 4 days ago
Go on, tell us how to test if and when a change occurred :)
@christusrex334 • 4 days ago
Any data science text book recommendations?
@tuananhtran7532 • 3 days ago
Pattern Recognition and Machine Learning by Bishop
@rishikalodha1236 • 4 days ago
Thank you for this
@user-cy7up3kq7o • 4 days ago
I would argue that 2 has seasonality...
@GeoffryGifari • 4 days ago
Hmm, I noticed that if two categories are strongly correlated, the plot will look close to a straight line. Going to multidimensional space, that "line" looks like the vector u1 in the video, onto which the data are projected. Does that mean PCA will perform better the more correlated two (or more) categories are?
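The intuition in the question can be checked numerically (an editorial sketch on simulated data, not from the video): the more correlated the columns, the larger the share of total variance the first principal component captures, so one component summarizes the data better.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_pc_share(X):
    """Fraction of total variance captured by the first principal component."""
    C = np.cov(X, rowvar=False)
    eigvals = np.linalg.eigvalsh(C)    # eigenvalues in ascending order
    return eigvals[-1] / eigvals.sum()

base = rng.normal(size=1000)
# Strongly correlated pair vs. an (almost) independent pair.
strong = np.column_stack([base, base + 0.1 * rng.normal(size=1000)])
weak = np.column_stack([base, rng.normal(size=1000)])
# PC1 explains nearly all variance for the correlated pair,
# but only about half for the independent pair.
```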
@studentaccount4354 • 5 days ago
Can you do this again? Polls mean nothing. There are too many confounding variables.
@just_a_viewer5 • 5 days ago
Amazingly taught. Thank you so much.
@zsomborveres-lakos • 5 days ago
nice
@michaelvogt7787 • 5 days ago
Nicely done.
@domillima • 6 days ago
Your platform is awesome, man. Keep up the great work!
@domillima • 6 days ago
This is an amazing analysis/discussion. If people could entertain a discussion like the one you present which is level-headed and focused on objective (albeit hypothetical) points, we might have world peace right now 😅
@miroslavdyer-wd1ei • 6 days ago
this guy is good. I recommend him. He is about 1.5 Sarrano (sorry, Louis!!)
@amaurylaine2093 • 6 days ago
Best explanation of the ROC curve I have seen on YouTube so far!
Comments
Did you do training-test split on the data?
I'm a simple man. When Ritvik posts, I watch.
Knocking it out of the park, as usual.