5. Stochastic Processes I

MIT 18.S096 Topics in Mathematics with Applications in Finance, Fall 2013
View the complete course: ocw.mit.edu/18-S096F13
Instructor: Choongbum Lee
*NOTE: Lecture 4 was not recorded.
This lecture introduces stochastic processes, including random walks and Markov chains.
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu

Comments: 356

  • @lucastrojanowski (3 days ago)

    The best teachers do a great job of introducing problems and then showing you the tools to solve them. With these teachers, you always know why you're doing something, you always have a sense of intuition for the problem, and you easily build a sense of experience having worked with these tools in future similar scenarios. There are so many instances wherein this professor does just that and it's a huge blessing to have access to this content for free

  • @SeikoVanPaath (3 years ago)

    Some notable timestamps:
    0:00:33 Stochastic Process
    0:10:57 (Simple) Random Walk
    0:32:43 Markov Chain
    0:58:41 Martingale
    1:06:47 Stopping Time / Optional Stopping Theorem

  • @luismoreyra6804 (3 years ago)

    Thanks pal!

  • @biaschatterjee9836 (3 years ago)

    Thank you

  • @HeitorSilvadeAlvarenga (3 years ago)

    thank you

  • @aliciaterok49 (3 years ago)

    thanks!

  • @louislee1574 (3 years ago)

    Thanks!

  • @edwardantonian7296 (7 years ago)

    This guy is absolutely fantastic. Could not have been explained more clearly, with a sound logical structure. People complaining about him should probably try lecturing themselves before offering their criticism.

  • @nickfleming3719 (3 years ago)

    And people like you are confusing people even more when they get caught up in one of this guy's many mistakes and think that THEY are the ones who are wrong.

  • @realwaynesun (2 years ago)

    @@nickfleming3719 No offense, but this is a free course for us; it's our own responsibility to find out whether the information is right when we get caught up in an instructor's mistakes. The most important ability for self-taught learners like us is to be skeptical and to check other sources when we feel confused, not only in a free course but also in paid courses. We can certainly say whatever we want in the comments, and I have often learned a lot from critical comments, but I think it is better to be grateful when we have the chance to access high-quality education like this.

  • @maxpopkov1432 (a year ago)

    Let's see you lecture. I would really like to see your treatment of such topics as Real Analysis, Complex Analysis, Functional Analysis, or Harmonic Analysis; oh please, it would be delightful to see such confidence coming from you.

  • @adamfattal9602 (11 months ago)

    @@maxpopkov1432 Easy game

  • @mattiascardecchia799 (a year ago)

    Recursive argument at 28:00: call p the probability you hit 100 first. By symmetry there's a 50% chance you hit -50 before you hit 50, and then the game is over. Once you hit 50, the problem is mirrored (you are now 50 away from 100 and 100 away from -50), so by the stationary property the probability of hitting 100 from there is 1 - p. Hence p = 0.5 * (1 - p), from which p = 1/3.
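
A quick way to sanity-check that number is a Monte Carlo run of the lecture's example; this is a minimal sketch (the -50/+100 barriers are the ones from the example, everything else is just illustrative):

```python
import random

def hits_upper_first(lower=-50, upper=100):
    """Run one +/-1 simple random walk from 0 until it hits lower or upper."""
    x = 0
    while lower < x < upper:
        x += random.choice((-1, 1))
    return x == upper

trials = 2000
wins = sum(hits_upper_first() for _ in range(trials))
print(wins / trials)  # should come out near 1/3
```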

  • @Tyokok (10 months ago)

    Thank you!

  • @HUEHUEUHEPony (3 months ago)

    Ahh, yeah, I don't know why I didn't get that the first time.

  • @sahilsood1664 (2 years ago)

    For my reference:
    0:00:33 Stochastic Process
    0:10:57 (Simple) Random Walk
    0:32:43 Markov Chain
    0:58:41 Martingale
    1:06:47 Stopping Time / Optional Stopping Theorem

  • @sahilsood1664 (2 years ago)

    49:03 ahh

  • @aidanokeeffe7928 (2 years ago)

    This is a really useful comment!

  • @bigollameo (8 years ago)

    This guy has the most elegant writing style and manner of presentation.

  • @frasersmall181 (2 years ago)

    There is a reason he teaches at MIT: this guy explains things so clearly and with ease! I'm in high school and I can understand this! Absolutely amazing.

  • @Eizengoldt (5 months ago)

    Stop the cap

  • @AE-cj8ch (5 years ago)

    Top universities have the best lecturers, making it easier for the students. It’s like a “poverty trap” for higher education.

  • @chrstfer2452 (10 months ago)

    Luckily the best ones (MIT, Stanford) recognize that and release things like this OCW

  • @sylvienguyen1010 (6 months ago)

    So you're talking about the boot theory in higher education?

  • @takashikashiwase3461 (7 years ago)

    When you don't want to read or write anymore but still want to do some math, well, you've come to the right place.

  • @ApiolJoe (2 years ago)

    27:00 The way to make the student's intuition work is via Markov chains. Set up the states -50, 0, 50, and 100, write down the transition probabilities, and compute, starting from 0, the probabilities of being absorbed at -50 or at 100 before returning to 0: these are 1/2 and 1/4 (the remaining 1/4 is the probability of coming back to 0 and starting over). Since the walk eventually ends up in one of the two absorbing states, the probability of ending up with $100 is (1/4) / (1/4 + 1/2), which indeed gives 1/3.
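
For anyone who wants to see that computation spelled out, here is a minimal numpy sketch of the coarse four-state chain described above (the state labels and probabilities follow the comment, not the lecture's notation):

```python
import numpy as np

# States -50, 0, 50, 100 (indices 0..3); -50 and 100 are absorbing,
# and from 0 or 50 the walk jumps +/-50 with probability 1/2 each.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],   # -50
    [0.5, 0.0, 0.5, 0.0],   # 0
    [0.0, 0.5, 0.0, 0.5],   # 50
    [0.0, 0.0, 0.0, 1.0],   # 100
])

Q = P[1:3, 1:3]   # transitions among the transient states {0, 50}
R = P[1:3, 3]     # one-step probabilities of absorption at 100
h = np.linalg.solve(np.eye(2) - Q, R)   # h[i] = P(absorbed at 100 | start in state i)
print(h[0])       # starting from 0 this gives 1/3
```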

  • @Boringpenguin (3 years ago)

    49:03 ahh the "click" moment, seeing all the maths pieces coming together is really satisfying

  • @samgao7996 (a year ago)

    I am currently working on understanding stochastic processes, and I was very confused by the concept of "a collection of random variables", but the trajectory picture given by the lecturer made the concept much easier to understand. For a continuous random process, if I sample at a very high frequency, I get several curves in the x(t)-t plane (the curves depending on the setting of the random process).
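
A tiny numpy sketch of that picture (the path count and horizon are arbitrary): each row is one trajectory, and each column, read across the rows, is a sample of the single random variable X_t.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, t_max = 5, 1000

steps = rng.choice([-1, 1], size=(n_paths, t_max))   # +/-1 increments
paths = np.cumsum(steps, axis=1)                     # 5 curves in the (t, x(t)) plane
print(paths.shape)       # (5, 1000)
print(paths[:, -1])      # five independent samples of X_1000
```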

  • @masterofallhesurveys (a year ago)

    Wow! What a clear and concise lecturer. His ability to cut away excess detail and keep to the pure path of understanding is excellent. He is a star.

  • @jerryzhang7124 (3 years ago)

    insane lecture, tried so many different online materials, this one is clear af!

  • @Grey_197 (2 years ago)

    OMFG! This guy is a genius at explaining and presenting concepts.

  • @michaelcheng7597 (3 years ago)

    28:00 Following the thought process of the student from the audience: after the balance reaches $50, there is a 1/2 chance for the balance to reach $100 (overall probability = 1/4) or to fall back to $0 (overall probability = 1/4). If the balance falls back to zero, we can treat that as the start of a second cycle, in which the conditional distribution is the same as in the first cycle (1/2 chance to reach -$50, 1/4 chance to reach $100, and 1/4 chance to reach $50 and then return to $0). The same holds for the third cycle, fourth cycle, etc. Therefore the overall probability for the balance to reach $100 is the infinite series 1/4 + (1/4)^2 + (1/4)^3 + ..., which gives us 1/3.
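
The closing step is just a geometric series:

P(\text{reach } \$100) = \sum_{k=1}^{\infty} \left(\tfrac{1}{4}\right)^{k} = \frac{1/4}{1 - 1/4} = \frac{1}{3}.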

  • @gamebm (2 years ago)

    yes, and this is also consistent with Lee's solution, except that in the equation, one only needs to consider three (large) steps/grids, instead of a total of A+B steps/grids :)

  • @69erthx1138 (3 years ago)

    In the 1st and 2nd cases he's talking about delta hedge parity (in trading/market practice) as reflected by trend lines. In the 3rd case he's referring to the vol of vol; in this situation one must employ stochastic volatility models.

  • @francoisallouin1865 (5 years ago)

    Bravo for the stopping time definition. Very helpful.

  • @intom1639 (7 years ago)

    This guy is amazing. His explanation is clear.

  • @qinweizhang2849 (6 years ago)

    Continuing the reasoning from 27:22: assume the probability that the game ends at 100 is x. The probability that the game reaches 50 (before -50) is 0.5, and from 50 the probability of going on to reach 100 is actually (1 - x), since the problem is mirrored. So x = 0.5*(1 - x), which gives x = 1/3.

  • @user-oz8mj1uj6e (6 years ago)

    Thanks for your efforts; I was just preparing for my first class on stochastic processes.

  • @youtubeiscruel3946 (2 years ago)

    To get the variance, apply the variance to both sides: Var(X_t) = Var(sum of the Y_i over i). Because the Y_i are i.i.d., this becomes the sum of Var(Y_i). Each Var(Y_i) = E[Y_i^2] - (E[Y_i])^2 = 1 - 0 = 1, so the variance is t.
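
A minimal numpy check of that scaling (the sample sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
t, n = 400, 20_000

# n independent simple random walks, each stopped at time t
X_t = rng.choice([-1, 1], size=(n, t)).sum(axis=1)
print(X_t.mean())   # close to 0
print(X_t.var())    # close to t = 400, so the standard deviation is about sqrt(t)
```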

  • @thedan2 (4 years ago)

    Amazing lecture. Made it A LOT easier to understand the concepts and applications. Books on the subject don't usually give examples, which makes it that much harder to understand.

  • @fidelesteves6393 (4 years ago)

    It would be an honor to be part of your class, professor. Your content is just awesome, and your care for the students' understanding is easy to see. Thank you.

  • @nickfleming3719 (3 years ago)

    All you people praising this lecturer, saying how easy and simple he makes everything, are not helping. He's making tons of mistakes, and I'm thinking I must be going crazy since everybody else seems to think this is the best lecture ever.

  • @faisalajin491 (3 years ago)

    What mistakes?

  • @nickfleming3719 (3 years ago)

    @@faisalajin491 47:02

  • @lucasgarcia78 (3 months ago)

    @@nickfleming3719 Please explain further: what is the mistake?

  • @HUEHUEUHEPony (3 months ago)

    @@lucasgarcia78 The matrix values are not in the right position.

  • @bigollameo (7 years ago)

    They have the audacity to call Choongbum Lee an instructor, when he can give a presentation so complete, elegant, and accessible that he could (and maybe should) teach ALL of the other professors at MIT a thing or two about how to give a lecture and communicate ideas throughout it. This guy is @#$%ing amazing! What a beast. God, I feel stupid in comparison.

  • @MrCmon113 (4 years ago)

    What is your problem with the word "instructor"? "How dare they call him a teacher! He is too good at teaching for that!"

  • @xinkeguo-xue (4 years ago)

    @@MrCmon113 I think they mean that he should be promoted to the position of professor. Instructors are not generally permanent positions at a university.

  • @jamesfullwood7788 (4 years ago)

    MIT is a top research university, and as such, professors at MIT (and other research institutions) are judged mostly according to the quality of their research, not teaching.

  • @caunesandrew1476 (4 years ago)

    I have seen quite a few MIT courses and every time, the teachers were amazing. This teacher is honestly not the best, although he is very much alright.

  • @aliciaa470 (4 years ago)

    The best intuition behind stochastic processes! Really good.

  • @ComposingGloves (4 years ago)

    You, sir, are a gift! Thank you for your clear lecturing!

  • @michaelwatt5007 (3 years ago)

    Absolutely fantastic video, presented with such clarity. Extremely helpful. Thank you.

  • @user-ok4wr4zm5i (2 years ago)

    A completely different level; it cannot be compared with the first lectures.

  • @user-wu9zj1ro6o (8 months ago)

    Don't spend your time on other channels. This is the best one!

  • @vijayk7387 (2 years ago)

    Very easy solution for 28:00. Let P(A) and P(B) be the probabilities that A and B occur first, respectively. The probability that we hit +$50 before -$50 is 1/2, and likewise the probability that we hit -$50 before +$50 is 1/2. If we reach +$50 first, the problem is flipped: we are now $50 away from B and $100 away from A, the mirror image of the start, so P(B | start at $50) = P(A | start at $0). Hence P(B) = 1/2 * P(A) = 1/2 * (1 - P(B)). Solving this simple equation gives P(B) = 1/3. In fact, for any A and B there is a point where we can flip the problem, so try to generalize this and come up with a proof.
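
The generalization is exactly the lecture's f(k) argument: with barriers at -A and +B, f(k) = P(hit B before -A | start at k) satisfies f(k) = \tfrac{1}{2} f(k-1) + \tfrac{1}{2} f(k+1) with f(-A) = 0 and f(B) = 1, so f is linear in k and f(k) = \frac{k + A}{A + B}; in particular f(0) = \frac{A}{A + B}. With A = 50 and B = 100 this is 50/150 = 1/3.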

  • @shakesbeer00 (8 years ago)

    1:15:16 you might want to say that E(X_\tau) = E(X0). Remember that X0 is a random variable too.

  • @mariushav (3 years ago)

    Or condition on the value of X_0

  • @michal234486 (7 years ago)

    The last corollary is neat indeed, but the assumption of the theorem seems not to be fulfilled: there does not exist T > tau, since it's possible for the random walker to bounce between the lines -50 and 100 for as long as it likes... Can somebody clarify?

  • @TamNguyen-bt7lc (3 months ago)

    56:13 I think the confusion here comes from the fact that for the other eigenvalue, which actually is less than 1 and greater than 0, the corresponding eigenvector will converge to the 0 vector. The “sum trick” he did earlier wouldn’t work because v_1 + v_2 = \lambda (v_1 + v_2) doesn’t imply that \lambda = 1 when both v_1 and v_2 are 0. Hope I didn’t overlook anything!

  • @phillipthompson1580 (8 years ago)

    This is great and simple stuff for students studying the particle theory and Brownian motion

  • @ajarivas72 (2 years ago)

    In 1996 I took the most mathematically advanced course I have ever taken: RANDOM VIBRATIONS. This lecture reminded me of that great course.

  • @4mb127 (4 years ago)

    Great lecture. Learned a lot.

  • @salmakrichene844 (3 years ago)

    OMG, you are a genius. Stochastic processes never looked this simple and intuitive.

  • @CubeCubesen (9 years ago)

    very good presentation, enjoyed it!

  • @danieldasilva2057 (8 years ago)

    I wish my lecturers could lecture in such a well structured way :(

  • @housemagicians (4 years ago)

    @42:00 Isn't the transition probability matrix incorrect? The lower-left corner should be P_{m,1} instead of P_{2,m}.

  • @kellybrower301 (3 years ago)

    Yes

  • @DilanChecker (4 months ago)

    I mean, I don't get all this praise. The guy gives an overview of the topic, but not rigorously at all. This is not the level of depth I would have expected, but it serves me well in my preparation. It feels like I have to dive deeper on my own to get a real understanding of the topic.

  • @Nikifuj908 (a month ago)

    It's a class for finance people. Did you expect a graduate course?

  • @DilanChecker (a month ago)

    @@Nikifuj908 To me it seems it's more tailored towards math majors who want to specialize in quantitative finance.

  • @kingshukdutta2064 (2 years ago)

    At 41:35, it should be P_m1 instead of P_2m.

  • @fernandoiglesiasg (7 years ago)

    Interesting to see a proof that the simple random walk is expected to take t steps in order to move sqrt(t), which is relevant in Markov chain Monte Carlo theory.

  • @conoroneill8067 (4 years ago)

    Also, if the Riemann Hypothesis is true, then it means the variance of the number of prime numbers up to x compared to the expected number given by the Prime Number Theorem is proportional to sqrt(x), which is connected to this as well.

  • @divyakrishnamalik3933 (5 years ago)

    Does anyone know of more basic content to build a stronger intuition and be able to fathom this more deeply? Also, recommendations for time series analysis would be appreciated, as I'm basically working on that.

  • @KevinLanguasco (9 years ago)

    Good presentation

  • @TheAlx32 (a year ago)

    There is a mistake at 1:15:23. The expectation of a random variable is a number, not a random variable. So E(X_tau) = E(X_0).

  • @user-xt3jo3sk6u (8 years ago)

    At 47:42, multiplying the 2x2 matrix by the vector (1,0) gives back p11 and p21, which stand for "working today and working tomorrow" (p11) and "broken today but working tomorrow" (p21), not the probabilities of working and not working.

  • @N4mch3n (8 years ago)

    It gives the probability of the machine working tomorrow, no matter whether it's broken or not today; therefore p reflects the probability of the machine working in 10 years. However, he should have multiplied by the vector (1,1) to get the same for q, since if you multiply the matrix by (1,0) the value of q will be 0.

  • @ghale10 (7 years ago)

    N4mch3n, there cannot be a vector (1,1), as the entries represent the probabilities of the machine working and not working. The entries of the vector must add up to 1. With (1,1) it would imply that the machine is working and not working at the same time.

  • @francoisallouin1865 (5 years ago)

    You are right. The error is that the entries which should sum up to one are the ones in ROWS, not columns. Because he is not multiplying A^3650 by the correct vector, he had to amend the matrix A when computing the eigenvector at 52:00.
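
To see the row-stochastic convention this thread is arguing for in action, here is a minimal numpy sketch; the 0.9/0.1 and 0.5/0.5 transition probabilities are made-up illustration values, not the ones from the lecture:

```python
import numpy as np

# A[i, j] = P(tomorrow = j | today = i); state 0 = working, 1 = broken.
# Each ROW sums to 1, and a distribution is a row vector multiplied on the left.
A = np.array([[0.9, 0.1],
              [0.5, 0.5]])

day0 = np.array([1.0, 0.0])                        # machine works on day 0
print(day0 @ np.linalg.matrix_power(A, 3650))      # [p q] after ten years

# Stationary distribution: left eigenvector of A with eigenvalue 1
vals, vecs = np.linalg.eig(A.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
print(pi / pi.sum())                               # matches the long-run [p q]
```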

  • @HenriqueSantos-xd1eg (4 years ago)

    Show me the lectures on the Poisson process.

  • @victorolagunju (2 years ago)

    Thanks a lot. Very clear explanation.

  • @davidhashford9874 (3 years ago)

    Very good explanation.

  • @Marmann100 (4 years ago)

    Can someone explain why tau would be bounded in case (i) at 1:12:23?

  • @gamebm (2 years ago)

    58:10 Someone asked whether the algebraic manipulation led to the (seemingly incorrect) conclusion that all eigenvalues lambda are 1. That is not true, since the assumption behind that equation is that we are dealing with a stationary state; therefore the conclusion is that for a stationary state the eigenvalue must be 1, as stated by Lee.

  • @eigentejas (a year ago)

    The equation was just an eigenvalue equation for A - it didn't assume anything about a stationary state. The correct argument, against the incorrect conclusion that all eigenvalues of A are 1, is that (v1 + v2) can be 0, and hence you can't divide it out to conclude much about lambda. The case where you can do it turns out to be when v1 and v2 are positive - thus the theorem about the unique highest eigenvalue isn't broken.

  • @gamebm (a year ago)

    @@eigentejas You are correct. If one assumes a stationary state (some vector (p, q) of probability that remains unchanged by further multiplying A from the left), it simply implies the existence of an eigenvalue of 1.
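
A compact way to state the point of this sub-thread (a sketch, using the column-stochastic convention written on the board): if each column of the 2x2 matrix A sums to 1 and Av = \lambda v, adding the two component equations gives v_1 + v_2 = \lambda (v_1 + v_2), so either \lambda = 1 or v_1 + v_2 = 0. The Perron-Frobenius eigenvector can be chosen with positive entries, so it must correspond to \lambda = 1; any other eigenvector has components summing to zero, which is why the "all eigenvalues are 1" reading is a false alarm.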

  • @ucleminh1616 (4 years ago)

    Who is this guy? His explanation on the subject is awesome

  • @TheLukeStein (3 years ago)

    Choongbum Lee

  • @marcoardanese6013 (27 days ago)

    simply amazing

  • @debmallyachanda5384 (3 years ago)

    I don't understand how 2 and 3 are different. They seem the same to me. 6:00

  • @HUEHUEUHEPony (3 months ago)

    Um, one has 2 paths and the other has infinitely many paths.

  • @rationalmind3567 (4 years ago)

    What are the prerequisites for this course? Is there anywhere I can find a detailed, simplified version of all the explanations relating to this topic?

  • @mitocw (4 years ago)

    Here are the prerequisites for this course: 18.01 Single Variable Calculus, 18.02 Multivariable Calculus, 18.03 Differential Equations, 18.05 Introduction to Probability and Statistics or 18.440 Probability and Random Variables, 18.06 Linear Algebra. We did a quick search of our videos and maybe this video would help? kzread.info/dash/bejne/aXeNuttyepenkdI.html See the course on MIT OpenCourseWare for more info and materials at: ocw.mit.edu/18-S096F13. Best wishes on your studies!

  • @leangsivlinh9372 (9 years ago)

    Thanks so much, MIT... it is very helpful for my short time studying at university.

  • @sonalimahajan8960 (6 years ago)

    Does a stochastic process vary linearly with time? Because in your first example the function f(t) varies linearly with time. In some books it is referred to as a random process. Quite confusing; please guide me.

  • @tomofadown (a year ago)

    Not necessarily. You may have a stochastic process with a linear delta to time, but you can also have stochastic processes with a nonlinear delta to time. For instance, think about the process X(t) = t**2 for all t.

  • @moneyeye24 (2 years ago)

    @48:24 "the probability distributions of day 3651 and day 3650 are the same." @54:04 If Av = v and day 3651 = day 3650, does the machine in his example last forever?

  • @123TeeMee (3 years ago)

    Can technically everything be a Markov chain if the history is included in the current state?

  • @mvmlego1212 (6 months ago)

    I think that I don't understand the independence property of random walks, given around 21:00. His verbal explanation sounds a lot like the Markov property, but I doubt that he would define the same thing two different ways without saying that they're equivalent. Are there any systems with the independence property, but not the Markov property, or vice-versa?

  • @TheAwesomoe (7 years ago)

    19:14 what's the name of that theorem?

  • @yamiashigaru (6 years ago)

    The reflection principle of the Wiener process / Brownian motion.

  • @Boringpenguin (3 years ago)

    and it is useful for pricing barrier options and lookback options

  • @aborucu (2 years ago)

    @23:00 How can a simple random walk be stationary when the variance grows with time? Did he mean the increments are stationary?

  • @EulerNumber_e_2.7183 (2 years ago)

    He is sooo good!

  • @Adam-rt2ir (4 years ago)

    In the definition of p_ij, was homogeneity assumed anywhere? Maybe I missed it, but it definitely needs to be a homogeneous process! That means, p_ij shouldn't depend on t.

  • @alexanderchristiansson2335 (3 years ago)

    I noted this too. I don't think it was mentioned anywhere.

  • @dhruvvansrajrathore2148 (3 years ago)

    Thanks. I was also wondering about this and now the computation at 43:15 makes sense.

  • @user-bh9ei9fl1z (4 years ago)

    The concept of stopping time used to confuse me, but now I understand it really intuitively. Thank you!

  • @nazaninrahimirad7344 (4 years ago)

    Wonderful teacher, but I couldn't understand the last example. Why is the probability 0?

  • @buraknuhemiroglu6033 (5 years ago)

    I don't understand the difference between 2 and 3 at 4:39.

  • @bereketyisehak5584 (5 years ago)

    Awesome lecture. Just found out he went to the same college for his undergrad as me

  • @gustavallen4992 (2 years ago)

    great job

  • @sandeepjangir6079 (3 years ago)

    Amazing lecture. I think at 57:56 the equation v1 + v2 = lambda(v1 + v2) only holds for lambda = 1 (the only case where both v1 and v2 can be positive); for the other eigenvalue, v1 + v2 = 0. This should extend to any dimension.

  • @jianingzhuang104 (3 years ago)

    Brilliant! Thank you.

  • @mathisdifficult666 (2 years ago)

    The matrix at 0:49:09 was wrong. Also, the transition matrix is (p_{1j}, p_{2j}, ...), not (p_{k1}, p_{k2}, ...).

  • @ARIZABEST (16 days ago)

    Can someone explain to me what the difference is between stochastic processes number 2 and 3 defined at minute 4:30? Thank you so much.

  • @sarahheddouche8024 (8 years ago)

    Where can I find articles that talk about "estimating Heston's model using MCMC"?

  • @ninmarwarda5154 (a year ago)

    Around the 32-minute mark, why is f(B) = 1 and f(-A) = 0? Thanks for the help.

  • @carolinaaldana5205 (6 years ago)

    Thanks a lot!!! Very good teacher :)

  • @TroubleMakery (2 years ago)

    Anyone know some way to get the solutions for the assignments?

  • @AReasonableName (3 years ago)

    I'm confused about the machine working/broken example. At 0:49:09 I believe it should be [1 0]*A^3650 = [p q]. Then for the eigenvector at 1:17:40 it should be A(transpose)*[v1, v2] = [v1, v2]; as you can see, he modified the matrix from A to A transpose. With the way it is shown here, p and q would have a different meaning.

  • @mathisdifficult666 (2 years ago)

    i understand now😂 the matrix A at 0:49:09 is wrong😂

  • @satvikp.s2688 (a year ago)

    Yeah I was having this exact same confusion, what you've said seems to be perfectly right, now it all makes sense to me. Thanks a lot!

  • @yassinekened3138 (9 years ago)

    Where can I find the 4th lecture?

  • @mitocw (9 years ago)

    Lecture 4 was not recorded. The topic was "Matrix Primer". See the MIT OpenCourseWare site for more course information and materials at ocw.mit.edu/18-S096F13

  • @yassinekened3138 (9 years ago)

    Ok, thanks!

  • @kenichimori8533 (4 years ago)

    Stochastic equations = Stochastic Processes I

  • @Juoa794 (8 months ago)

    Isn’t discrete the same as continuous, at the limit?

  • @user-ox4bj4zx8z (7 years ago)

    Where are lectures 4 and 22?

  • @JIA1122 (2 months ago)

    17:15 If the variance is t, how is the standard deviation equal to the square root of t? Isn't it supposed to be just 1, since you'd divide the variance by t first?

  • @shubhamsumanvishwakarma7113 (a year ago)

    47:52 Shouldn't we pre-multiply here, i.e. [1 0](A^3650) = [p q], with [1 0] as a 1x2 row vector, instead of post-multiplying?

  • @cmarkoz (5 years ago)

    Very clear!

  • @adwoayeboah1537 (8 years ago)

    This is a good video. It's just that there is a little mistake in the transition matrix: with the matrix provided, the last entry in the first column should have been P subscript 3m, not 2m.

  • @RandomPerson-pp7ti (a year ago)

    I believe it should have been m1.

  • @nkuduuchevictor7824 (2 years ago)

    WOW... THANKS FOR THIS....

  • @Nikita.mourya (3 years ago)

    Please suggest a book for stochastic processes.

  • @dicksonh (8 years ago)

    At 18:41, what is the probability of being bounded by sqrt(t) and -sqrt(t)?

  • @PRAKHALGOYAL (7 years ago)

    I guess this is because the standard deviation is sqrt(t) and the mean is zero, so the variable will mostly be between sqrt(t) and -sqrt(t). Within 1 standard deviation the probability is only about 68%, but he mentioned a band of 100 standard deviations, so that probability is close to 100% (meaning X essentially never leaves that band).
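
A one-line check of that 68% figure under the normal approximation (X_t / sqrt(t) is approximately N(0,1) for large t):

```python
from math import erf, sqrt

# P(-sqrt(t) <= X_t <= sqrt(t)) ~ P(|Z| <= 1) = erf(1/sqrt(2)) for a standard normal Z
print(erf(1 / sqrt(2)))   # ~ 0.683; a 100-standard-deviation band has probability essentially 1
```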

  • @bobbob-wq4kj (6 years ago)

    32:30... that f(k) formula thing...how would that be solved?

  • @Krahltan (5 years ago)

    It is a homogeneous equation, so we only look for a solution to the homogeneous equation. Let w(k) = b*w(k-1); then w(k) = b^k * w(0) (by plugging it into itself k times). Plugging this into the equation and dividing through by b^(k-1)*w(0), we get 0.5*b^2 - b + 0.5 = 0. Solving this we get only a single (repeated) root, b = 1. So we try the form w(k) = Ck + D. Using the boundary conditions w(100) = 1 and w(-50) = 0 we get C = 1/150 and D = 1/3. The original question asked for w(0) = C*0 + D = D = 1/3. Search Google for "solving linear recurrence relations". Note that if the probability of going up is not the same as going down, we do not end up with a single root and the trial form of w(k) would not be so simple.
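
The same boundary-value problem can also be solved numerically; here is a minimal sketch (the -50/+100 barriers are from the lecture's example, everything else is illustrative):

```python
import numpy as np

A, B = 50, 100                         # absorbing barriers at -A and +B
ks = list(range(-A + 1, B))            # interior states -49 .. 99
n = len(ks)

# w(k) = 0.5*w(k-1) + 0.5*w(k+1), with w(-A) = 0 and w(B) = 1
M = np.zeros((n, n))
rhs = np.zeros(n)
for row, k in enumerate(ks):
    M[row, row] = 1.0
    if k - 1 > -A:
        M[row, row - 1] = -0.5         # w(k-1) is an unknown
    if k + 1 < B:
        M[row, row + 1] = -0.5         # w(k+1) is an unknown
    else:
        rhs[row] = 0.5                 # boundary term 0.5 * w(B) = 0.5

w = np.linalg.solve(M, rhs)
print(w[ks.index(0)])                  # ~ 1/3, matching w(k) = (k + A)/(A + B) at k = 0
```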

  • @pycool7595 (4 years ago)

    Shouldn't it be [1 0] * A^3650 = [p q] ?

  • @kaydenwoodsmusic (3 years ago)

    I believe it has to do with the eigenvector relationship... Av = (lambda)(v)

  • @remlatzargonix1329 (4 years ago)

    Could these processes also be indexed by both time and space (or location), where location may be derived from GIS coordinates? So they could be used for spatial data analysis or spatio-temporal data analysis.

  • @LeCoolCroco (4 years ago)

    Remlat Zargonix yes, but space is just another discrete space variable

  • @Grassmpl (6 years ago)

    At 47:30, the interpretations of p and q are incorrect unless A is transposed.

  • @Grassmpl (6 years ago)

    It's a bit too late to flip it at 51:55; students have already copied down incorrect notes.

  • @sjx2321 (6 years ago)

    Yes, I think this wouldn't have happened if he had written down the matrix in the form of the conditional probabilities, like P(w|f), before filling out A with numbers.

  • @hereNtheregaming (3 years ago)

    Can you explain that with a little bit more information?

  • @nickfleming3719 (3 years ago)

    @@Grassmpl Here my mind is getting fucked because the probabilities across the rows are supposed to add to 1, then he switches them to columns out of nowhere and I'm like where was that even supposed to happen?

  • @biliatersinaga720 (9 years ago)

    Thank you for the lecture.

  • @kbisht3680 (3 years ago)

    this guy is a genius

  • @nirmalkumarsingh1092 (5 years ago)

    Just before 44:40 he said the random walk does not have a finite state set, but earlier he said that the values stay limited under a curve with standard deviation sqrt(t)? Can anyone please help?

  • @HUEHUEUHEPony (3 months ago)

    But not finite

  • @dharmiknaik1772 (a year ago)

    16:53, how does the variance equal t?

  • @Grassmpl (6 years ago)

    At the very last corollary, what is the T from the theorem that's applied to the corollary?

  • @Grassmpl (6 years ago)

    Personally, I argue that this T doesn't exist, so the theorem can't be used. Although P(tau = infinity) = 0, we still have that for every natural number k, P(tau > k) > 0, even though this probability becomes really small as k gets large. So whatever T you pick, I can always argue against the claim using k = T + 1.

  • @Rannosaurus (2 years ago)

    I think it should be N(0, 1/4) at 17:13