solving an infinite differential equation
Chalk found Smol Math Man pacing back and forth. "what's wrong Michael? Cat got your tongue?" said Chalk in a pompous manner. "This differential equation...it's...it's infinite...I don't know if I can solve it." Chalk looked Michael in the eye, "I believe in you, Michael. You can solve it." Then the differential equation swallowed the smol math man whole. Did Michael escape? Find out at the end of the credits of the video after this one.
🌟Support the channel🌟
Patreon: / michaelpennmath
Merch: teespring.com/stores/michael-...
My amazon shop: www.amazon.com/shop/michaelpenn
🟢 Discord: / discord
🌟my other channels🌟
mathmajor: / @mathmajor
pennpav podcast: / @thepennpavpodcast7878
🌟My Links🌟
Personal Website: www.michael-penn.net
Instagram: / melp2718
Twitter: / michaelpennmath
Randolph College Math: www.randolphcollege.edu/mathem...
Research Gate profile: www.researchgate.net/profile/...
Google Scholar profile: scholar.google.com/citations?...
🌟How I make Thumbnails🌟
Canva: partner.canva.com/c/3036853/6...
Color Palette: coolors.co/?ref=61d217df7d705...
🌟Suggest a problem🌟
forms.gle/ea7Pw7HcKePGB4my5
Comments: 405
Props to the editor of these videos for adding the best video descriptions on KZread
@MichaelPennMath
A year ago
Awww thank you very much! That means a lot to me. -Stephanie, MP Editor
@danyilpoliakov8445
A year ago
Don't you dare like the Editor's reply one more time. It is nice as it is 😅
@jonasdaverio9369
A year ago
@@danyilpoliakov8445 It's still holding
@jongyon7192p
A year ago
An infinite differential equation SCP that becomes a bear and eats you
@Errenium
A year ago
nice pfp
Arguably the first method is also sketchy! I was always taught that that recursive method of dealing with infinite sums is dubious unless you can prove it converges another way afterwards. In this case convergence and equality is very easy to show, but that method can fail pretty badly for not-obviously-divergent divergent sums.
@TaladrisKpop
A year ago
Yes, for example, you can get the infamous 1+2+4+8+16+... = -1 or 1-1+1-1+1-... = 1/2
@thomasdalton1508
A year ago
Yes, if you are going to use that kind of method you really should check the solution actually works. In this case, you'll get 1/2+1/4+1/8+... which does converge and converges to 1, which is exactly what we need.
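This is indeed easy to check numerically; a minimal sketch (the sample point x = 0.7 and the 60-term cutoff are arbitrary choices):

```python
import math

# For y = e^(x/2), the nth derivative is (1/2)^n e^(x/2), so the RHS of
# y = y' + y'' + ... is a geometric series with ratio 1/2 (which sums to 1).
x = 0.7  # arbitrary sample point
y = math.exp(x / 2)
rhs = sum(0.5 ** n * math.exp(x / 2) for n in range(1, 60))
print(abs(y - rhs))  # ~0: the series converges to y
```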
@Owen_loves_Butters
A year ago
Yep. Hence why you'll find videos online claiming 1+2+3+4+5+...=-1/12, or 1+2+4+8+16+...=-1 (both are nonsense results, because you're trying to assign a value to a series that doesn't have one)
@gauthierruberti8065
A year ago
Thank you for your comment. I was having that same doubt, but I didn't remember whether the first method was allowed
@plasmaballin
A year ago
This is correct. However, the solution obtained in the video can easily be shown to converge, so it is valid.
Hi, Michael! This is a great problem. You can see that the original does have infinitely many solutions (well, let's say candidates for solutions) by making a different choice of where to start the infinite sum on the right hand side. For instance, with y = y' + y'' + y''' + y^(4) + ..., instead move y' and y'' to the left hand side to obtain: y - y' - y'' = y''' + y^(4) + ... = D^2(y' + y'' + y''' + ...) = D^2(y) = y''. Thus the solutions to y - y' - 2y'' = 0 are also solutions to the infinite order differential equation. We recover e^(x/2) as a solution but also obtain a "new" one: e^(-x). However, the infinite sum of derivatives here doesn't converge. By an analogous argument, it looks like the solutions to y - y' - y'' - ... - 2y^(n) = 0 for a positive integer n might solve the infinite order differential equation -- assuming the infinite sum of derivatives converges.
Answer to the question about the finite version: If y=y'+y''+...+y^(n) and we substitute y=e^(kx), we get 1=k+k²+...+k^n, so 1=((1-k^(n+1))/(1-k))-1. This can be rearranged to k^(n+1)-2k+1=0. In the limit as n->infinity, we can see that we must restrict |k|≤1. Furthermore, it's obvious k≠0, so 0<|k|≤1
@fmaykot
A year ago
I'm afraid the limiting procedure in case 2 is a bit more subtle than that. You did not take into account the fact that both r and θ can (and in fact do) depend on n. If θ ~ α/(n+1) as n -> inf, for example, then R ~ 1 as n -> inf and α = 2*pi*m for integers 0
The sketchy solution is similar to using the Laplace transform.
@nafrost2787
A year ago
I think using a Laplace transform is a slightly better solution, because it justifies treating the derivative operator as a number in the geometric series formula: (if I remember correctly) in the s domain the derivative operator is a number. Using the Laplace transform also, if not solving them completely, at least simplifies the ODEs given at the end of the video to polynomial equations that can be solved numerically, and it also helps explain why there is only one solution to the ODE of infinite degree, even though every finite case has n solutions. This comes from the fact that a power series can have any number of roots, even though the nth partial sum has n roots (it is a polynomial of degree n); for example, exp doesn't have any roots, even complex ones, while sin and cos have an infinite number of roots.
This was such a fun one. You're absolutely killing it man
Some information I found on the question: solving the differential equation for a finite number of terms is the same as solving the equation 1=sum_{j=1}^n a^j, where I used y=exp(a*x) as a trial function. When I plot all the solutions for a large n, the solutions lie on the unit circle in the complex plane, except for one point: the point that is supposed to be at a=1 lies at a=1/2. This would mean that when we take the limit as n goes to infinity, all the points on the unit circle somehow "cancel out" and the point at a=1/2 remains.
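That observation can be reproduced without plotting; a quick sketch assuming numpy is available (n = 30 and the 0.2 tolerance band are arbitrary choices):

```python
import numpy as np

# Characteristic equation of the n-term truncation y = y' + ... + y^(n):
# a + a^2 + ... + a^n = 1, i.e. a^n + a^(n-1) + ... + a - 1 = 0.
n = 30
roots = np.roots([1] * n + [-1])  # coefficients, highest power first

near_half = [r for r in roots if abs(r - 0.5) < 1e-3]
near_circle = [r for r in roots if abs(abs(r) - 1) < 0.2]
print(len(near_half), len(near_circle))  # 1 root near a = 1/2, the rest near |a| = 1
```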
@oni8337
A year ago
how could i have forgotten about complex number branches
Regarding the missing infinity of solutions, one way of seeing where they go seems to be as follows. Differential equations of the form y = y'+y''+...+y^(n) (y^(n) is the nth derivative, not to be confused with y evaluated at n) are known to have solutions that are linear combinations of e^(ax), and we need to find the right "a". There are n "a" values. However, only one of them has |a|<1; the rest have |a|>1. At least this is what it seems from playing with Wolfram Alpha up to n = 20. The problem is that y^(n) = (a^n) y. Since |y|>0, if |a|>1, the value of y^(n) diverges as n goes to infinity, whatever x is in y(x). Therefore, these solutions are not well-behaved, and we need to set their coefficient to zero in the general solution (a linear combination of e^(ax); otherwise y is not defined). I guess there is a way to prove that only one of the roots has |a|<1.
@whatthehelliswrongwithyou
A year ago
But doesn't y diverge if a>0, not a>1? Also, keeping only non-divergent solutions is a great argument in physics, but here they are still solutions; there's nothing bad about divergence at infinity. At least that's what I think, might be wrong
@whatthehelliswrongwithyou
A year ago
oh, the sum of derivatives doesn't converge at fixed x, then that's the problem
@user-sk5zz5cq9y
A year ago
@@whatthehelliswrongwithyou Yes, y diverges as x approaches infinity if a is positive; he was talking about the existence of the solution
@ManuelFortin
A year ago
@@whatthehelliswrongwithyou Yes, that's what I meant. Sorry for the late reply.
@martinkuffer5643
A year ago
We know the Cs are the roots of the characteristic polynomial of the equation. There are n roots (counting multiplicity) of a polynomial of degree n and thus n solutions. In the new equation this still holds, but now you have a "polynomial of infinite degree", i.e. a non-polynomial analytic function. These can have any number of roots (by the procedure you showed, where the roots go off to infinity as you add terms to the series), and thus there can be any number of solutions to our original equation :)
Here's one simplification to the last set of differential equations : y+y'+y'' + ... + y^(n) = y^(n+1) + y^(n+2) + .... --- (1) Adding y'+y''+ ... + y^(n) to both sides we get : y+2(y'+y''+ ... + y^(n)) = y' + y'' + .... --- (2) Differentiating (1) then adding y'+...+y^(n+1) to both sides we get : 2(y'+y''+...+y^(n+1)) = y' + y'' + .... --(3) Comparing (2) and (3) we get : y=2y^(n+1) which matches with the start of the problem. If y=Ce^(ax), we can find that a is the (n+1)'th root of 1/2. I wonder if there are other solutions too !
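The y = 2y^(n+1) conclusion can be sanity-checked by comparing the two geometric sums directly; a small sketch, not a proof (n = 3 and the 400-term tail cutoff are arbitrary choices):

```python
# Take the real root a = (1/2)^(1/(n+1)), so that for y = e^(ax) every
# derivative is y^(k) = a^k e^(ax) and equation (1) reduces to comparing
# two geometric sums of powers of a.
n = 3  # arbitrary choice
a = 0.5 ** (1 / (n + 1))

lhs = sum(a ** k for k in range(0, n + 1))    # y + y' + ... + y^(n), per unit e^(ax)
rhs = sum(a ** k for k in range(n + 1, 400))  # y^(n+1) + y^(n+2) + ..., truncated tail
print(abs(lhs - rhs))  # ~0: both sides agree, and the tail converges since 0 < a < 1
```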
@danielrettich3083
A year ago
I really liked the "sketchy" method, probably because I'm a physicist xD, and thus tried it on this generalized form of the problem. And it actually leads to the same simplified differential equation you got, namely y=2y^(n+1), which I find absolutely amazing
@PleegWat
A year ago
@@danielrettich3083 Same here. Remember to include all n+1 (complex) branches of (n+1)√2 to get all solutions.
@weeblol4050
5 months ago
good job
I think a really good idea for a follow-up video would be an explanation of why we don't have infinitely many linearly independent functions that solve the equation. Or perhaps they do exist, and that could be shown. I've noticed that when substituting in these infinitely recursive relationships, we often lose generality. For example, for the function y=x^x^x^x^x... we can do a similar substitution as we did in the video and find that y=x^y, which produces many solutions but only converges for e^(-e) ≤ x ≤ e^(1/e)
I think for the general problem at 9:07, you can just apply the first method, so you get y+y'+...+y^(n)=y^(n+1)+(y+y'+...+y^(n))'. When you expand the derivative, everything except y^(n+1) and y cancels out, so you get y=2*y^(n+1). From there it's relatively straightforward, and you get y=C*e^(x/(2^(1/(n+1))*e^(2*pi*i*m/(n+1)))) for a real number C and an integer m. That means you actually have n+1 families in general, so the full solution is a linear combination of these.
@rohitashwaKundu91
A year ago
Yes, I have done the same thing, but isn't the solution coming out as y=Ce^(x/(2^(1/n)))?
@mathieuaurousseau100
8 months ago
@@rohitashwaKundu91 It should be y=Ce^(ax) where a^(n+1)=1/2 (with C a complex number, I don't know why they said real), and the numbers with a^(n+1)=1/2 are 2^(-1/(n+1))*e^(2*m*pi*i/(n+1)) with m an integer between 0 and n (inclusive)
Lovely problem! And lovely follow up question ^^. Something really aesthetically pleasing in this problem. Maybe it has to do with the perceived difficulty of solving it, ending in a really nice and simple solution. Lovely.
That perfect infinity symbol at 4:45 touched my soul
Ahh, three seconds in, "The trivial solution works beautifully."
@cara-seyun
A year ago
0 = 0 + 0 + 0 + 0…
For one of the follow-on questions there is a cute result which pops up. f=f'+...+f(n) when n is congruent to 1 mod 4. In that case you can use a sine function because the other derivatives cancel themselves out. I was looking for ways to fit this self-canceling concept into the other finite equations, but I have been unsuccessful.
@10:15 y + y' ≠ y'' + y''' "But I'll let you do it as homework" 😆😆
@weeblol4050
5 months ago
trivial y + y' = y' + 2y''
@Horinius
26 days ago
@@weeblol4050 No, it is not. I don't know how you got y + y' = y' + 2y''. My comment actually told viewers that Michael made a mistake at @10:15. The correct answer should be y + y' = 2y'' + 2y'''
@weeblol4050
26 days ago
@@Horinius y + y' = y''+(y''+ y'''+...)' = y''+(y+ y')' = y'' + y' + y'' = y'+2y'' If you can find a mistake it would be really helpful
@weeblol4050
26 days ago
@@Horinius But yours also works: y + y' = y'' + y''' + (y'' + y''' + ...)'' = 2y'' + 2y'''. Let's look at 2x^2 - 1 = 0 and 2x^3 + 2x^2 - x - 1 = (2x^2 - 1)(x+1) = 0. Now let's look at y + y' = y'' + y''' + y^(IV) + (y'' + y''' + ...)''' = y'' + 2y''' + 2y^(IV), and 2x^4 + 2x^3 + x^2 - x - 1 = (2x^3 + 2x^2 - x - 1)(x+1) - 2x^3 + x = 2(x^2 - 1/2)(x+1)^2 - 2x^3 + x = 0, so there are goodness knows how many solutions. Some of the solutions are y(x) = Ae^(-x) + Be^(x/sqrt(2)) + Ce^(-x/sqrt(2)). So you are correct: in a way you also found the solution that I wrote, plus the one with constant A; I found only 2
@weeblol4050
26 days ago
@@Horinius Let's also check y = y' + y'' + (y' + y'' + ...)'' = y' + 2y''; 2x^2 + x - 1 = (2x-1)(x+1), so here too e^(-x) is a solution, and 3:32 is also incomplete. Just for sanity, let's check y = y' + y'' + y''' + (y' + y'' + y''' + ...)''' = y' + y'' + 2y''', and 2x^3 + x^2 + x - 1 = (2x^2 + x - 1)(x+1) - 2x^2 + x = 0, so this one doesn't have x = -1 as a root and yields yet more solutions; goodness, I don't want to check any more, this is cursed. I guess it is to be expected that an infinite order differential equation has infinitely many solutions
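For anyone following this thread, the basic reduction y + y' = y'' + y''' + ... ⇒ y = 2y'' is easy to test numerically with y = e^(x/sqrt(2)); a small sketch (the sample point and the 200-term cutoff are arbitrary choices):

```python
import math

# Variant y + y' = y'' + y''' + ...; try y = e^(ax) with a = 1/sqrt(2),
# so that 2a^2 = 1 (i.e. y = 2y''); then y^(k) = a^k e^(ax).
a = 1 / math.sqrt(2)
x = 1.3  # arbitrary sample point
y = math.exp(a * x)

lhs = y + a * y                               # y + y'
rhs = sum(a ** k * y for k in range(2, 200))  # y'' + y''' + ... (geometric, |a| < 1)
print(abs(lhs - rhs))  # ~0: e^(x/sqrt(2)) satisfies this variant
```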
Well, that is totally unexpected to separate the differential operator💀
@kitochizxik5786
A year ago
Hi Kurisu
If you decide to associate from the 3rd derivative onward, you get that it equals the 2nd derivative of y, so you get y = y' + y'' + y'', i.e. y = y' + 2y'', and the family of solutions y = c_1e^(x/2) + c_2e^(-x). So we do get infinitely many families of solutions; it's just a matter of where we associate. If we start with the 4th derivative, we'll get 3 solutions, since we have y in terms of the first, second, and third derivatives. And so on.
@patato5555
A year ago
You can take this a bit further by noting that the characteristic polynomial of keeping the first n derivatives will factor as (r-1/2)(1+r+r^2+…+r^(n-1)). In general, y=ce^(rx) where r=1/2 or r is a root of 1+r+…+r^(n-1) for some n. Of course, there could be more solutions than these.
@mizarimomochi4378
A year ago
@@patato5555 I agree. Except they'd be roots of 2x^n + x^(n - 1) + ... + x - 1 if I'm not mistaken.
@patato5555
A year ago
@@mizarimomochi4378 If you set the expression equal to 0, divide by 2, and then factor out the r-1/2, they will be equivalent.
@mizarimomochi4378
A year ago
@patato5555 Sorry, I didn't notice the first time. My bad.
@patato5555
A year ago
@@mizarimomochi4378 No worries!
As with any algebraic manipulation of series (or more generally, limits), one should carefully check convergence. Without it, the first method only shows that IF a solution exists, then it has to be of the form y=Ce^(x/2)
@honourabledoctoredwinmoria3126
A year ago
It's a fair point, but y^(n) of Ce^(ax) is (a^n)Ce^(ax). So what we actually have here on the RHS is a geometric series (1/2 + 1/4 + 1/8 + ...)Ce^(x/2), and on the left: Ce^(x/2). They equal each other if and only if that geometric series converges to 1, and of course it does. It's a valid solution, and I suspect it is the only valid solution. There are other apparent solutions, but they do not actually converge.
@TaladrisKpop
A year ago
@@honourabledoctoredwinmoria3126 Yes, convergence is not difficult to check, but it shouldn't be left out
@broccoloodle
A year ago
Well, you first assume a solution exists, you find all candidate solutions, then later on you remove all the candidates that do not converge. I find nothing wrong with that logic
@TaladrisKpop
A year ago
@Khanh Nguyen Ngoc Did I say the opposite? But where in the video do they eliminate the divergent solutions? If not done, the solution of the problem is incomplete.
@broccoloodle
A year ago
@@TaladrisKpop I think verifying that the solutions don't diverge is so obvious that Michael chose not to show it in the video. What he wanted to deliver is actually the second method, and to trigger our curiosity with the additional problems at the end of the video.
Did Michael escape? Will he be able to cut his way out of the belly of beast with only the Heaviside operator? Stay tuned, viewers! 😮
I love these videos for two reasons: one, the insight on the maths itself, two, the insight on how to cleanly draw the symbols!
I love the sketchy proof!!! Operator analysis looks so wild without context though. Like, that whole segment around 5:30 is crazy. If I had seen (1 - D)^-1 as a high school student I would have been mind-blown; my teacher would never have heard the end of it
For the follow-up questions, you can bracket the first one as (y + y') = (y'' + y''') + (y^(4) + y^(5)) + ..., and therefore defining z = y + y' this becomes z = z'' + z^(4) + ..., so the differential equation can be solved in two steps. This generalizes to the n case by defining z = y + y' + ... + y^(n) so that the DE can be rewritten as z = z^(n+1) + z^(2n+2) + ..., which by the same method used in the first half can simplify to z = 2z^(n+1). Then you get a sum of exponentials in the complex roots of 1/2 and throw that mess into the RHS of y + y' + ... + y^(n) = z. So y(x) will ultimately be a sum of complex exponentials but I imagine the coefficients would get messy fairly quickly. Edit: changed n to n+1 in the RHS of the rewritten equation, I had counted that wrong. Edit 2: actually not that bad, check replies.
@aceofhearts37
A year ago
So, actually not that messy. From now on I'll use Σ to mean the sum from k=0 to k=n. The solution to z = 2z^(n+1) is a function of the form z(x) = Σ (A_k)exp[(λ_k)x], where the A_k are any complex numbers and λ_k = [(1/2)^(n+1)] exp(2kπi/(n+1)) is one of the (n+1)st roots of 1/2. Therefore, the solution to y + ... + y^(n) = z will have a homogeneous part (a sum of exponentials involving the roots of 1 + λ + ... + λ^n = 0) and a particular solution, which we can assume has the form z(x) = Σ (B_k)exp[(λ_k)x], for some coefficients B_k that we have to compute. By comparing with the RHS we get (1+λ_k+...+λ_k^n)B_k = A_k, which by the partial sum of a geometric series and λ_k^(n+1) = 1/2 simplifies to B_k = 2A_k(1-λ_k). Since A_k can be chosen to be any complex number, B_k is also any complex number since 2(1-λ_k) is always nonzero. Then if we want real solutions we can pick the B_k to be complex conjugates as needed.
@Joe-nh9fy
A year ago
@@aceofhearts37 This is what I worked out as well. Well actually I got y = 2y^(n+1) instead of z. I get this by using the original equation, and a second equation which is the derivative of the first equation. Solve for y^(1) in both equations. Then set those expression equal to each other and solve for y. But I believe your general function is the solution for y
@matteopriotto5131
A year ago
@@aceofhearts37 lambda_k should be {(1/2)^[1/(n+1)]}exp(2k(pi)i/(n+1)) I think
@aceofhearts37
A year ago
@@matteopriotto5131 You're right, good catch.
@matteopriotto5131
A year ago
@@aceofhearts37 glad I helped
"Okay. Nice." 😂😂❤❤ love it every time I hear that.
LOL. The sum(D^n)=D/(1-D) operator expression is so cool. It wouldn't surprise me if the manipulations you did can be made perfectly rigorous in some formal way.
For the truncated version y=y'+y''+y'''+...+y^(n), let r be a root of x+x^2+x^3+...+x^n=1. Then it is easy to show that y=exp(rx) is a solution to the truncated equation. Since there are n such roots, this gives you the basis of the expected n-dimensional solution space: exp(r_1 x), exp(r_2 x), ..., exp(r_n x). Now the hand-wavey part: as n approaches infinity, the equation x+x^2+...+x^n=1 approaches x/(1-x)=1, which has the unique solution x=1/2 as found in the video. Not really satisfying. I feel there is a nicer geometric argument, but I don't see it as of now.
@alexsokolov8009
A year ago
You can simplify your characteristic equation using formula for sum of geometric series: (x^(n+1) - x) / (x - 1) = 1 which is the same as x^(n+1) - 2*x + 1 = 0, x != 1 It is easy to show that the function f(x) = x^(n+1) - 2*x + 1 has exactly 2 real roots for odd n and 3 real roots for even n. Excluding x=1 will give us 1 or 2 real solutions depending on parity of n. I guess these observations show that an infinite equation from the video has no more than 2 real solutions. However, there are complex solutions, which should also be considered
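The real-root count claimed above is easy to spot-check; a sketch assuming numpy is available (the particular n values are arbitrary choices):

```python
import numpy as np

def real_root_count(n, tol=1e-7):
    # f(x) = x^(n+1) - 2x + 1, coefficients listed from highest power down
    coeffs = [1] + [0] * (n - 1) + [-2, 1]
    return sum(1 for r in np.roots(coeffs) if abs(r.imag) < tol)

# Claimed: 2 real roots (including the excluded x = 1) for odd n, 3 for even n.
print([real_root_count(n) for n in (3, 5, 7)])  # [2, 2, 2]
print([real_root_count(n) for n in (4, 6, 8)])  # [3, 3, 3]
```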
Very thought provoking. I honestly found the "sketchy" solution very sketchy - I didn't understand the manipulations of the D operator.
Jesus christ you came at the right time i love yooooooouuuuuu i needed this desperately
To answer your question, Mr. Penn: I think having only one solution is a consequence of the analyticity of the solution; the infinite sum forces the coefficients (a_k) in the analytic expansion to be defined uniquely. Thank you for your amazing videos.
10:09 That is most definitely wrong. I think it must be y + y' = y' + 2y''
@krisbrandenberger544
A year ago
No. y+y'=2(y''+y''') from doing something similar with the goal equation.
@petersievert6830
A year ago
@@krisbrandenberger544 Well, I am not wrong, I dare say. Your equation is correct as well, though. You cut off beginning after y''' and made the rest into (y+y')'', while I did so after y'' and made the rest into (y+y')'. Honestly, my equation seems much less useful for getting to a solution though.
I find it absolutely game-changing that applying the geometric series worked 😮😮
I immediately thought of the "sketchy" solution with D as a linear operator 😆. When the characteristic "polynomial" is actually not a polynomial because it lacks a finite degree, then usually there's some formula that can be applied to its coefficients (otherwise, how would you define it?). In that case, my hunch is that there's some manipulation that can be performed along the lines of techniques used with generating functions and recursive sequences that will produce a diffeq having an order equal to the degree of the formula.
@PeterBarnes2
A year ago
I prefer using a slightly more direct approach to using linear operators. [1]y = [1/(1-D_x) - 1]y {|y'/y| (This is equivalent to the given equation, in terms of Differential Operators, with the condition (which might not be necessary) coming from 1/1-s having a pole at s=1. This pole should manifest as divergence in certain exponential solutions, namely those with parameter 's' (from e^sx) outside the radius of convergence of this 'definition of 1/1-s.' I say it 'should' manifest this way, but this theory is not developed enough to be certain of the divergence, at least to my knowledge. Fortunately the final solution satisfies this condition anyway, so it is not repeated.) 0 = [1/(1-D_x) - 2]y (Moving terms between sides of the equation, as both operators are operating on the same term 'y.') 1/(1-s) - 2 = 0 (The exponential solutions of any (there is a theorem I've discovered, more or less, to this generalization from polynomials to any function, indeed) Constant-Coefficient Linear DE are found by using the characteristic equation to find the eigenfunctions of the form e^sx, with s the characteristic equation's independent variable.) 1 - 2(1-s) = 0 -1+2s = 0, s=1/2 (Just algebra, here. Having solved for 's,' e^sx are our eigenfunctions, thus:) y = Ce^(x/2) Really a very short and simple approach. Now, if you want a more difficult approach, you can use the fact that [1/(s-D_x)] is a variation of the Laplace transform, remembering that [e^(bD_x)]f(x) = f(x+b) and int{0, inf} e^-at dt = 1/a and then you can try to solve the resulting integral equation. It's a good bit of fun, and certainly possible, if a little unnecessary in this problem. [Edit: I did this without watching the video first. My mistake, it's almost exactly as presented! Oh well...]
@ilonachan
A year ago
What's really great here is that we don't actually need to get all that convoluted to get rid of the sketchiness; we can just not do the step with the weird "function division" thing. While we often write the geometric formula as that ratio, its derivation works in any ring if we just skip that final simplification! So with our present ring of linear operators, where addition is adding the results, multiplication is chained application, and division is not generally defined, we can still skip directly from the (1)y=(sum)y description to the (1-D)y=Dy statement. ...although, does D^(n+1) "converge" in some meaningful way? That'd be required for the infinite case, right? The finite case ofc just gives us a relatively simple degree n+1 differential equation, but I forget how exactly those are solved rn...
@PeterBarnes2
A year ago
@@ilonachan x^n doesn't converge over all x. The domain for D^n to converge over is the space of functions. That's a pretty broad domain, so I prefer to stay within the complex meromorphic functions. (Which, despite including complex functions, is much more restrictive and well-behaved.) I'm pretty sure of these two things: One of these extended differential operators f(D_x) converges for an exponential function e^sx if and only if the function f(s) converges at 's.' As well, polynomials converge if f(0) converges, and polynomials times exponentials P(x)e^sx converge when e^sx converges. This much I'm fairly confident about. Further, other functions than exponentials or polynomials converge for a given differential operator depending on how the function is expressed. For example, a taylor series may diverge on its terms alone, but an exponential times a taylor series may converge absolutely, even when the exponential times the series equals the original series. More than that, integral expressions of some function might converge or diverge if they contain exponential terms that remain inside or go outside, respectively, the domain of convergence of the differential operator. This much is actually given (I think) by the previous thing. I have no idea about functions which are in no way expressed as exponentials or polynomials. Not just regarding their convergence under various differential operators, but even how to evaluate them. There is something which can, theoretically, help. Functions of the derivative applied to functions of the variable can be reversed: [f(D_x)] (g(x)*y(x)) = [[g(D_z + s)]{z=D_x} f(z)]{s=x} (y(x)) It's messy, but cleans up when y=1: [f(D_x)] g(x) = [g(D_z + x)]{z=0} f(z) This allows you to evaluate some expressions more easily. 
Because it's easy to evaluate exponentials of derivative operators (e^bD is the shift operator by 'b'), and polynomials are basically given (D^p is the pth derivative operator for p a natural number) you can basically evaluate any differential operator on functions expressed in terms of exponentials and polynomials. This works when the exponentials or polynomials are under an integral, or in a sum, or up a tree, anything! (By 'up a tree' I'm not actually referring to anything specific. For example, I don't mean towers of exponentials: I am still working on exponentials of polynomials e^(x^p), as they do not behave at all. [e^e^D]y=0 might be the DE for which the gamma function is the solution. Or maybe not, it's hard to tell. Maybe with a minus sign somewhere, but then it doesn't work, it's rather confusing, actually.) The fact that exponentials behave better than polynomials motivates me to try and express one in terms of the other. So far I've found one expression which requires a limit, which isn't satisfactory. I've looked at distributions (a generalization of functions), and found a way of getting to it from what are basically derivatives of the sign() function. This, interestingly, gives the exact same result with the limit and everything. I've looked at expressing the logarithm, which also gives the same exact result. Maybe thinking from polylogarithms, or something else entirely? Very uncertain.
@sirlight-ljij
A year ago
D is an unbounded operator, so the geometric series requires some assumptions to be made for it to converge
@PennyAfNorberg
A year ago
@@sirlight-ljij I guess that's why the solution was sketchy, and I started thinking about how to check that |D| < 1
I would love to see what happens when you choose different constants for the different derivatives, e.g. y = sum {from k=1 to inf} 1/k y^(k). Also it would be fun to plug in some crazy sequence as constants, i.e. define a_k to be the kth digit of pi and calculate y = sum a_k y^(k)
Hey, Michael! So for the general case of the follow up question, we would have: y+y'+...+y^(n)=2*(y+y'+...+y^(n))^(n+1)
I would've never looked at that and gone "wow that's a geometric series!" Haha
Elite thumbnail
Here's an operator ordering issue. You have to prove D commutes with 1/(1-D) before acting on both the LHS and RHS with 1-D. It is really (1-D)y = ((1-D)D(1-D)^(-1))y.
@jamiewalker329
A year ago
Err, that's trivial, the commutator of any function of an operator with any other function of that same operator is 0. Non trivial commutation relations come from operators being distinct, or distinct components of vector operators.
@reeeeeplease1178
A year ago
You can "factor" a D out from the series *to the right* and then use the geometric series trick to avoid this problem
@jiantaoxiao2481
A year ago
@@jamiewalker329 yes. You are right. [f(D), g(D)]=0
@jiantaoxiao2481
A year ago
@@reeeeeplease1178 yes. Thanks.
@jiantaoxiao2481
A year ago
f and g has D^n as basis and D^n's coefficient should be constant.
I think some insight for the question at 7:23 is that the differential equation with a finite number of terms n corresponds to a characteristic polynomial of degree n that has n roots, whereas the infinite one's characteristic function is a power series, which can have a single root
As a general solution to the problem around 9:00 : For n terms on the left, the functions satisfying the equation are y = C * e ^ ( ( (1/2) ^ (1/n) ) * x )
What would happen if you had an alternating infinite sum of derivatives, or one with different coefficients?
Extending the case with finite n to solutions of the form y=e^(ax), you get 1=a+a^2+...+a^n. In the limit as n->\infty you get a=e^(i\phi), where 0 < \phi < 2\pi
For a finite number of terms, can you use the formula for the sum of a finite geometric series and manipulate the equation that way?
When you started the "sketchy solution" I thought that you were going to start grouping from later in the equation, something like noting that y=y′+y″+(terms of the original expansion)″ and then getting the spurious solution family y=ce^−x, which if back-substituted results in basically saying that Grandi's series converges to −1; related to that, if you group it off after the nth derivative, you get an equation with characteristic polynomial 2r^n+r^(n−1)+r^(n−2)+…+r^2+r−1, which factors as (2r−1)(r^(n−1)+r^(n−2)+…+r^2+r+1), and the zeroes are ½ and the roots of unity other than 1, corresponding to spurious solutions equating 1 to the sum of a divergent series with terms that oscillate around the unit circle.
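The factorization 2r^n + r^(n-1) + ... + r - 1 = (2r-1)(r^(n-1) + ... + r + 1) used here can be verified mechanically; a small sketch with a hand-rolled polynomial multiply (n = 6 is an arbitrary choice):

```python
def poly_mul(p, q):
    # Multiply polynomials given as coefficient lists, constant term first.
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

n = 6  # arbitrary choice
# 2r^n + r^(n-1) + ... + r - 1, constant term first:
char_poly = [-1] + [1] * (n - 1) + [2]
# claimed factorization (2r - 1)(1 + r + ... + r^(n-1)):
factored = poly_mul([-1, 2], [1] * n)
print(char_poly == factored)  # True
```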
How do you know the sum on the right hand side converges? If you are working with a domain of real numbers for y, the sum should diverge if x is positive, which makes me feel like this is a sort of Ramanujan-tier cheat-code solution. Of course I still think it means something, just not the whole picture… if we take y(0)=0 then the Laplace transform will converge for |s|
I did it in a less elegant way: since this is a homogeneous differential equation with constant coefficients, you assume the solution is of the form ce^(rx). Differentiating this solution and dividing by ce^(rx) (it can never be 0), you get 1=r+r^2+r^3+... Adding 1 to both sides gives 2=1+r+r^2+r^3+...=1/(1-r) (valid for |r| < 1), so 1-r=1/2, i.e. r=1/2, giving y=ce^(x/2)
How can we be sure about the convergence of (1/(1-D))y? How is it even defined?
Hi, Michael. For the general differential equation, I am getting two solutions. Either y can be ce^x or it can be a polynomial of degree (n+1) with the coefficient of the highest power being 0.5/(n+1)!.
Can someone explain the reasoning behind the sum of the powers of the differential operator converging? It doesn't seem intuitive to me.
The question I thought of as soon as I saw it was: y = y'/1! + y''/2! + y'''/3! + ... So a Taylor-series-looking differential equation. Possibly an application of your "what's exp(D)" result from another video?
@Kapomafioso
A year ago
I also thought about that, and about how the argument shifts when exp(D) is applied. The equation essentially becomes f(x+1) = 2f(x): a functional equation, solved by 2^x times any periodic function with period 1, instead of a differential equation. Infinite series of derivatives are weird and exotic like that. Sometimes it's not a differential equation at all, despite looking like one.
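If I'm reading the operator identity right, y = Σ_{k≥1} y^(k)/k! is (e^D - 1)y = y, i.e. (e^D)y = 2y, i.e. y(x+1) = 2y(x); a quick check with the candidate y = 2^x (the sample point and 40-term cutoff are arbitrary choices):

```python
import math

# Candidate y = 2^x: y^(k) = (ln 2)^k 2^x, and the series of derivatives is
# 2^x * (e^(ln 2) - 1) = 2^x, so y = sum_{k>=1} y^(k)/k! should hold.
x = 0.4
y = 2 ** x
series = sum(math.log(2) ** k / math.factorial(k) * y for k in range(1, 40))
print(abs(y - series))            # ~0: the derivative series reproduces y
print(abs(2 ** (x + 1) - 2 * y))  # ~0: y(x+1) = 2y(x) holds for this candidate
```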
It doesn't seem that the result is the standard result from the geometric series: (1-x^n)/(1-x), which goes to 1/(1-x) as n -> infinity when |x| < 1. Is it different for operators? Also, what does it mean for D to satisfy |D| < 1?
The differential equation essentially becomes 1=1/2+1/4+1/8+1/16...
How could you solve y = sum(d^i y /dx^i) where the sum is taken over only prime indices i? i.e., the RHS is the sum of prime-th derivatives of y
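[Editor's note] One way to poke at this: assuming an exponential ansatz y = e^(rx) (my assumption; nothing guarantees these are the only solutions), the prime-derivative equation reduces to the sum over primes p of r^p equaling 1, and a bisection finds a real root in (0, 1):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

PRIMES = [p for p in range(2, 200) if is_prime(p)]

def f(r):
    # truncated sum of r^p over primes; the tail past p = 200 is negligible for r <= 0.9
    return sum(r ** p for p in PRIMES) - 1.0

lo, hi = 0.0, 0.9  # f(0) = -1 < 0 and f(0.9) > 0, so a root is bracketed
for _ in range(80):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

r = lo  # a real root near 0.68
```

This only finds the real exponential solution; complex roots (and any non-exponential solutions) need more care.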
Do you think it would be possible to use the Fourier transform to solve this?
So... will there be some follow-up videos? Or we are just left with these questions that will never be answered??
I don’t know if this has been answered or not already, but one way to look at the non-existence is via the Fourier transform (a favorite for constant-coefficient linear ODEs). After some manipulation, you can see that the solution must solve \Lambda^{n+1} = 2\Lambda - 1. Now suppose n goes off to infinity. We break the search for roots into three options: the modulus of lambda is greater than, equal to, or less than one. In the greater-than case, we cannot solve this, as the left-hand side is much, much bigger than the right. In the equal-to case, the left-hand side does not have a limit, so what do we even mean! In the less-than case, the term tends to 0, so 2\Lambda - 1 = 0, which recovers our start. Here's a follow-up: is there a distribution of solutions around the unit circle that this approaches? Is there a meaningful "distribution of other oscillatory solutions at infinity"? Great video! It’s fun to see the resolvent pop up in the sketchy side!
That was a very interesting way to solve the problem.
How do you know, that |D| < 1 for convergence?
That was sick!
I haven't worked this out, but I see a common element between the infinite differential problem and the finite problem. The solution to the infinite differential equation can in fact be written as a linear combination of functions y_i, if you were to expand the exponential C exp(x/2) as a Taylor series. My suspicion is that the solution to the finite differential version of this would just be the n-term Taylor expansion of the exponential solution. But I can't be sure without working it out.
For the case y + y' + ... + y(n) = y(n+1) + ... you get, by the sketchy solution,
y + y' + ... + y(n) = (D^[n+1]/(1-D)) y
(1-D)(y + y' + ... + y(n)) = y(n+1)
Notice that the LHS telescopes, giving only
y - y(n+1) = y(n+1)
or in other words, y(n+1) = y/2, which has the solution set y = C exp[x/α] where α = 2^[1/(n+1)] * exp[2ikπ/(n+1)] for all integers k such that 0 ≤ k ≤ n
@lunstee
A year ago
Careful with the telescoping; it only works correctly on the RHS infinite series when abs(D) < 1.
Given that the geometric operator (partial) sums have a (1-D)^-1 in the denominator, take a look at what happens if you apply (1-D) to both sides of the equations: In the question, you get y - y' = y' - y^(n+1), or y = 2y' - y^(n+1). Looking at this as a matrix system of differential equations, you can solve it to get the n linearly independent solutions. In the follow-up, you go from y+y'+...+y^(k) = y^(k+1)+... to y - y^(k+1) = y^(k+1). But this is just y = 2y^(k+1), which can also be solved as a system of equations: C_0 e^(r_0 x) + ... + C_k e^(r_k x). Afterwards you would still need to show that these constructed solutions actually solve the original system.
i love how you can also easily build even and odd parts of this equation: the even part y_even = [y(x)+y(-x)]/2 satisfies y_even = y''_even + y''''_even, and so on
A nice (and seemingly related) parallel: The polynomial 1 = sum_{j=1}^n x^j has degree n and so has n (possibly complex) solutions. But when we take the infinite sum, 1 = sum_{j>=1} x^j = x/(1-x) for |x| < 1, we only get one solution, not infinitely many.
10:14 Wait, what happened? He just completely ignored the remainder from D^4 y to D^N y. If y + D^1 y = D^2 y + D^3 y + D^4 y + … + D^N y, then why just entirely drop the 4th derivative etc.?
@stewartcopeland4950
A year ago
it's more like y + y' = 2 * (y'' + y''')
@CISMarinho
A year ago
As @stewart said: y’’ + y’’’ + y⁽⁴⁾ + y⁽⁵⁾ + … = (y + y’ + y’’ + y’’’ + …)’’ = (y + y’ + (y + y’))’’ = 2(y + y’)’’ = 2(y’’ + y’’’)
The 'sketchy' approach is probably made a bit more formal by taking the Laplace transform of both sides. The result is then that Y = (s / (1 - s)) Y, and the solution follows by multiplying through by (1 - s) and taking the inverse transform.
This also permits us to consider solutions to y + y' + ... + y(n) = y(n+1) + ... (where y(k) is the kth derivative of y), since we would have:
((1 - s^(n+1)) / (1 - s)) Y = (s^(n+1) / (1 - s)) Y
Rearranging, Y = 2 s^(n+1) Y, and transforming back: y = 2 y(n+1). The resulting basis of n+1 functions is e^(a_k x) for k = 0..n, where the a_k are the n+1 complex (n+1)-th roots of 1/2 (a real basis also exists). The case solved in this video was n = 0.
There are two assumptions made here. First, that the solution y has a Laplace transform and, second, that the resulting geometric series converges (i.e., |s| < 1); there is then a single s = 1/2 for which s + s^2 + ... = 1.
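[Editor's note] A small numeric sketch of that basis (n = 4 and the tolerance are my choices): the exponents are the n+1 complex (n+1)-th roots of 1/2, all of modulus 2^(-1/(n+1)) < 1, which is what makes the geometric-series manipulation on the transform side legitimate.

```python
import cmath

n = 4  # the case solved in the video is n = 0

# the n+1 complex (n+1)-th roots of 1/2
roots = [
    (0.5 ** (1 / (n + 1))) * cmath.exp(2j * cmath.pi * k / (n + 1))
    for k in range(n + 1)
]

for a in roots:
    # y = e^(a x) gives y^(n+1) = a^(n+1) y, so y = 2 y^(n+1) iff a^(n+1) = 1/2
    assert abs(a ** (n + 1) - 0.5) < 1e-12
    assert abs(a) < 1  # inside the unit disc, so the series manipulations converge
```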
On the follow-up question, in the video you shifted the equal sign to the nth sum. Now, can we do this indefinitely, shifting the equal sign to the right to somehow "inverting" the sum of derivatives? y + y' + ... + y^(n) = y^(n+1) + y^(n+2) + ..... to maybe lim m -> infinity { y + y' + ... + y^(m-1) = y^(m) ..... } ?
When you move the equals sign around you don’t actually change the problem much. If our cut off is y+y’+…+y^(n) = y^(n+1) + … Then let g=y+y’+…+y^(n), and rewrite the RHS in terms of derivatives of g.
Brilliant!
Off the top of my head my guess was exp(x/2)
@skvortsovalexey
A year ago
C*exp(x/2)
@kennethvalbjoern
26 days ago
Me too. It's 25+ years since I did a differential equation, so I missed the c.
The first method I thought of was neat. I used geometric series, but I didn't see a need to go through all that operator business. Something we learned in diffeq is that any linear differential equation with constant coefficients will have solutions of the form A*exp(mx). Making this substitution into the equation we get 1 = m + m^2 + m^3 + .... The right-hand side is very close to a geometric series, which has the sum 1 + r + r^2 + r^3 + ... = 1/(1-r), so if we subtract 1 from both sides we get r/(1-r) = r + r^2 + r^3 + .... Subbing this into our equation we get 1 = m/(1-m). The only value that gives us a solution is m = 1/2. Thus the solution is y = C*exp(x/2).
How do you know that series converges? It is a linear operator, but that is not an obvious assumption.
Love that thumbnail, lol
That's just a frequency domain transform and a reverse, right? For the sketchy part you'd just have to worry about convergence before using it.
Note that we have a similar case for ordinary algebraic equations: the equation 1+x+x^2/2+..+x^n/n!=0 has n complex solutions, but if we take the limit we get an equation with no solutions.
Thank you sir, really helpful 🙏🇮🇳
The answer to the question is simple. Look for a trial solution y = exp(mx), and you'll end up with a polynomial equation, demonstrating that there are a finite number of solutions. You can't do this for an infinite series. My first thought was to take Fourier transforms.
C can also be complex. Since all of the terms are positive (except y), the vast majority of the characteristic equation roots are complex and the solutions oscillate. The infinite case has an infinite series as its characteristic equation and all of the coefficients (except a_0=-1) are +1. This infinite set of complex roots may well provide a corresponding infinite set of linearly independent solutions, but I suspect that very few will be useful.
On the first follow-up question y + y' = y{2} + y{3} + y{4} + ...: taking the second derivative on both sides we get y{2} + y{3} = y{4} + y{5} + y{6} + ..., and hence y + y' = 2 * (y{2} + y{3}) (this factor 2 was missing in the video). By substituting z for y + y' we get z'' = 1/2 * z and therefore a solution z = c * e^(x/sqrt(2)). A simple real solution that solves the substitution, and therefore the original equation, is y = c * e^(x/sqrt(2)).
@59de44955ebd
A year ago
Concerning the general equation y + y{1} + ... + y{n} = y{n+1} + ..., if we substitute z for y + y{1} + ... + y{n}, we get z{n+1} = 1/2 * z, and y = c * e^(x/2^(1/(n+1))) is always a (trivial) solution.
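[Editor's note] The n = 1 instance of this (y + y' = y'' + y''' + ...) is quick to verify numerically; a sketch (tolerances mine):

```python
a = 2 ** -0.5  # exponent for y = c * e^(x / sqrt(2))

# with y = e^(a x), the substitution z = y + y' gives z'' = z/2,
# which here reads (a^2)(1 + a) = (1 + a)/2, i.e. a^2 = 1/2
assert abs(a ** 2 - 0.5) < 1e-12

# and the original infinite equation y + y' = y'' + y''' + ... becomes
# 1 + a = a^2 + a^3 + ... = a^2 / (1 - a), which converges since |a| < 1
assert abs((1 + a) - a ** 2 / (1 - a)) < 1e-12
```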
This is refreshing to learn around exposure to the Riccati equation.
I love the operational method. Heaviside would approve.
How do we know that the infinite sum converges?
For the finite sum, I get C exp(ax) as a solution, where a is a solution to the polynomial 2a - a^(n+1) - 1 = 0. I get this by noting y' = y'' + y''' + ... + y[n+1], so we have y = 2y' - y[n+1]. If you use y = C exp(ax) then you get C exp(ax) = 2a C exp(ax) - a^(n+1) C exp(ax), or... 1 = 2a - a^(n+1) = a(2 - a^n). In the limit as n goes to infinity, it requires |a| < 1, which leaves 1 = 2a, i.e. a = 1/2.
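[Editor's note] The drift of the finite-n root toward 1/2 is easy to watch numerically; a sketch (the bracket, iteration count, and sampled n values are mine):

```python
def finite_root(n):
    # real root of 2a - a^(n+1) - 1 = 0 near a = 1/2, by bisection
    g = lambda a: 2 * a - a ** (n + 1) - 1
    lo, hi = 0.4, 0.6  # g(0.4) < 0 < g(0.6) for n >= 3
    for _ in range(60):
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return lo

# the extra term a^(n+1) fades, pushing the root toward the infinite case's 1/2
gaps = [abs(finite_root(n) - 0.5) for n in (3, 6, 12, 24)]
assert gaps == sorted(gaps, reverse=True)  # strictly shrinking
assert gaps[-1] < 1e-7
```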
I have a pretty hand-wavy explanation for the uniqueness of the solution; for something more precise you might need to start thinking harder about which functions we are talking about. So for finite n you would solve the equation by the substitution y = Exp(Ax). The characteristic equation is 1 - 2A + A^(n+1) = 0 (where you should discard A = 1). It's easy to see that in the limit as n goes to infinity there's a unique solution with |A| < 1, namely A = 1/2.
wait what? you can do those operations on operators?
Are there any physical problems where this kind of differential equation appears in physics or higher-order math theory?
Oh, I've got one! what about y = y'' + y''' + y(5) + ... where the primes are all prime (2,3,5,7, etc). Will that question wrap back to the Riemann zeta function?
I had a fantasy image in my head that looks like this: " (derivative[sqrt(10)timesover] y) + (derivative[10timesover] y) + (derivative[10sqrt(10)timesover] y) + (derivative[100timesover] y) + ... " and soon enough I knew it was time to try this video. These videos are good for those that do and don't listen alike. I'm sure you probably prefer the people that do or are more likely to listen; just wanted to let you know that I thought of you and/or your channel in a sincere way. Also, I think x/2 shows up in your video because of the way the inherent limit would work as the tally marks approach infinity in 3 or more different ways. I'm not a calculus expert. That is just what I think.
9:20 Is there a nice solution? My answer is "yes". Just try a function of the form C*exp(k*x). You get the equation: 1 + k + k^2 + ... + k^n = k^(n+1) + k^(n+2) + k^(n+3) + ... Take k^(n+1) as a common factor from the right side: 1 + k + k^2 + ... + k^n = k^(n+1)*(1 + k + k^2 + ...) Apply the formula for the geometric sum on the left and that of the geometric series on the right: (1 - k^(n+1))/(1 - k) = k^(n+1)/(1 - k) Cancel out the common denominator and rearrange to get the following equation: k^(n+1) = 1/2. The solutions for k are just (1/2)^(1/(n+1)) times the appropriate roots of unity. Technically, I've not proven that there aren't solutions that aren't of exponential form, but that seems pretty intuitive.
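[Editor's note] Those roots really do satisfy the original equation, not just the rearranged one; a brute-force check summing both sides with one of the complex roots (n, the root choice, and the tail cutoff are mine):

```python
import cmath

n = 2
# one complex solution of k^(n+1) = 1/2
k = (0.5 ** (1 / (n + 1))) * cmath.exp(2j * cmath.pi / (n + 1))

lhs = sum(k ** j for j in range(n + 1))        # 1 + k + ... + k^n
rhs = sum(k ** j for j in range(n + 1, 2000))  # k^(n+1) + k^(n+2) + ...

# |k| < 1, so the truncated tail beyond j = 2000 is utterly negligible
assert abs(lhs - rhs) < 1e-10
```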
Is it not simpler to take the derivative of the equation (y' = y'' + y''' + ...), then subtract these two equations to get y - y' = y', most terms vanishing, thus y' = (1/2) y, y = C e^(x/2)? Convergence remains to be proven but should be quite simple. Does this show that there can't be another family of solutions? (The trivial case of y = 0 (constant) is covered by C = 0.)
Hey Michael, for the first method - doesn't the sum law for derivatives only hold for finite sums? This method seems like it needs further justification.
Differentiation is so weird that it kinda just acts normally even when you treat it as fractions or exponents
Is there a justification that the geometric series formula "seems to work" with a differential operator ?
@DTDTish
A year ago
Not a mathematician, but my guess is that it is linear We can also just plug in y=Ae^(kx) like we do for all constant coefficient linear ODEs, so we have y=y' + y'' + ... Gives us the characteristic equation 1=k+k^2+... And use geometric sum from there. This basically does the same thing as the linear operator method, but a bit more simple (adding numbers instead of operators)
@guerom00
A year ago
@@DTDTish yeah... Somehow, i don't have a problem with an object like exp(D) cause this series has an infinite radius of convergence. Here, i try to wrap my head around what a finite radius of convergence for this series means when applied to differential operators :)
I have never heard someone say "and so on and so forth" before
For class II, using the same method as first used: y + y' = y'' + D(y+y') = 2y'' + y', so y = 2y'', with solution y = C exp(x/√2) + D exp(-x/√2). The two independent parts arise because we implicitly involve the second derivative. Note the exponent factors 1, -1 are the square roots of 1. The next class produces y = 2y''', with solution y = C exp(x/2^(1/3)) + D exp([]x/2^(1/3)) + E exp([]x/2^(1/3)), with [] the other cube roots of 1: -1/2 ± √3/2. Three independent parts due to a third derivative. And so forth ...
@garyknight8966
A year ago
Oops .. the last [] factors I meant to be complex: -1/2 ± i√3/2 (of course). So these involve trigonometric functions (the even or odd components of exp(iθ)).
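[Editor's note] All three exponent factors in this thread pass a direct check; a tiny sketch (the tolerance is mine):

```python
import cmath

# exponents for y = 2*y''' are the cube roots of 1/2:
# (1/2)^(1/3) times each cube root of unity
cube_roots_of_unity = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

for w in cube_roots_of_unity:
    a = (0.5 ** (1 / 3)) * w
    assert abs(a ** 3 - 0.5) < 1e-12   # solves y = 2*y''' for y = e^(a x)
    assert abs(a) < 1                  # so the original infinite tail converges
```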
The similarities between this diffeq and the power series of exp() is interesting.
Well, I got the initial solution by realizing that an exponential function will produce a geometric sum converging to 1; that's how I got the 1/2. From there I realized that any parameter smaller than 1 would make a converging geometric sum, and you can just subtract whatever the sequence converges to and add 1 to balance the equation (the constant will disappear after the first derivative).
why does that D manipulation work?
We can also just plug in y = Ae^(kx) like we do for all constant-coefficient linear ODEs, so y = y' + y'' + ... gives us the characteristic equation 1 = k + k^2 + ... We know that the geometric series is 1 + k + k^2 + ... = 1/(1-k), which is the RHS plus 1. So we have 1 = 1/(1-k) - 1. We get k = 1/2, or y = Ae^(x/2). The video did something very similar, but with operators.
2nd solution: A) Why can we assume D represents a square linear transformation such that its power series makes sense? B) How can we justify that the geometric series transformation (for |base| < 1) is valid for such matrices?
@DavidSavinainen
A year ago
This is precisely why he called it a sketchy method
My first intuition of the solution (as someone who doesn't like to think too much) was "what if it was some exponential function whose power was a series that converged to 1 on the range 1 to infinity?"