Tensor Calculus 13: Gradient vs "d" operator (exterior derivative/differential)

Tensors for Beginners 16 video on Raising/Lowering Tensor Indexes: • Tensors for Beginners ...
(This was re-uploaded with a correction. I originally wrote out the gradient in spherical coordinates incorrectly.)

Comments: 150

  • @Gismho (2 years ago)

    Yet another FIVE STAR explanation. Thank you. Extremely instructive, most interesting and very well presented with good diagrams.

  • @virati (5 years ago)

    You've really got a gift. Let us know if we can support you somehow, would be great to do our part to keep these going!

  • @eigenchris (5 years ago)

    Thanks. I am strongly considering making a Patreon. If I do, I'll upload a video announcement.

  • @kansuerdem2799 (4 years ago)

    What can I say? Thank you, like the rest? ... no, no ... I have never learned so much in such a short time... I am starting to believe that I am really smart... :) You are an "eigenvalue" of teaching.

  • @moardehali (11 months ago)

    Perhaps the greatest teachings on tensors. How superior these videos are compared to tensor videos from universities such as Stanford or MIT.

  • @alberto1854 (3 years ago)

    The best virtual class I have ever watched. Your entire course is just fantastic. Thanks a lot for sharing your deep understanding of such quite abstract concepts.

  • @twistedsector (4 years ago)

    this is some god-tier teaching right here

  • @user-vm9zt6tm3h (5 years ago)

    The journey to the planet of tensors is still going on smoothly. Houston, I think we do not have a problem. The captain is the best. Roger and out.

  • @adityaprasad465 (4 years ago)

    LOL. I think you mean "over and out" though.

  • @josedanielbazanmanzano9607 (4 years ago)

    You're among the greatest of edutubers, my man; congrats on this wonderful series.

  • @fernandogarciacortez4911 (3 years ago)

    What a great video indeed. I have learned a fair share of differential geometry up to this point, but all my relativity books lack an explanation/clarification of this subject, the differential operator. I thought I was going to be fine leaving one of my books aside (Tensors, Differential Forms, and Variational Principles by Lovelock) since I was already reading Kreyszig's book on differential geometry, but they each have their strengths. As mentioned in this video, I looked for the video on Raising and Lowering indices and ended up making notes for:
    - What are covectors (Tensors for Beginners 4)
    - Tensor product (Tensors for Beginners 15)
    - Lowering/Raising (Tensors for Beginners 16)
    - Differential forms are covectors (Tensor Calculus 6)
    - Covector field components (Tensor Calculus 7)
    - Covector field transformation rules (Tensor Calculus 8)
    - Integration with differential forms (Tensor Calculus 9)
    - and finally, the 13th video in the tensor calculus playlist, which is what brought me here; I will finally be able to take proper notes on it.
    Thanks a LOT, Eigenchris. I will surely go back and check out the other videos later; they are such a great companion to normal textbooks. Between the color-coded letters and the ease of explanation, it's just perfect. Rethinking/redefining old concepts is what makes these more advanced subjects a bit more complicated. We were told X thing has this and that. Your 'motivations' are awesome, man. I'm sure many professors lack this understanding of the concepts. What a gift to your viewers.

  • @luckyang1 (5 years ago)

    Perfect ! As every single video you made on the subject. I have never read anything so clear and detailed about the relationship of the different operators.

  • @philwatson698 (5 years ago)

    Loved this too. Great ! Sorry - my public comment button won't work so I have had to put this as a reply to someone else's - sorry.

  • @chimetimepaprika (4 years ago)

    Don't get me wrong; I like Khan and 3B1B and many others, but EigenChris has my favorite teaching and visual style.

  • @mtach5509 (2 years ago)

    I love your lecture. You are a very good teacher and also show deep understanding.

  • @animalationstories (5 years ago)

    I can't thank you enough; you are a gem.

  • @hushaia8754 (3 years ago)

    Excellent videos! These videos really clear the fog and I'm a Math/Physics teacher (I have never studied GR). I agree with Diego Alonso's comment!

  • @operatorenabla8398 (3 years ago)

    This is just incredibly clear

  • @xueqiang-michaelpan9606 (3 years ago)

    I feel I finally understand Gradient. Thank you so much!

  • @gamesmathandmusic (1 year ago)

    Accidentally hit dislike but corrected it to a like. Love the series.

  • @CoffeeofMPSWG101 (3 months ago)

    Well-balanced class, thank you. Hope there are more soon.

  • @TmyLV (5 years ago)

    great tensor lessons

  • @anthonysegers01 (4 years ago)

    Beautiful! Thank You.

  • @danlii (2 years ago)

    Hi! I want to say I love your videos. One question: in video 11 you said that the metric tensor in polar coordinates is the identity when one uses the normalised definition of the theta basis vector. So, shouldn’t that mean that the gradient in polar coordinates should look the same as in cartesian coordinates when one uses the normalised definition of the theta basis vector?

  • @eigenchris (2 years ago)

    If you start artificially normalizing basis vectors to length 1, you can no longer use the assumption that "basis vectors = tangent vectors along coordinate curves". So you can no longer expand a vector using multivariable chain rule without adding extra "fudge" factors. The "r" factor is one of the 'fudge factors' you would need to add when expanding vectors in a basis like this. So the formula ends up being the same.
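
    For reference, a short worked version of this point in LaTeX notation, assuming the standard polar metric g_rr = 1, g_θθ = r^2 (my example, not from the video):

      \nabla f = g^{rr}\frac{\partial f}{\partial r}\,\vec{e}_r + g^{\theta\theta}\frac{\partial f}{\partial \theta}\,\vec{e}_\theta
               = \frac{\partial f}{\partial r}\,\vec{e}_r + \frac{1}{r^2}\frac{\partial f}{\partial \theta}\,\vec{e}_\theta
               = \frac{\partial f}{\partial r}\,\hat{r} + \frac{1}{r}\frac{\partial f}{\partial \theta}\,\hat{\theta},

    where the last step uses the normalized basis r-hat = e_r and θ-hat = e_θ / r. The familiar 1/r "fudge factor" shows up whether or not the basis is normalized.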

  • @robertprince1900 (1 year ago)

    I think confusion arises because you look at the basis and components separately to see whether a covariant or contravariant transformation applies, whereas your students are defining the whole VECTOR as one or the other depending on how the basis transforms. You also use "covariant vector field" for what looks like a scalar, df, but as defined by piercings it must be a covector, since it eats a vector; still, it is strange that its basis elements are scalars too.

  • @mjackstewart (3 years ago)

    You’re doing the Lord’s work with these series! Bravo! I do have one question. I’m getting better at canceling the superscripts and subscripts, but I don’t immediately recognize when I’ve got the Kronecker delta. In which situations does this happen? Or am I lacking an intuitive knowledge to see when this is actually happening?

  • @eigenchris (3 years ago)

    Is there a particular example you can point to in this video or another video where it's not obvious to you? I just think of the Kronecker delta as the "identity matrix". It's what you get when you multiply a matrix with its inverse. (or in Einstein notation, when you sum matrix components with its inverse matrix components.)

  • @mjackstewart (3 years ago)

    @@eigenchris It's at 19:07 and, ironically, where you multiply the vector metric tensor with the covariant metric tensor. I get that those should produce the identity matrix, or I guess, more properly, the Kronecker delta. However, I'm struggling to understand how the g(small)ij subscripts and the g(Fraktur)jk superscripts merge to form the Kronecker delta with ik subscripts and superscripts. Superficially, this occurs because of cancellation. I'm struggling with the intuition that the i column of the vector forms the identity matrix with the k row of the covector. Does what I'm saying make any sense? I'm really rusty with my linear algebra, and tensors are new to me, especially Einstein notation. Also, is there a video you recommend on the subscript/superscript cancellation rules for Einstein notation? Thank you, Jedi master Obi-Wan! You ARE my only hope! And thank you so infinitely much for responding after all this time!

  • @eigenchris (3 years ago)

    @@mjackstewart I think my "Tensors for Beginners 16" is where I introduce the idea of raising/lowering indices, and the inverse metric. The g with upper indices is DEFINED as the tensor that will sum with lower-index-g to give the kronecker delta. In the language of matrices, the g-upper matrix is the inverse of the g-lower matrix. It might be good to review the formula for the inverse of a 2x2 matrix if you're not familiar with that (just google "2x2 matrix inverse" to find the formula) and maybe as an exercise: invent 3-4 matrices, calculate their inverses, and multiply them by the original to see that they give the identity matrix.
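
    A minimal numpy sketch of the exercise suggested above, using the 2D polar metric as the sample matrix (my choice; any invertible matrix works the same way):

      import numpy as np

      r = 2.0                                   # sample radius
      g_lower = np.array([[1.0, 0.0],           # polar metric g_ij = diag(1, r^2)
                          [0.0, r**2]])
      g_upper = np.linalg.inv(g_lower)          # inverse metric g^ij

      # summing g^ij g_jk over j (Einstein summation) gives the Kronecker delta
      delta = np.einsum('ij,jk->ik', g_upper, g_lower)
      print(delta)                              # [[1. 0.], [0. 1.]] -- the identity matrix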

  • @mtach5509 (1 year ago)

    Another way to see the duality related to the gradient: times the position vector, which gives the direction along the position vector or along the covector, i.e. the gradient. But I think it is mostly used with reference to the direction of the differential position vector, i.e. dx and dy, as it is also a unit vector, and the meaning (again, in the eye of the beholder) is the gradient (a number) in the unit dx, dy vector.

  • @michalbotor (5 years ago)

    beautiful!

  • @khalidibnemasood202 (5 years ago)

    Hi, at 10:19 you write df = (del_f/del_ci)*dcj. Is that correct? Shouldn't the indices on the c match? That is, I think it should be df = (del_f/del_ci)*dci. In any case, your videos are awesome. Are you planning to make a series on differential geometry? Thanks for your good and hard work.

  • @khalidibnemasood202 (5 years ago)

    Ah, I see that it's corrected at 10:42. I was just confused. Thanks.

  • @robertprince1900 (1 year ago)

    (Also thanks for great content!!)

  • @harrisonbennett7122 (1 year ago)

    Excellent

  • @orchoose (3 years ago)

    I was studying using Gravitation by MTW and it can get really confusing. IMO they use "gradient" in the sense that it's the GENERAL gradient: in flat space the Levi-Civita connection coefficients are zero and the covariant derivative becomes the gradient, in the same sense that relativistic equations reduce to the Newtonian ones at low speeds.

  • @JgM-ie5jy (5 years ago)

    This lecture seemed less exciting than the previous one. Even though it is less inspiring, you managed to put in an absolute gem: reconnecting with the traditional way of defining the gradient as a linear combination of partial derivatives. This is yet another vivid example of teaching instead of mere telling.

  • @drlangattx3dotnet (4 years ago)

    At 9:03 you say this is the formula for the directional derivative. Do you mean for the components of the directional derivative?

  • @m.isaacdone5615 (5 years ago)

    Thanks bro !

  • @lt4376 (3 years ago)

    15:00 Big point here, in fact I write my normalized basis/unit vectors with a ‘hat’ to emphasize their magnitude is 1, as opposed to an unnormalized basis/unit vector.

  • @mtach5509 (1 year ago)

    The most important property of a covector is that it is aligned 90 degrees to its paired vector; both start from the coordinate origin (0,0).

  • @Mysoi123 (1 year ago)

    Thanks!

  • @hotchmery (1 year ago)

    I think there's a mistake at 10:44, the starred equation should have repeated indices instead of i and j. Great video, thanks!

  • @robertprince1900 (1 year ago)

    Typically the gradient is covariant (unless you want to use the metric to make it contravariant) and you dot it with a vector to get the scalar df/ds, or just df if you want to leave out the denominator. It seems like you are taking the df part and simply defining an operation df(v), where you plot out the level sets of f and count piercings to get the same answer as grad f dot v. I'm not sure what that accomplishes, or am I missing something? Is it just a reimagining using "piercings" instead of "most in the direction of the gradient" to envision the result?

  • @robertforster8984 (3 years ago)

    You are amazing. Do you have a patreon page where I can make a donation?

  • @eigenchris (3 years ago)

    I have a ko-fi page here: ko-fi.com/eigenchris Thanks!

  • @vajis4716 (1 year ago)

    11:40 Can I simply divide the equation by the metric tensor instead of multiplying by the inverse metric tensor? Is it possible to do such a manipulation in Einstein summation notation, like with normal equations from elementary and high school? Thanks.

  • @eigenchris (1 year ago)

    It's not that simple. You have to remember that Einstein notation represents a sum (in 3D, you could write it out as a sum of 3 individual terms, each with a different metric component). Since each term in the sum has a different metric component, you can't just "divide" to get rid of them. You need to use the special rule with the inverse metric.
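
    For reference, here is the 2D case written out in LaTeX notation (my illustration, not from the video). Lowering the index of the gradient gives two equations, each mixing both unknown components:

      \frac{\partial f}{\partial x^1} = g_{11}(\nabla f)^1 + g_{12}(\nabla f)^2, \qquad
      \frac{\partial f}{\partial x^2} = g_{21}(\nabla f)^1 + g_{22}(\nabla f)^2,

    so there is no single factor to "divide" by. Contracting with the inverse metric is what undoes the sum:

      g^{ki}\frac{\partial f}{\partial x^i} = g^{ki}g_{ij}(\nabla f)^j = \delta^k_j(\nabla f)^j = (\nabla f)^k.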

  • @gaiuspliniussecundus1455 (1 year ago)

    Great videos, and courses in fact. Any good books on tensor calculus for beginners? A companion to these videos?

  • @eigenchris (1 year ago)

    Sorry, but I can't recommend any. I learned from lots of random articles and pages.

  • @KuroboshiHadar (2 years ago)

    Hello, this might be lost because of how long it's been since the vid was posted but I'm a little confused with your use of notation... Like, previously it was more or less defined that del/del(x) (for example) is a basis vector in the cartesian coord system (in the x direction) and that dx is a basis covector field in the x direction. Yet, in this video, you use d(something) as covector basis, but e(something) as vector basis, instead of the usual del/del(something)... Are those two the same and you used e(something) for the sake of clarity or is there some difference I'm missing?

  • @eigenchris (2 years ago)

    ∂/∂x and e_x are different notations for the same thing in tensor calculus. Does that clear things up?

  • @KuroboshiHadar (2 years ago)

    @@eigenchris It does, thank you! Surprising you still replied after all this time, thank you very much =D

  • @observer137 (4 years ago)

    I find an asymmetry that makes me uncomfortable and confused. At 16:18, why does del f have metric tensor components while df does not?

  • @eigenchris (4 years ago)

    df is a covector field. It can be written as a linear combination of basis covector fields like dx and dy using the chain rule. No metric tensor is needed. To convert covector components into vector components, we need the metric tensor components to do the conversion. This is true for all covector/vector pairs. df and del f are just one example.
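
    In symbols, the contrast being described (notation consistent with the series):

      df = \frac{\partial f}{\partial x^i}\,dx^i \quad\text{(no metric needed)}, \qquad
      \nabla f = g^{ij}\,\frac{\partial f}{\partial x^j}\,\vec{e}_i \quad\text{(inverse metric needed to raise the index)}.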

  • @andrewmorehead3704 (2 years ago)

    Does this have to do with Riesz Representation in Linear Algebra?

  • @eigenchris (2 years ago)

    Sorry, but I don't know what that is.

  • @dsaun777 (11 months ago)

    Is this differential operator, d, different from the differentials used in the line element ds^2=g(dx,dx)?

  • @eigenchris (11 months ago)

    I'm honestly not sure what the "d" in "ds^2" really means. I think it's mostly a notational shortcut, telling you that you can replace ds with sqrt(g_uv dx^u dx^v). However, you can interpret the "d" in most integrals as indicating integration over a differential form. I think I show this in video 9 or 10.

  • @armannikraftar1977 (5 years ago)

    Amazing video again. Just wanted to point out that at 10:20, df should equal (partial f/partial c^i)*d(c^i) instead of (partial f/partial c^i)*d(c^j). If not, then I have no idea what we're doing here :D.

  • @Salmanul_ (3 years ago)

    Yes you're correct

  • @tomaszkutek (3 years ago)

    At 13:44, the partial(f)/partial(c^k) are gradient components, but they are covariant. However, vector components should be contravariant.

  • @eigenchris (3 years ago)

    If we want to be technically correct, we should leave that Kronecker delta in there, because both the partial(f)/partial(c^k) and the e_k are covariant, so we need a 2-contravariant Kronecker delta to make the Einstein summation make sense. However, I wanted to show the link with what you'd normally see in a calc 2 or 3 class, so I abused notation somewhat and cancelled the j indices.

  • @tomaszkutek (3 years ago)

    @@eigenchris thank you for explanation

  • @gtf753 (4 years ago)

    Great👏👏

  • @anthonyymm511 (2 years ago)

    I usually write "del f" to mean the 1-form written here as df, to stay consistent with the notation for covariant derivatives. That way the Hessian is written "del del f" instead of the uglier "del df". When I want to talk about the vector instead of the 1-form, I just write "grad f".

  • @ChienandKun (4 years ago)

    Hi there, I'm really enjoying your channel. Excuse this amateur question, but I'm wondering: What is the relationship between Helmholtz's decomposition theorem and the classification of Covariant and Contravariant vectors? One might assume that as a gradient is considered to be a covariant vector (designated with 'upper' indices), and that as you've shown, these vectors are products of acting on scalar fields, and cannot be 'raised' to a higher level tensor by the same operation. Contravariant vectors then, would be assumed to be 'sinusoidal' in nature, since they can be 'curled' into a 2nd rank tensor. My confusion here is that 1. I've never seen anyone make this association explicitly. 2. If, indeed according to Helmholtz every vector consists of a superposition of these two types of vectors, there does not seem to be a way to add covariant vectors to contravariant ones (with the same indices), in tensor notation. Thanks, sorry again for what might be a silly question.

  • @eigenchris (4 years ago)

    I never actually covered Helmholtz decomposition in school. Wikipedia says it's about writing a vector field as a sum of a gradient plus a curl. Is that right? My understanding is that this is a statement purely about vectors (aka contravariant vectors), and doesn't relate to covectors. Adding vectors and covectors is not something you can really do, as they live in different vector spaces. I feel I have not answered your question. Can you be more specific about how you think covectors are related to the Helmholtz decomposition?

  • @ChienandKun (4 years ago)

    @@eigenchris Hi, thanks for the reply. In a lot of the material I've read, a gradient is given as an example of a 1-form, and is expressed with upper indices in the tensor notation (like you're doing here). The fact that the page you cited on gradients only used the word 'vector' does not necessarily indicate it meant 'contravariant vector', since that distinction isn't usually made when del operations are introduced, or in the realm of 3-D vector calculus in general, really. I think I'm trying to close that gap with this identification I'm assuming here. Helmholtz simply says that there are two kinds of vectors. The symmetry of these two kinds of vectors is on display within Maxwell's equations. The static electric field cannot be curled, but does have a divergence, whereas the magnetic field can be curled but has zero divergence. Helmholtz's theorem simply says that any vector is a combination of these two kinds of vectors. Now, since gradients are associated with always carrying upper indices, it's my assumption that lower indices are reserved for the other type of vectors. This should probably mean, too, that if one lowers the indices of a gradient, it should be possible to curl the resulting (contravariant) vector. Adding these two types of vectors is helpful in fixing gauges. The Lorentz gauge involves adding an arbitrary gradient to the magnetic vector potential. But, as I said, and you confirmed, if we are to consider the magnetic vector potential to be contravariant, and the gradient to be covariant, there would be no way to add them in tensor notation. It does seem that vectors in an orthogonal (or orthonormal) basis are superpositions of covariant and contravariant vectors, since they are 'equal'. Although, they are not equal in the sense that the vectors of the contravariant frame can be operated upon through the curl, and the covariant frame cannot. That's as far as I've thought this through lol, thanks for indulging me.

  • @ChienandKun (4 years ago)

    *Correction: according to Wikipedia, covectors are denoted by lower indices, and contravectors are denoted by upper ones. That is, unless you're referring to basis vectors. A lot of the studying I've done has been in Geometric Algebra, and I guess, to be different, they switched this convention. This doesn't really affect my question regarding the nature of the two sorts of vectors, though; I just wanted to point this out in order to mitigate confusion. Sorry.

  • @ChienandKun (4 years ago)

    The article does sort of make my point for me though. By listing velocity, acceleration and jerk as contravariant vectors. All of these vectors may be curled, and have no divergence, as I suspected. The magnetic vector potential is analogous to the velocity, and the acceleration is analogous to the dynamic electric field. In electrodynamics, the two electric fields, static (covariant) and dynamic (contravariant) would be added together. This leads me even more strongly to identify covectors with gradients and contravectors with 'sinusoidal' vectors. en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors

  • @eigenchris (4 years ago)

    @@ChienandKun I'm sort of losing you again, when you suggest that static fields are covariant and dynamic fields are contravariant. Also not sure what you mean by "sinusoidal vectors". Let me take a step back... as you mentioned, when a student first studies vector calculus, they learn div, grad, curl, and they also only learn about vectors (no mention of covectors). All of their calculations focus on 3D space with vectors only. This is important because the cross product and curl operations only make sense in 3D space. The Helmholtz decomposition formula is also only true for 3D space, since it involves the curl. So if you're looking for the standard undergrad vector calculus interpretation of the Helmholtz formula, forget about covectors. Helmholtz decomposition is a statement about vector fields only, and it only works in 3D space. The output of the gradient is a 3D vector field, and the output of the curl is a 3D vector field. The sum of the two is another 3D vector field. No need to worry about covectors at all. Now, as a student studies more math, they will need to generalize vector calculus to higher dimensions. This means that they need to abandon the ideas of "curl" and "cross product", as they don't make sense in 4 dimensions or higher. In order to express the idea of "curls" and "rotations", instead of using the cross product, they will use the wedge product from exterior algebra. The difference between the electric field and magnetic field is not about covariant/contravariant vectors. Instead it is about the part of the exterior algebra that they live in. You might be interested in reading this article to learn more (particularly the parts about geometric algebra and differential forms): en.wikipedia.org/wiki/Mathematical_descriptions_of_the_electromagnetic_field I hope I haven't confused you too much, but in short, I don't think you should try to think of E and B in terms of covariant/contravariant vectors. In a first E&M course, they are just vector fields, full stop. In more advanced E&M courses, they can be treated differently (with exterior algebra).
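
    For reference, the standard 3D statement of the Helmholtz decomposition being discussed is

      \vec{F} = -\nabla\Phi + \nabla\times\vec{A},

    where Φ is a scalar potential and A is a vector potential; both terms on the right are ordinary 3D vector fields, so no covectors are involved.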

  • @stevenhawkins9962 (4 years ago)

    At approx. 10 min 40 s, in the equation at the bottom of the screen, df = (partial f/partial c^j) * ..., why is the denominator partial c^j and not partial c^i? I've guessed they are the same thing in this equation, but I wanted to check.

  • @active285 (3 years ago)

    "c" is the chosen dual basis, so dc^j is just the corresponding 1-form, In "normal" Cartesian coordinates you might know the notation dx^1, ..., dx^n, then it just follows by the definition of the exterior derivative applied to a function f that df = ∂_i f dx^i (with Einstein notation). so here df = ∂f/∂c^j dc^j.

  • @qatuqatsi8503 (1 year ago)

    Hey, at 10:41, is it meant to be ∂f/∂c^j rather than ∂f/∂c^i?

  • @eigenchris (1 year ago)

    Yes, my bad. That's a typo.

  • @MTB_Nephi (5 years ago)

    Please, more computing examples.

  • @eigenchris (5 years ago)

    There's an example of computing the gradient in the next video (video 14).

  • @user-si1zn3ir7x (3 years ago)

    I understand gradient now, but is there a way to define divergence and curl?

  • @eigenchris (3 years ago)

    It requires more study, and learning what the "hodge dual" (star operator) is. Maybe wikipedia can get you started? en.wikipedia.org/wiki/Exterior_derivative#Invariant_formulations_of_operators_in_vector_calculus en.wikipedia.org/wiki/Hodge_star_operator
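
    For reference, the invariant 3D formulas those articles build up to, written with the musical isomorphisms ♭/♯ that convert between vectors and covectors (valid in 3D Euclidean space):

      \mathrm{grad}\, f = (df)^{\sharp}, \qquad
      \mathrm{curl}\, \vec{F} = \left(\star\, d\, F^{\flat}\right)^{\sharp}, \qquad
      \mathrm{div}\, \vec{F} = \star\, d \star F^{\flat}.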

  • @user-si1zn3ir7x (3 years ago)

    @@eigenchris Thankyou!

  • @astronomianova797 (2 years ago)

    I don't think the notation description is quite right (I think the math is fine). The gradient is exactly what MTW's Gravitation defines it to be: a one-form (lower index). It is naturally a one-form (covariant) by its definition. The exterior derivative uses the same notation because a gradient is one example of the more general category of exterior derivative. A gradient takes a 0-form to a 1-form. What takes a 1-form to a 2-form, or a 2-form to a 3-form? The exterior derivative.
    So what is the del operator that everyone calls the gradient (only in 3-D vector calculus)? It is the gradient (simply defined as the partial of some field f with respect to some coordinates; covariant) raised to contravariant by contracting with the metric. So what is being done here is fine, except that what he's actually doing is contracting again with the metric to lower the index, without mentioning that the only way to get a gradient with an upper index (the gradient vector) is to contract it with a metric first. (Said another way: the del operator is not defined in this video. If it were, you would see it must be a partial derivative, with a lower index, then raised by the metric to get an upper index.)
    Edit: yet another way: he should have just started with the equation he ends up with around 12:00.
    Final note: in relativity you don't use the del operator for the gradient, because that is commonly used for something else: the covariant derivative.

  • @drlangattx3dotnet (4 years ago)

    Is there another term for "v dot something"? It sounds a bit strange as a mathematical term.

  • @eigenchris (4 years ago)

    Not that I'm aware of.

  • @thomasclark7493 (4 years ago)

    It's an operator, and it belongs in an operator space. This particular operator is linear and is the dual to the vector v. When I say dual, I mean quite simply that by applying a simple process to the operator, you can obtain the original vector v, and only v. It is in this way that the dual covector belongs to the vector v, and is the unique linear operator which is also the dual to v. In much the same way that every point in R^2 can encode a vector, every point in R^2 can also encode a unique operator which takes vectors and gives a scalar. This operator can be obtained by applying the Hodge star operator to the vector, and vice versa. In G3, this is represented by taking the dot product of the dual of a bivector with a vector, although this is only one way to represent it.

  • @PM-4564 (2 years ago)

    I'm confused because I've read in multiple places that the gradient is a covariant vector, which would seem to indicate that Nabla(F) = df/dx_i * e^i, where e^i is the contravariant basis (contravariant basis for a covariant vector). But now I read on Wikipedia that the gradient uses the covariant basis, which would seem to indicate that it's a contravariant vector... If the gradient is a vector field, wouldn't it use the covariant basis so that its vector field is contravariant? Not sure why I keep reading that it's a covariant vector.

  • @eigenchris (2 years ago)

    I tried to explain this at the beginning, but different sources use the word "gradient" to mean different things. Sometimes it's the covector field "df" (covariant), and sometimes it's the vector field "∇f" (contravariant).

  • @PM-4564 (2 years ago)

    @@eigenchris This just occurred to me: if ∇f is contravariant, then ∇f = (df/dx)e_x + (df/dy)e_y. But e_x = d/dx, so ∇f = (df/dx)(d/dx) + (df/dy)(d/dy)... strange. Is that technically correct? Because it looks strange to have two sets of (d/dx) in the expression. (And thanks for the clarification that ∇f is contravariant.)

  • @eigenchris (2 years ago)

    @@PM-4564 ∇f is not equal to (df/dx)e_x + (df/dy)e_y. This is only true in the special case of cartesian coordinates. The correct formula is ∇f = g^ij (df/dx^i)(∂/∂x^j), involving the inverse metric. The components are g^ij (df/dx^i), which are contravariant. In cartesian coordinates, g^ij is the identity matrix, which is why we get the first formula above.
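
    A small numpy sketch of this formula in polar coordinates, using the sample field f(r, θ) = r·sin(θ) (which is just y, so the gradient should come out as the constant unit vector ŷ); the field and sample point are my own choices, not from the video:

      import numpy as np

      r, theta = 2.0, 0.7                        # sample point
      df = np.array([np.sin(theta),              # ∂f/∂r   (covector components of df)
                     r*np.cos(theta)])           # ∂f/∂θ
      g_upper = np.array([[1.0, 0.0],            # inverse polar metric g^ij = diag(1, 1/r^2)
                          [0.0, 1.0/r**2]])

      grad_f = g_upper @ df                      # (∇f)^i = g^ij ∂f/∂x^j
      print(grad_f)                              # [sin(θ), cos(θ)/r]

      # check: expand in the coordinate basis vectors written in x-y components
      e_r     = np.array([np.cos(theta), np.sin(theta)])
      e_theta = np.array([-r*np.sin(theta), r*np.cos(theta)])
      print(grad_f[0]*e_r + grad_f[1]*e_theta)   # [0. 1.] = ŷ, as expected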

  • @PM-4564 (2 years ago)

    @@eigenchris Yeah sorry I should have said assuming cartesian coordinates. Thanks for the reply - and thanks for this series.

  • @erikstephens6370 (1 year ago)

    10:36: I think that j by the dcj should be an i.

  • @longsarith8106 (1 year ago)

    Excuse me, teacher. What is the difference between the total derivative and the exterior derivative?

  • @eigenchris (1 year ago)

    The "total derivative" treats "dx" as meaning "a little bit of x". I don't think it has a very formal meaning (as least, as far as I know). The exterior derivative treats "dx" as a covector whose stacks match up with the level sets of the "x" values throughout space. The formulas look the same, but their meaning is different.

  • @darkinferno4687 (5 years ago)

    When will you do a video about Christoffel symbols?

  • @eigenchris (5 years ago)

    First one will be out this week. There will be at least 5 videos that deal with Christoffel symbols.

  • @mtach5509 (1 year ago)

    I think every vector could be a covector of another vector, which is given by a linear map of this vector to another vector; then the original vector becomes a covector of this linear-map vector.

  • @deepbayes6808 (4 years ago)

    Have you considered editing Wikipedia pages?

  • @eigenchris (4 years ago)

    I haven't, but there are several pages that use differing terminology, and the math/physics community doesn't really have a consensus on what's right, so I'm not sure what I'd write.

  • @deepbayes6808 (4 years ago)

    @@eigenchris if I want to read a book or lecture note that's maximally consistent with your notation and definition what should I read?

  • @temp8420 (1 year ago)

    In your notes you say df transforms like a contravariant object but here you say it's a covariant object - maybe you are saying the covariant object has contravariant basis vectors which is true but I can't unpick the naming

  • @eigenchris (1 year ago)

    df is a covector... basis covectors transform contravariantly, but covector components transform covariantly. Whenever you change coordinates, the covariant/contravariant changes balance out so that the object remains invariant.
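
    For reference, the two rules in symbols (my summary, consistent with the series' conventions): under a change of coordinates from x to y,

      \tilde{\epsilon}^i = \frac{\partial y^i}{\partial x^j}\,\epsilon^j \quad\text{(basis covectors: contravariant)}, \qquad
      \tilde{a}_i = \frac{\partial x^j}{\partial y^i}\,a_j \quad\text{(covector components: covariant)},

    so the combination a = a_i ε^i = ã_i ε̃^i is unchanged.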

  • @temp8420 (1 year ago)

    Many thanks for the reply - if I expand du into du^i * e^i where the second index is up and the first is down then I can get what you are suggesting. Does that sound right - don't know how you find the time to get the notes so detailed and reply to comments. Thanks again

  • @temp8420 (1 year ago)

    Unfortunately when I first learned this it was the old style component notation and I find the new notation very confusing - it used to be a tangent was always a vector or contravariant and gradient gave a covariant object.

  • @eigenchris (1 year ago)

    Sorry, your reply got marked as "spam" so I didn't see it until now. I'm not sure if it's because your username is "Temp"? Maybe not, but I'm not sure why else it happened. Possibly something to consider when leaving comments on future videos. When we expand "du", we get du = (∂u/∂x^i) dx^i, where (∂u/∂x^i) has a lower index beneath the fraction line (covariant) and dx^i has an upper index (contravariant). Tangents are usually interpreted as vectors. The "gradient" of a function is something different--it is not produced from a curve; instead it is produced from a scalar field. The "df" level sets are a covector field and the "∇f" gradient is a vector field.

  • @temp8420 (1 year ago)

    @@eigenchris many thanks - it's making more sense now. Having someone responding is incredibly supportive and helpful

  • @brk1953 (2 years ago)

    It seems good, but long, to deal with tensors in this way, but I don't like the concept of a covector. The gradient is still a vector and df is an invariant scalar. What you try to do is to write the components of the gradient as covariant components by multiplying the metric tensor with the partial derivative of the scalar field with respect to a certain coordinate. Good luck. BASSEM FROM QATAR

  • @jeremiahlee6335 (1 year ago)

    Your convention is the same as in Michael Spivak

  • @smftrsddvjiou6443 (1 month ago)

    Interesting. For me, df was so far just a number.

  • @oslier3633 (4 years ago)

    Now I see why the differential is independent of coordinate system.

  • @klam77 (4 years ago)

    Ahh...! Brilliant. This is why they (Google) "do" Neural Nets with TENSORS. It would be super brilliant if you could do a video on tensors applied to neural nets representation and optimization (training).

  • @eigenchris (4 years ago)

    I only have a passing familiarity with machine learning and neural nets, but my understanding is that there isn't much in common with the "tensors" used in machine learning and the tensors used in physics/relativity. For ML/NN, I think tensors are simply arrays of data and there isn't much emphasis on any coordinate system changes, coordinate transformations, or geometry. I know TensorFlow is a popular NN coding library, but I doubt it has much in common with this video series. You can feel free to correct me if I'm wrong.

  • @klam77 (4 years ago)

    ​@@eigenchris Hi, yes, this is precisely the understanding I am trying to gain. Especially where "gradient descent" is concerned and it is taught as the process of trying to align the "weight vector" appropriately with the input vector via the dot product to achieve the trained output, etc. Unfortunately, I was never taught Tensors in school, and so I am digging through, slowly with your good videos. But i took a flying leap to the gradient video here. I will go back to the beginner ones. (PS: It scares me that Google would call their routine using the term "tensor" as a cool marketing thing).

  • @eigenchris (4 years ago)

    I wouldn't say it was just a marketing thing. It's just that computer programmers and physicists sometimes use the same words to mean slightly different things. Tensors is one of these words. If you want a good introduction to machine learning (with Neural Nets, gradient descent, and more), check out professor Andrew Ng's playlist. It is on both KZread and Coursera.

  • @klam77 (4 years ago)

    @@eigenchris Indeed, I suspect you are right. Still, I will try and absorb this tensor material to see if it provides any closed form analytic expression of backpropagation methods, applied to nested sigmoid functions of vector dot products (which is essentially what a NN is in analytic form). But I mostly realize you're right: if tensors (in the physics sense) were applicable to NN, we would have heard of it by now! Cheers to you, thanks.

  • @kimchi_taco (9 months ago)

    Mind blown 🤯 Why don't STEM degrees teach this to students? 😭

  • @pferrel (9 months ago)

    I'm still confused about what a covector and dual space actually are. Is a dual space the space of all functionals that can be associated with V, or all "dot functionals" associated with vectors in V? Is the dual space a set of functions? When you called covectors column vectors, this confused me, because I'm not sure what the difference is between row and column vectors other than that they fit into the linear algebra rules and functions/operators in different ways. Is a dual space just a set of rules that can be applied to V? Here you call df a covector, leading me to think covectors and the dual space define functionals, df being one. At 3:43, though, I don't get why covectors can be seen as a stack or level sets. If you pick a particular function, I get that the output could be seen as level sets; is this what is meant?

  • @eigenchris (9 months ago)

    The dual space V* is defined as the set of all linear maps that take vectors from V and output scalars. Members of the dual space go by many different names, such as "dual vectors", "covectors", "linear functionals", and "bra vectors". When it comes to rows and columns, by convention I write vector components (contravariant) as columns and I write covector components (covariant) as rows. Have you watched my "Tensors for Beginners #4" video? I go over how to draw the level sets for linear functionals. They always end up being a stack of equally-spaced planes.
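
    A tiny numerical illustration of that row/column convention (the numbers are arbitrary):

      import numpy as np

      v = np.array([[2.0],                 # vector components as a column (contravariant)
                    [3.0]])
      a = np.array([[5.0, 7.0]])           # covector components as a row (covariant)

      print(a @ v)                         # [[31.]] -- a covector eating a vector gives a scalar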

  • @pferrel (9 months ago)

    @@eigenchris Thanks, I'm starting to grok this. Yes I watched #4 several times and it's finally sinking in. I get the parallel stack analogy but any particular stack must be for a particular functional, not all (covectors) at once so I'll now be able to see the generalization (I hope :-))

  • @mastershooter64 (1 year ago)

    This "Dee" f you're talking about sounds a lot like the total derivative

  • @eigenchris (1 year ago)

    Yeah, the formula looks basically the same. Tensor calculus takes symbols from ordinary calculus and re-interprets them to have different meanings. In ordinary calculus, "dx" loosely means "a small change in x", but in tensor calculus, the "d" is re-interpreted to be an operator called the "exterior derivative".

  • @Schraiber (2 years ago)

    Wow this one was a mind fuck

  • @EmrDmr0 (2 years ago)

    df: covector (1-form); del(f): vector; V: vector; g( , ): bilinear form. Given that g(del f, V) = del(f) . V and df(V) = del(f) . V, what's the difference between g(del f, V) and df(V)? Thank you!

  • @viliml2763 (1 year ago)

    There is no difference. g(del f, V) = del(f) . V and df(V) = del(f) . V means g(del f, V) = df(V) by transitivity of equality. They are the same scalar.

  • @drlangattx3dotnet (4 years ago)

    "d c ^j " What the heck is that? Lost me there. How is "d anything" a basis vector?

  • @eigenchris (4 years ago)

    "c" in this case can be any coordinate system. For example in the cartrsian coordinate system dc^1 is dx and dc^2 is dy. You might want to watch videos 6-8 in this series on covector fields / differential forms to understand how dx and dy are a covector basis.

  • @shuewingtam6210 (2 years ago)

    At 10:44 there is a written mistake: it says c sup j, not c sup i.

  • @danielkrajnik3817 (3 years ago)

    tl;dr 17:57

  • @mtach5509 (1 year ago)

    You're wrong. The gradient of a scalar field f is actually a covector or dual or 1-form, depending on the context, but dx, dy and dz etc. all relate to the position vector (from the coordinate origin). Thus multiplying the gradient (as I said, a covector) with dx and dy gives a dot or inner product, i.e. a number, which is the size of the vector in the direction of the position vector. If they are aligned (cos phi = 1), then this is the maximum vector size, i.e. the gradient is maximum. If the gradient = 0, it is tangent to the position vector that intersects it at a point p (a common point for the gradient and the position vector), and the gradient dotted with dx, dy equals 0. Nevertheless, the gradient by itself, just by definition, can be considered a vector in itself, hence the duality. Nevertheless, the importance and use of the gradient come after multiplying it with dx and dy, so it is better to always refer to the gradient as a c o v e c t o r.

  • @drlangattx3dotnet (4 years ago)

    At 7:28 the equation has v dot something = vector components times metric tensor times dual basis covectors. But doesn't the dot product yield a scalar?

  • @eigenchris (4 years ago)

    Dot product with an empty slot is a covector (which is why it's written using the covector/epsilon basis). We get a scalar after we put in a vector input.
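
    In symbols (my restatement of the reply above): the covector in question is

      g(\vec{v}, \_\,) = v^i g_{ij}\,\epsilon^j,

    and filling the empty slot with a vector w gives the scalar g(v, w) = v^i g_ij w^j.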

  • @drlangattx3dotnet (4 years ago)

    @@eigenchris Now I understand. Thanks very much for patience with my questions. I did review lec 16. Now I will plug away. Just a hobbyist 40 years removed from college math classes.

  • @Xbox360SlimFan (5 years ago)

    So basically the total differential is a covariant derivative?

  • @eigenchris (5 years ago)

    I'm not totally sure what you mean by that. When I think of the covariant derivative, I normally think of expressions involving Christoffel symbols.

  • @Naverb (5 years ago)

    Yes. We start with our manifold, embed it in R^n, and note the Covariant derivative is essentially the orthogonal projection of the "total derivative" we know and love in R^n onto our surface (in the sense that the tangent bundle TS for our manifold S is merely the orthogonal projection of TR^n, which makes sense because tangent bundles are vector spaces). OK, so now to answer your question: if our manifold is R^n itself, the "orthogonal projection" is just the identity map, so we find the Covariant derivative really is the total derivative when working in R^n!