The deeper meaning of matrix transpose

100k Q&A: forms.gle/dHnWwszzfHUqFKny7
Transpose isn’t just swapping rows and columns - it’s about changing perspective while keeping the same measurements. By understanding the general idea of the transpose of a linear map, we can visualise the transpose much more directly. We will also rely heavily on the concept of covectors, and touch lightly on metric tensors in special/general relativity and adjoints in quantum mechanics.
As far as I know, this way of visualising the transpose is original. Most people use the SVD (singular value decomposition) for such visualisations, but I think it is much less direct than this one; and since the SVD is mostly used for numerical methods, it feels somewhat unnatural to use a numerical method to explain linear transformations (though, of course, the SVD is extremely useful). Please let me know if you are aware of other people using this specific visualisation.
The concept I am introducing here is usually called a “pullback” (and the original linear transformation would then be called a “pushforward”), but as said in the video, another way of thinking about the transpose is the notion of an “adjoint”.
Notes:
(1) I am calling covectors “measuring devices”, not only because the level-set representation of a covector looks like a ruler when you take a strip of the plane, but also because of their connection with quantum mechanics. A “bra” in quantum mechanics is a covector, and can be thought of as a “measurement”, in the sense of “how likely you are to measure that state” (sort of).
(2) I deliberately don’t use row vectors to describe covectors, not only because that only works in finite-dimensional spaces, but also because the ordering becomes awkward when we say the transpose matrix acts on the covector. We usually apply transformations on the *left*, but if you treat the covector as a row vector, you have to apply the transpose matrix on the *right*.
(3) You can do this sort of “exercise” to verify the visualisation of the transpose for all (non-singular) matrices, but I think the algebra is slightly too tedious. This is the reason why I spent a lot of time talking about the big picture of the transpose - to make the explanation as natural as possible.
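If you would rather check note (3) numerically than algebraically, here is a minimal NumPy sketch (the matrix, vector and covector below are arbitrary choices, not taken from the video): measuring the transformed vector Av with a covector w gives the same number as measuring the original v with the pulled-back covector A^T w.

import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # some linear map R^2 -> R^2
v = np.array([1.0, -2.0])    # a vector in the source space
w = np.array([0.5, 1.5])     # components of a covector on the target space

after = w @ (A @ v)          # measure the transformed vector with w
before = (A.T @ w) @ v       # measure the original vector with the pulled-back covector A^T w
print(after, before)         # both are -9.0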
Further reading:
*GENERAL*
(a) Transpose of a linear map (Wikipedia)
en.wikipedia.org/wiki/Transpo...
(b) Vector space not isomorphic to its dual (for infinite-dimensional vector spaces):
math.stackexchange.com/questi...
math.stackexchange.com/questi...
*RELATIVITY*
(a) Metric / inverse metric as the vector-covector correspondence: en.wikipedia.org/wiki/Raising...
en.wikipedia.org/wiki/Minkows...
*ADJOINT*
(a) Inner product (the prerequisite of even defining adjoints, the analog of dot products in Euclidean space): en.wikipedia.org/wiki/Inner_p...
(b) Adjoints (another way of thinking about transposes, but I think this is mostly about the complex analogue of transpose): en.wikipedia.org/wiki/Hermiti...
(c) Riesz representation theorem (more relevant to adjoints, but in regards to the statement that “we choose certain covectors to act on”: here, it is the “continuous” dual, very relevant in QM): en.wikipedia.org/wiki/Riesz_r...
(d) Self-adjoint operators (Hermitian operators in QM, but also useful in Sturm-Liouville theory in ODEs):
en.wikipedia.org/wiki/Self-ad...
Video chapters:
00:00 Introduction
00:56 Chapter 1: The big picture
04:29 Chapter 2: Visualizing covectors
09:32 Chapter 3: Visualizing transpose
16:18 Two other examples of transpose
19:51 Chapter 4: Subtleties (special relativity?)
Other than commenting on the video, you are very welcome to fill in a Google form linked below, which helps me make better videos by catering for your math levels:
forms.gle/QJ29hocF9uQAyZyH6
If you want to know more interesting Mathematics, stay tuned for the next video!
SUBSCRIBE and see you in the next video!
If you are wondering how I made all these videos, even though it is stylistically similar to 3Blue1Brown, I don't use his animation engine Manim, but I will probably reveal how I did it in a potential subscriber milestone, so do subscribe!
Social media:
Facebook: / mathemaniacyt
Instagram: / _mathemaniac_
Twitter: / mathemaniacyt
Patreon: / mathemaniac (support if you want to and can afford to!)
Merch: mathemaniac.myspreadshop.co.uk
Ko-fi: ko-fi.com/mathemaniac [for one-time support]
For my contact email, check my About page on a PC.
See you next time!

Comments: 231

  • @mathemaniac
    @mathemaniac Жыл бұрын

    The 100k Q and A form is here: forms.gle/dHnWwszzfHUqFKny7 Please consider sharing, liking, and commenting on the video - this is probably one of my favorite videos on my channel. By the way, if you are wondering why (A^T)^T = A, this requires double duals, or co-covectors, and the "canonical isomorphism". Essentially, co-covectors are machines that measure covectors, and they can be thought of as vectors themselves. The explanation is not that visual, so I am probably not making videos on double duals.

  • @mrenemo3974

    @mrenemo3974

    10 ай бұрын

    A standard linear algebra course should start from a very nice amateur 😂 definition of a vector as a co-covector - a (1,0) tensor. Jokes aside, if you could give some visual intuition for a general tensor definition, it would be cool!

  • @rgbill5948
    @rgbill5948 Жыл бұрын

    3b1b's linear algebra series saved my life during college, but oh boy do I wish I had seen this video sooner

  • @scharpmeister

    @scharpmeister

    Жыл бұрын

    the 3b1b series was great but definitely left me wanting more

  • @butwhoasked1821

    @butwhoasked1821

    Жыл бұрын

    It was amazing but crucially left out transposition, SVD and QR decomposition

  • @guccihorsepiss2406

    @guccihorsepiss2406

    10 ай бұрын

    3b1b should be watched while learning each topic from your textbook or another video. Their main focus is granting you more intuition and understanding, not fully learning a concept.

  • @benjioffdsv

    @benjioffdsv

    3 ай бұрын

    @@guccihorsepiss2406 definitely. Don’t rely only on his videos but they are great. They did help me a lot in understanding

  • @samkirkiles6747
    @samkirkiles67475 ай бұрын

    This makes it a little more complicated than it actually is. Here it is, simply:
    - Every vector space V has a dual space V*.
    - Elements of the dual space are called dual vectors or co-vectors. When you think of the dual space, just think of it as the space of row vectors, while the original space is the space of column vectors.
    - Every linear map V -> W induces a linear map between the dual spaces W* -> V*, called the transpose.
    - Every finite-dimensional vector space is isomorphic to its dual, so we can really view the transpose as going from W -> V.
    One more point: row vectors eat column vectors and return a scalar. In R^2, you can visualize a row vector as an equally spaced stack of parallel lines (hyperplanes), with one through the origin; the application of the row vector to a column vector is the number of lines that the column vector pierces. This is the intuition the video is trying to get at.
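A small NumPy sketch of the bullet about the induced map W* -> V* (the matrix is an arbitrary example): building the matrix of the dual map column by column, in the dual bases, recovers exactly A^T.

import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [3.0, -1.0, 4.0]])         # a linear map V = R^3 -> W = R^2

# The induced map sends a covector a on W (a row vector) to the covector a @ A on V.
# Its matrix in the dual bases, built one column at a time from the dual basis of W*:
dual_matrix = np.column_stack([e @ A for e in np.eye(2)])

print(np.array_equal(dual_matrix, A.T))  # True: the dual map's matrix is A^T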

  • @StefanoTrevisani

    @StefanoTrevisani

    3 ай бұрын

    Thanks for clarifying! I had a big "sigh" when the chosen way to present the topic turned out to be the "physics-inspired" one, when the algebraic one would have been much easier, like you wrote. Of course visualization is important, but sometimes it just muddles things up IMHO.

  • @briorg1225

    @briorg1225

    3 ай бұрын

    I think there’s something worth mentioning about the second-to-last point. In general vector spaces, the isomorphism between a vector space and its dual is not natural - one would need to specify a basis for the vector space. Only when the vector space is endowed with an inner product is the isomorphism natural.

  • @goldeer7129

    @goldeer7129

    3 ай бұрын

    I would definitely call that "simple but doesn't explain anything". It doesn't convey any meaning; you don't even say how these things exist, or, more importantly, why we would care about doing such things. That's probably how classic textbook math presents it, but it doesn't explain much. That's why visual explanatory math videos by the likes of him or 3B1B are amazing: they actually explain the intuition behind these ideas.

  • @JthElement

    @JthElement

    2 ай бұрын

    I agree. This was quite useless.

  • @osbourn5772

    @osbourn5772

    2 ай бұрын

    @@JthElement Eh, it really depends on the person. I actually found this comment easier to understand than the video, but different people prefer different methods, so I'm glad both of them exist. The problem for me is that I have to keep translating the geometric explanation into the algebraic one while watching the video - this concept is just too complex for a geometric explanation to really cover it well (at least for me; you may feel differently). And just so it's clear, I appreciated the video, but it made more sense after I read this comment. For example, the explanation of why A^T * A = I for orthogonal matrices was really insightful.

  • @orik737
    @orik737 Жыл бұрын

    This video is EXACTLY what I've been looking for! Please don't stop making these!

  • @inciaradible7144
    @inciaradible7144 Жыл бұрын

    This was actually very refreshing and a great way to visualise what a transpose really is, even if there are limitations.

  • @pratik9056

    @pratik9056

    Жыл бұрын

    What limitations?

  • @cobrametaliks490
    @cobrametaliks490 Жыл бұрын

    Could you please make a video about tensors? I don't like the usual introduction to it, and I feel that you could give a more intuitive and tangible approach. Thank you for your work, it is amazing. The 13:30 got me real good 😄

  • @mathemaniac

    @mathemaniac

    Жыл бұрын

    I will have to when I do general relativity some time in the future, but really tensor should not be described as "transforms like a tensor".

  • @yevgeniygorbachev5152

    @yevgeniygorbachev5152

    Жыл бұрын

    Eigenchris's explanations are the best I've found. I'm excited for Mathemaniac's take, though

  • @TC159

    @TC159

    Жыл бұрын

    Thing about tensors is that depending on your area certain ways are better. I can only understand tensors as elements of a tensor space

  • @nolanfaught6974

    @nolanfaught6974

    Жыл бұрын

    The main issue with learning about tensors is that the word means different things in different fields, so when you look for information on tensors you receive conflicting preliminaries and definitions. In physical science, tensors are considered a "generalization" of vectors in the sense that they have more indices, but their mathematical properties are identical to (mathematical) vectors. For mathematicians, contravariant tensors are best considered as the dual of multilinear forms: for a vector space V and field F,
    - A multilinear k-form or (covariant) k-tensor is a function f: Vᵏ -> F that varies linearly with respect to each of its components.
    - A contravariant k-tensor is a member of Vᵏ, often considered as the input to a multilinear k-form (or covariant tensor).
    In mathematics, a tensor is *almost always* a covariant tensor, while in physical science, it is *always* a contravariant tensor.

  • @thallesaraujo7814

    @thallesaraujo7814

    Жыл бұрын

    Check out the Tensor Calculus series by Prof Pavel Grinfeld on his YT channel MathTheBeautiful. I deal with tensors since 2011 and seeing his lectures was totally eye-opening. I've never ever seen such a clear and didactic but also deep and rigorous approach. In all I do there's a clear distinction between before and after I came across his channel. Only now I know what Einstein meant by "horse of true mathematics" in a letter he wrote to Levi-Civita (as the tale tells, at least). I cannot recommend Prof Grinfeld's channel enough.

  • @dcterr1
    @dcterr1 Жыл бұрын

    I've always had a hard time visualizing matrices as vector transformations, but you do a very good job with that here. You also do a terrific job of explaining the transpose as an inverse vector transformation rather than as a new matrix. Great video!

  • @julianaharmatiuk2662
    @julianaharmatiuk26628 ай бұрын

    it's interesting how I'm currently studying 4 topics and this video managed to get into all of them! Badass

  • @nangld
    @nangld Жыл бұрын

    In Haskell, matrix transpose is called `zip`, because of how it unzips a list of M N-tuples into an N-tuple of M-element lists of scalars. Very useful in general programming, even if you never think about it being linear algebra.
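For comparison, the same trick in Python (a throwaway sketch): zip applied to the unpacked rows regroups the entries by column, which is exactly the transpose of a list of rows.

matrix = [(1, 2, 3),
          (4, 5, 6)]               # 2 rows, each a 3-tuple

transposed = list(zip(*matrix))    # zip(*rows) regroups the entries by column
print(transposed)                  # [(1, 4), (2, 5), (3, 6)]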

  • @geometry_manim
    @geometry_manim Жыл бұрын

    Masterpiece! Thank you for the huge amount of work on the topic and the animations!

  • @yashrawat9409
    @yashrawat9409 Жыл бұрын

    Brilliant video - deep insights into linear algebra (reminds me of the Essence of Linear Algebra series by 3B1B).

  • @hambonesmithsonian8085
    @hambonesmithsonian8085 Жыл бұрын

    Oh boy you have no idea how much I’ve needed a video about this exact series of topics. Combining this with differential forms + knot theory and we about to *transcend.*

  • @jacb2997
    @jacb2997 Жыл бұрын

    Keep up the great work Mathemaniac as an early undergrad your content is at a perfect level and it’s so interesting different interpretations of linear algebra from those I’ve already seen.

  • @roybean9983

    @roybean9983

    Жыл бұрын

    Try looking up a category theory approach if you want to learn a new perspective. There you could, for example, learn that what this video is giving intuition for is the way the dual functor maps linear maps to maps between the dual spaces. In general it is a very abstract viewpoint and can be incredibly beautiful if you understand it.

  • @jacb2997

    @jacb2997

    Жыл бұрын

    @@roybean9983 I intend to eventually

  • @SLopez981
    @SLopez9815 ай бұрын

    I commend you for taking a simple and overlooked process and revealing the hidden intricacies of why it happens to look like 'just switching rows and columns'. There's always a reason we do something in math, and when we are first introduced to new concepts we are often given the how before we are given the why. Keep up the good work, and please continue this trend of explaining the hidden reasons behind procedures that are surely taken for granted.

  • @vishrutpandya3257
    @vishrutpandya3257 Жыл бұрын

    It feels so satisfying after understanding the idea of transpose.. Keep up the amazing work. Also, are you planning on explaining Adjoint in the similar way? It would be a great help!!

  • @sh0ejin
    @sh0ejin Жыл бұрын

    glad i could understand this stuff. 3b1b really helps! amazing stuff!

  • @xrhsthsuserxrhsths
    @xrhsthsuserxrhsths Жыл бұрын

    This is amazing! I didn't expect this to help in understanding rigid monoidal categories! AMAZING!!!

  • @decreasing_entropy3003
    @decreasing_entropy3003 Жыл бұрын

    This was just a bit difficult to grasp at first, but then I got the feel for it. Linear Algebra never stops asking more and more questions, and no wonder why I like it so much. A video on metrics would really help, considering the fact that I have to tame Special Relativity next semester.

  • @godfreypigott
    @godfreypigott Жыл бұрын

    Could you do a video on visualising the transformations performed by (A transpose times A) and (A times A transpose), and perhaps extend that to the general projection matrix?

  • @koun.informatique5074
    @koun.informatique5074 Жыл бұрын

    I have been searching for an explanation of the transpose for a decade, thank you so much 😁

  • @cmilkau
    @cmilkau Жыл бұрын

    There is a difference between transpose and adjoint that should not be swept under the rug. It all boils down to the inner product. If the inner product is just multiply and sum, then yes, transpose is adjoint. If it's more complicated though, the adjoint follows suit.

  • @btnt5209

    @btnt5209

    Жыл бұрын

    To be more specific about the "more complicated", for example, the dot product in the complex vector space is not just a multiply and sum, since you have to take conjugates. But in this case, the adjoint is the conjugate transpose, so not too different.

  • @mathemaniac

    @mathemaniac

    Жыл бұрын

    This is why I said something like: you have to restrict the space of covectors the "transpose" acts on. That refers to covectors of the form ⟨v, ·⟩, where you can directly make a 1-1 correspondence with the vectors, even though in infinite-dimensional spaces there are covectors not of this specific form. In fact, the whole last chapter is actually about that inner product (actually, more generally, a symmetric bilinear form)! I just did not call it that, and said something about the "pairing" between vectors and covectors, which is only doable with some notion of inner product anyway (or, more generally, a symmetric bilinear form).

  • @cmilkau

    @cmilkau

    Жыл бұрын

    @@mathemaniac Well I was referring to covectors of this form specifically, just with a different inner product. I didn't even think about the crazier stuff in infinite dimensions! I love these deep things lurking in corners of seemingly mundane stuff.

  • @cmilkau

    @cmilkau

    Жыл бұрын

    @@btnt5209 That's not coincidental: you can derive the adjoint from the inner product, and it will follow its form. For instance, if ⟨x, y⟩ = x^T H y, then A* = H^(-1) A^T H.
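A quick NumPy check of that formula (H and A are arbitrary choices; H needs to be symmetric positive-definite for x^T H y to be an inner product): the matrix A* = H^(-1) A^T H really does satisfy ⟨Ax, y⟩ = ⟨x, A*y⟩.

import numpy as np

H = np.array([[2.0, 1.0],
              [1.0, 3.0]])            # symmetric positive-definite
A = np.array([[0.0, 1.0],
              [-2.0, 4.0]])

inner = lambda x, y: x @ H @ y        # <x, y> = x^T H y
A_star = np.linalg.inv(H) @ A.T @ H   # claimed adjoint w.r.t. this inner product

x = np.array([1.0, 2.0])
y = np.array([-3.0, 0.5])
print(inner(A @ x, y), inner(x, A_star @ y))   # both are -20.0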

  • @christophercrawford2883

    @christophercrawford2883

    Жыл бұрын

    Agreed. If all you want is the transpose, stay with duals (as row vectors) and not their adjoints: w = vA. Then just transpose the whole equation. But better: keep the rows and just multiply backwards.

  • @yogibrijkumar
    @yogibrijkumar Жыл бұрын

    The concept presentation is novel and wonderful. Deep insight. I did not get the opportunity to learn these concepts in my student and research scholar days. A great service to workers in basic sciences. We pay our sincere gratitude to 3b1b.

  • @nihanth9145
    @nihanth9145 Жыл бұрын

    Right when I am looking for all the resources available on the internet about covectors - I am learning special and general relativity, kind of transitioning between them - this video helped me 🥰

  • @hendrik574
    @hendrik574 Жыл бұрын

    Great! Please more about matrix operations and the idea behind it! :)

  • @raulyazbeck7425
    @raulyazbeck7425 Жыл бұрын

    Just wonderful! Thanks

  • @zhuolovesmath7483
    @zhuolovesmath7483 Жыл бұрын

    I’m glad that you didn’t get frustrated by the low view of videos! Keep going man I love your math videos

  • @vinvic1578

    @vinvic1578

    Жыл бұрын

    Low views ? He's getting at least 100 k per video that's incredible for a math channel

  • @zhuolovesmath7483

    @zhuolovesmath7483

    Жыл бұрын

    @@vinvic1578 He once posted something in the "community" go check it out

  • @mathemaniac

    @mathemaniac

    Жыл бұрын

    Well, the frustration is still there: the main thing is I am, at least algorithmically, discouraged to experiment with new things. But I just put down the frustration a little bit, and return to what I'm always doing.

  • @vinvic1578

    @vinvic1578

    Жыл бұрын

    @@zhuolovesmath7483 ohh I see thanks for the information, apologies i didn't know

  • @zhuolovesmath7483

    @zhuolovesmath7483

    Жыл бұрын

    @@mathemaniac Always support you!

  • @princekha4540
    @princekha4540 Жыл бұрын

    I have enjoyed it and learn new things today. Thank you ❤️

  • @raagamparmar5602
    @raagamparmar5602 Жыл бұрын

    Can you do more videos on Matrices and Vectors? This video was awesome!

  • @pigeonapology9816
    @pigeonapology98169 ай бұрын

    My heart still lies with the view of the transpose that comes out of categorical quantum mechanics (somewhat akin to this one, given the emphasis on the notion of "measurement"), but this one is very nice (and nearly as intuitive) as well. Great video!

  • @tirimula
    @tirimula8 ай бұрын

    Extraordinary explanation

  • @phx__7
    @phx__72 ай бұрын

    this is amazing. please make more math concepts. thankyou!!

  • @TheBrainn
    @TheBrainnАй бұрын

    to me linearity and its duality between vector spaces is the most mesmerizing thing in all of math- especially because it is the motivation behind so many more complex entities like differential equations and laplace transforms which rely on those principles for their unshakeable power and accuracy for everything from interpolation to analysis

  • @BarackObamaJedi
    @BarackObamaJedi Жыл бұрын

    the borrowed scale metaphor is just brilliant

  • @johnyeap7133
    @johnyeap7133 Жыл бұрын

    Takes some effort to understand but really insightful. Thanks 😊:)

  • @CrypticPulsar
    @CrypticPulsar Жыл бұрын

    I’ve been studying Lin Alg for a little while now but I never quite saw transpose that way.. this was incredibly refreshing.. and for a moment I thought you were going to bring Eigenvalues and Eigenvectors into the fold, but you didn’t (even though we both know they were hiding there in plain sight now, didn’t they? 😉)

  • @ycombinator765
    @ycombinator765 Жыл бұрын

    exactly what I needed

  • @blinded6502
    @blinded6502 Жыл бұрын

    It's also interesting to consider cases, when matrices linearly map vectors to bivectors and back. Such as when dealing with transformation of a force into a torque (in 2d, or in 4d, since amount of bivector components are different there from amount of vector components). I've seen transpose of such matrix being used to figure out effective mass from the inertia tensor and position to which force is applied (1/effectiveMass = transpose(toTorque) × inertia × toTorque), where inertia is just a symmetric matrix (aka orthogonal resizer) that turns torque bivector into a resized angular velocity bivector. ToTorque represents geometric algebra wedge of a force-vector by offset-vector. And inverse effective mass just shows how much of tangential velocity unit force imparts onto the object (ignoring linear velocity). Anyway, sorry for the unclear monologue; I'm just too tired after a long day and I wanted to share a little piece of cool info I recently learned.

  • @threemr01
    @threemr01 Жыл бұрын

    Very good video! First one I see from you, but you earned my subscription. I’ll have to rewatch a couple of times because I’m a little rusty on the abstract side of Linear Algebra and I’m sure I’m missing some important subtleties. I loved the effort you put into the “big picture” chapter! If I may, I’d like to make a couple of suggestions:
    - Master “the pause”, by allowing a few seconds when you deliver punchlines, to allow us to digest what was just said. Your rhythm overall is great, but I felt you just kept talking non-stop at those moments. (I’m sure that’s also a consequence of my rusty L.A.)
    - In your intro, please mention the “prerequisites” and the target audience. These concepts are deceptively simple at first glance, but it became clear I was missing something as the video progressed, due to my somewhat forgotten command of precise definitions. Of course, the lack of knowledge is entirely my fault, but I didn’t realize I needed it before starting to watch the video (after all, I thought I knew perfectly well what transposes were, lol).
    Keep up the good work!

  • @vicenley
    @vicenley Жыл бұрын

    that presentation is really good. how could you do that? I mean is there a special program you use? thanks

  • @ryanj748
    @ryanj748 Жыл бұрын

    Wow. I understood this topic before, but the ideas never really felt intuitive. This video is amazing.

  • @methandtopology
    @methandtopology10 ай бұрын

    *NATURAL EXPLANATION OF COVECTORS AND TRANSPOSITION FOLLOWS*
    The title caught my curiosity but, even though I tried to follow the arguments, they felt very prolonged and awkward. I was ultimately left confused. I believe this is the second video I watch from this channel. The other one, on Galois theory, was a good outline, but I found this one to obfuscate a very simple concept. *Less technical paragraph after this*
    If I'm not mistaken, that concept would be the contravariant functor that maps vector spaces to their dual spaces and linear maps to their dual maps. If we denote by A the linear map V -> W, then the dual map A*: W* -> V* arises by taking a covector a in W* (the dual of W) to the covector b = A*a in V*, with the definition b(v) = a(Av) for a vector v in V. Put simply, you tell the image in V* of a covector a in W* to do on each vector v in V what a does on the image Av in W. This is a very natural construction, and by representing covectors as column vectors and then writing out the equation for each component of b, it can be quickly derived that A* = A^T.
    But this is also the most confusing thing I found in this video: that both vectors and covectors are represented as column vectors. But apples are apples and bananas are bananas. If we just represent covectors as row vectors, then a covector (a b) acting on a vector (x y)^T is just the usual matrix multiplication (a b)(x y)^T = ax + by. Then transformations are represented by right multiplication, that is, bA* (which is actually = bA, as shown later). This is clear because if A is an m x n matrix that maps n-dimensional vectors, the m-dimensional covectors should be transformed to n-dimensional ones, and the only way to do that with an m x n matrix is by multiplying from the right.
    Now look how simply transposition appears (using the notation from above): bA = (A^T b^T)^T. That simply says that if we have the row-vector representation of covectors and matrix multiplication from the right, and we insist on writing covectors as column vectors (like in this video), then matrix multiplication from the left by the transpose gives us the mapped covector in column-vector form, and transposing that back to a row vector gives the covector in its natural representation. So it is derivable from the definition of matrix multiplication and transposition.

  • @mathemaniac

    @mathemaniac

    10 ай бұрын

    As I showed (albeit briefly) in the video, it is a very deliberate choice to represent covectors as column vectors. The reason is we should not ever be considering covectors as row vectors, even if it **seems** simpler to understand. Even you have said that you can only see that transpose connection by "representing covectors as column vectors". Most people are more comfortable with applying transformations on the left, and it makes more sense to do so: after all, we apply functions on the left: x -> f(x). It makes much less sense to apply transformation on the right, and you have to resort to how matrix multiplication works technically (using dimensions and stuff), rather than the **intuition** given by matrix multiplication, which is a function! Plus, representing as rows means somehow you have to use (AB)^T = B^TA^T. This is actually intuitive if you think of the bathroom scale analogy in the video, and treating A and B as linear transformations. However, you are using it as (Ab)^T = b^T A^T, where b is a column vector, and unless you are thinking of b as a linear transformation, the intuition breaks down, and you are treating (AB)^T = B^TA^T as something purely algebraic, rather than having an intuition for it. Put simply, I don't think there is any intuition for v --> vA, for v a row vector, other than "oh it works, look at the dimensions, they match!", which isn't really an intuition. On the other hand, thinking of covectors as column vectors b, then b --> A^T b, and it would be "yes! This is the linear transformation I am familiar with!". I think the reason why it feels confusing is simply that most people are only thinking of matrices as a bunch of numbers with a specified rule for multiplication, rather than actually thinking it as a transformation, or having an actual intuition for it. Sure, that's fair enough actually, and you can think of it just purely algebraically, which again is fair enough in some applications, but the point of the video is to give the intuition of what transpose / covectors are actually doing when you have the intuition that matrices are linear transformations, and only using that intuition and **never** computations.

  • @methandtopology

    @methandtopology

    10 ай бұрын

    @@mathemaniac You are right, it seems like pure algebraic manipulation. Let me try to define precisely what column and row vectors, as well as matrices, represent. Then transposition will be defined as a very simple operation and everything will make intuitive sense, should I succeed. So bear with me. Let m x n matrices be representations of a linear transformation such that composition is matrix multiplication. Note that I do not specify what it transforms to. If you think about it, a covector is a linear transformation from a vector space V to the vector space R (or any other field). But the dual of the dual of a vector space is that vector space (finite dimensions, ofc). So a vector is also a linear transformation from a dual space (of covectors) to (the covector space) R. So it makes sense to treat both vectors and covectors as matrices (i.e. transformations). Let us arbitrarily decide vectors are the n x 1 matrices, and covectors the 1 x m matrices. This just decides how we interpret each direction in an expression of matrices. Then ABv is the composition f(g(v)) for a vector, but aAB is the composition g*(f*(a)), where f(v)=Av and the "dual" (will define in a bit) f*(a)=aA. In terms of matrix multiplication, an m x n A is a linear transformation of n dim vectors to m dim vectors if we multiply from the left. But it is clear that A gives rise to a transformation of m dim covectors to n dim covectors when we multiply by A from the right. Let's call this the dual of A. But we already have a term called the dual of a linear transformation as I defined in the original comment, given by the functor from the category of vector spaces to dual spaces. So am I abusing the term dual by using it in two different senses? No, because they are the same! It's really easy to show, almost too easy: Let v in V, w in W, b in V* and a in W*. We construct the dual of the map represented by A by letting for any v b(v)=a(Av). In the matrix notation where vectors and covectors are also matrices, we want a matrix B such that (aB)(v)=a(Av). From associativity of matrix multiplication, it's really easy to see that B=A! That is, (aA)v=a(Av). We did not have to do anything! The dual of left multiplication is right multiplication. It is just a matter of interpretation of the matrix expressions. But interpreting column vectors as vectors is arbitrary. We could have called them covectors and the row vectors just vectors. It only changes how we interpret matrices and the direction of matrix expressions. And when we want to flip what they mean, we can devise an operation for this. And let's call this magical operation 'transposition.' It works perfectly because the two directions are perfectly symmetric. And this is because which one we call a vector and which one a covector is also perfectly arbitrary in finite dimensions. What do you think? Does this give intuitive meaning to the purely algebraic manipulations now?

  • @accountname1047
    @accountname1047 Жыл бұрын

    Great video, well done

  • @AJ-et3vf
    @AJ-et3vf8 ай бұрын

    Great video. Thank you

  • @blinded6502
    @blinded6502 Жыл бұрын

    Off-diagonal matrix coefficients can be also kinda thought of as bivector components btw

  • @gowrissshanker9109
    @gowrissshanker9109 Жыл бұрын

    Awesome vedio.....Waiting for your special and general relativity vedios....

  • @peperomero4603
    @peperomero4603Ай бұрын

    18:00 The fact that for 'rotational' matrices the transpose is the inverse implies that Q^T Q = I (if Q is orthogonal), because orthogonal matrices can be viewed as rotations. Really great video, greetings from Spain.
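A tiny NumPy check of this point (the angle is an arbitrary choice): for a rotation matrix Q, the transpose really is the inverse, so Q^T Q = I.

import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation, hence orthogonal

print(np.allclose(Q.T @ Q, np.eye(2)))            # True
print(np.allclose(Q.T, np.linalg.inv(Q)))         # True: transpose = inverse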

  • @anangelsdiaries
    @anangelsdiaries Жыл бұрын

    I will have to rewatch this video a few times.

  • @mistervallus185
    @mistervallus185 Жыл бұрын

    I’ve always learnt the transpose as a reflection across the diagonal. Even in high school.

  • @Filup
    @Filup Жыл бұрын

    These past two semesters at uni, we covered the book "Linear Algebra Done Right" by Sheldon Axler (it is free, btw - definitely recommend it!). As suggested in the preface to the teacher at the beginning of the book, we skipped over the section detailing quotients and duality due to the limited time span. This concept of covectors and transposition gives off the vibes of the adjoint, which I had trouble grasping. The mathematics is really interesting, especially in the context of inner products, but it is very difficult to digest. This is a good video and will undoubtedly benefit others in the future. I love algebra and hope to see more of it!
    Edit: I picked up my text, and I am thinking of the matrix conjugate transpose, which is also related to the adjoint. For a linear map T with adjoint T*, the matrix representation of T with respect to (orthonormal) bases B and C of the domain and codomain is written M(T; B, C). Then the matrix of the adjoint, M(T*; C, B), is the conjugate transpose of M(T; B, C). This can be found on page 208 of Axler's third edition.

  • @mathemaniac

    @mathemaniac

    Жыл бұрын

    The reason why you think of the adjoint as the conjugate transpose is only because in complex spaces, the usual inner product is ⟨u, v⟩ = u^† v. In real spaces, the adjoint is exactly the transpose!
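A short NumPy sketch of this remark (matrix and vectors chosen arbitrarily): with the usual complex inner product ⟨u, v⟩ = u^† v, the conjugate transpose plays exactly the role the plain transpose plays over the reals, i.e. ⟨Au, v⟩ = ⟨u, A^† v⟩.

import numpy as np

A = np.array([[1 + 2j, 0.5j],
              [3.0,    -1 - 1j]])
u = np.array([1.0 - 1j, 2.0 + 0j])
v = np.array([0.0 + 1j, -1.5 + 0j])

inner = lambda a, b: np.conj(a) @ b     # <a, b> = a^dagger b
A_dag = A.conj().T                      # conjugate transpose = adjoint here

print(np.isclose(inner(A @ u, v), inner(u, A_dag @ v)))   # True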

  • @Filup

    @Filup

    Жыл бұрын

    @@mathemaniac Thanks for that! I will certainly give the section in textbook a read when I have the time. It is really interesting stuff!

  • @jacb2997

    @jacb2997

    Жыл бұрын

    We used all of axler except quotients in our second sem first year course. I thought the dual spaces chapter was the most fascinating and strongly recommend it. I have summarised it in the my reply to you elsewhere though. It was interesting watching this video as the scales analogy screamed dual map to me but the later video screamed adjoint map. From my other comment the matrix for the dual map with respect to the canonical dual basis is always the transpose even under C whilst the adjoint map’s matrix as you said is conjugate transpose

  • @Filup

    @Filup

    Жыл бұрын

    @@jacb2997 Yeah, same with us! Some of the later parts of the book are skipped for brevity, but we go through to SVD decomposition. I believe my uni introduces quotients in our standalone algebra course, but I am not too sure. I will certainly be giving the chapter a read after my end-of-semester exams! Thanks for your replies. They have been great

  • @bee8017
    @bee8017 Жыл бұрын

    Eigenchris has a great series on this sorta stuff

  • @andrescalvo4386
    @andrescalvo4386 Жыл бұрын

    Beautiful explanation 👌 I would love that 10years ago 😆 still fun to watch

  • @ObsidianParis
    @ObsidianParis Жыл бұрын

    "t" is not supposed to go BEFORE the matrix we want to transpose ?

  • @fibbooo1123
    @fibbooo1123 Жыл бұрын

    Great video!

  • @rahulm-ranjan3801
    @rahulm-ranjan3801 Жыл бұрын

    Who would have imagined I would be binge-watching math videos late on weekend nights - they are addictive! Understanding is an addiction - a good one though.

  • @mtate405
    @mtate405 Жыл бұрын

    perfect timing

  • @iamtraditi4075
    @iamtraditi4075 Жыл бұрын

    Thanks, King :)

  • @MathsSciencePhilosophy
    @MathsSciencePhilosophy6 ай бұрын

    1:05 to 1:52 best explanation of duality in linear algebra.❤

  • @christophercrawford2883
    @christophercrawford28837 ай бұрын

    Nice geometric interpretation of the transpose. Two comments: 1) you can get the components of the one form directly from the intercepts of the c=1 hyperplane. This can be used for a straightforward demonstration that the adjoint matrix is the transpose. 2) you mentioned linearity of the forms as functions, but they are also linear objects in the sense that they add like vectors. Both properties were used.
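On point 1), a small sketch of what reading the components off the c = 1 level set looks like in 2D (the numbers are made up): if the covector is f(x, y) = ax + by, its f = 1 line crosses the axes at 1/a and 1/b, so the components are the reciprocals of the intercepts.

import numpy as np

# Two points known to lie on the covector's f = 1 level set (made-up intercepts):
p1 = np.array([0.5, 0.0])    # x-intercept 0.5  ->  expect a = 1/0.5 = 2
p2 = np.array([0.0, 2.0])    # y-intercept 2.0  ->  expect b = 1/2.0 = 0.5

# Solve for (a, b) from f(p1) = f(p2) = 1:
a, b = np.linalg.solve(np.vstack([p1, p2]), np.ones(2))
print(a, b)                  # 2.0 0.5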

  • @physira7551
    @physira7551 Жыл бұрын

    I never understood tensors very well, but in this video I could smell tensors cooking! Still don't know where, though!

  • @tymekbraciszewski447
    @tymekbraciszewski447 Жыл бұрын

    Really great video! One technical recommendation though: think about adding some background music - it'll make it flow more smoothly :)

  • @programmer4047
    @programmer4047 Жыл бұрын

    Can you please make a video on the proof of Laplace Expansion formula

  • @programmer4047
    @programmer4047 Жыл бұрын

    14:44 How did you calculate the gap size to be 1/√2

  • @lupucaspas
    @lupucaspas3 ай бұрын

    Covectors are, to put it simply, coordinates on a vector space: this is, in my opinion, the simplest way to think about them. In matrix notation, if vectors are given by column matrices then covectors are given by row matrices, so that transposition establishes the duality between vectors and covectors, v |-> v^T. One can readily deduce the meaning of the transpose matrix from this: (Av)^T = v^T A^T

  • @starcrosswongyl
    @starcrosswongyl7 ай бұрын

    what has the length of a covector got to do with density?

  • @satyakiguha415
    @satyakiguha41527 күн бұрын

    Wonderful

  • @Eknoma
    @Eknoma Жыл бұрын

    Yes, all of this comes directly from the transpose, so what do you mean?

  • @luuanhvu8209
    @luuanhvu8209 Жыл бұрын

    Can u make videos about Polya Enumeration Theorem ?

  • @windozetechinfo-sy7in
    @windozetechinfo-sy7in4 ай бұрын

    The covector as a measuring machine seems to be a dot product, like so: w^T v. If v = Ax, then we have w^T A x = (A^T w)^T x, and thus A^T is the transform for covectors.

  • @azap12
    @azap127 ай бұрын

    Really clarified why transposing the matrix of a linear map has to do with the matrix of the dual linear map!! Nice video, well done.

  • @gabrielnuetzi
    @gabrielnuetzi Жыл бұрын

    Very nice video about not so easy to grasp concepts: The content of this video and more on coordinate transform, dual spaces, transpose maps, kinematics with a rigorous full-concept-notation (rather than proof complete) for mechanics and robotics can be found in appendix D in DOI: 10.3929/ethz-a-010662262

  • @readingsteiner5760
    @readingsteiner57602 ай бұрын

    In my opinion, the explanations are a bit ambiguous, at least to me. I am not a native English speaker, so I don't know how other people feel, but I think what confuses me the most is the choice of very long, compound sentences, which made it hard for me to follow. I also feel like terms like "covector", "measurement", ... were not explained to me, and sometimes I have no idea what is being talked about. I really appreciate the hard work you put into this video though.

  • @readingsteiner5760

    @readingsteiner5760

    2 ай бұрын

    Okay i just watched the video a few more times. I think i should have focused more on the visualization. It is a lot easier to understand with the animation.

  • @joshuaLiddicott
    @joshuaLiddicott Жыл бұрын

    I'd never considered that a transpose might be more than a mere "matrix reflection". Thanks for another enlightening video.

  • @officiallyaninja
    @officiallyaninja Жыл бұрын

    12:18 Why do the basis vectors go back to their starting points? That should only happen if you multiply by A^-1, right? Not A^T.

  • @mathemaniac

    @mathemaniac

    Жыл бұрын

    The point of the transpose is to move the measuring device back to the original space, so that you can do the measurements in the original space. So you ask, what is the operation so that the measurements remain the same, but the basis vectors have not changed? That would be the inverse. As I said later on in the video, it only looks like inverse, but you have to treat those gridlines as covectors.

  • @isuckatthisgame
    @isuckatthisgame Жыл бұрын

    All these concepts would have been easier to learn and memorize in college if professors had invested time and energy into visualizing things for students the way you did here.

  • @mathemaniac

    @mathemaniac

    Жыл бұрын

    The thing is professors have *many* other things to do! Even if they only teach, it is impossible to do this sort of visualisation every day!

  • @MrHaggyy

    @MrHaggyy

    Жыл бұрын

    Well, profs don't need to do those every day. If a few like you, 3b1b, math sorcerer etc. cover most of the stuff, they can just link the videos and you get all the intuition you need ^^ - much better than the good old sentence "that's easy to see" after copying half a dozen blackboards in 90 min, 5x a day. I studied while videos like this were emerging and they are awesome for gaining intuition and boosting confidence in all the gaps a lecture leaves. Just recap the video and you are usually good to go to keep solving problems.

  • @fragileomniscience7647

    @fragileomniscience7647

    10 ай бұрын

    That's the job of the students to figure out. It's university, not school.

  • @thallesaraujo7814
    @thallesaraujo7814 Жыл бұрын

    Thanks for a video with great visuals. To add to it, I strongly recommend the Tensor Calculus series from the YT channel MathTheBeautiful by Prof Pavel Grinfeld. I've been studying these topics for many years and, to me, there is currently a plethora of notations and names that shuffle conventions from vector calculus, linear algebra, etc. - often making the topic very confusing. He employs a clear and unifying notation that it makes it VERY EASY to understand covectors, metrics, inverses, transposes... I cannot recommend it enough.

  • @irrelevant_noob
    @irrelevant_noob Жыл бұрын

    12:58 guess you made the script calling the 2nd vector "green" before deciding to use RGB for the 3-d axes (seen at 5:20 and later on) and thus changed to using cyan for that 2nd vector? :-)

  • @moodangelatx6580
    @moodangelatx65803 ай бұрын

    Love it

  • @user-do7kd8lp5r
    @user-do7kd8lp5r Жыл бұрын

    Tristan Needham is a legend!

  • @chrstfer2452
    @chrstfer24526 ай бұрын

    I'm only 4 minutes in and I've done this kind of math for years, and so far this has hit like 3 or 4 of those "ohhh" moments. Subbed.

  • @chrstfer2452

    @chrstfer2452

    6 ай бұрын

    And actually, pretty annoyed at myself for not having subbed before, ive watched (and remember watching!) a few of your videos before. Thanks for the content

  • @zhan-iy3ms
    @zhan-iy3ms4 ай бұрын

    The exact thing I've been looking for, the last 4 years through 😢

  • @MaxxTosh
    @MaxxTosh Жыл бұрын

    Do you have a brief explanation as to why there are more covectors than vectors in infinite dimensional spaces?

  • @christophercrawford2883

    @christophercrawford2883

    Жыл бұрын

    If you consider the space of square-integrable functions (a countably infinite-dimensional Hilbert space - for example, vectors of Fourier coefficients), the dual vectors include non-function distributions like the Dirac delta function. This is a dense, uncountable basis (one for each real number).

  • @beyond2781

    @beyond2781

    Жыл бұрын

    I believe this is a corollary of the Hahn-Banach theorem. The corollary is as follows: for a given m > 0 and a vector x_0 in the normed space, there exists a functional (covector) f such that ||f|| = m and f(x_0) = m ||x_0||.

  • @apteropith
    @apteropith2 ай бұрын

    i have difficulty conceptually separating vectors and covectors when, in nearly all vector algebra i've used, there's been zero meaningful distinction between them in practice for one: with matrix notation, whether covectors are the "row vectors" or the "column vectors" is entirely arbitrary, and likewise with "bras" and "kets" in bra-ket notation - covectors with no corresponding vector didn't seem to come up in QM, for me, but maybe i missed them - but since the algebra doesn't seem to suggest an inherent distinction on its own, it feels like weird human artifice to impose it for another: when dealing with special relativity, you can bake the metric directly into your basis elements if you don't restrict yourself to standard linear algebra, removing the metric tensor from consideration entirely, and allowing every vector and its corresponding "covector" to be fundamentally equal - this was extra useful to me, because many applications of the metric tensor are very dependent on situational "covariance" and "contravariance", and these concepts were _never_ explained in a useful manner or clearly demonstrated within the algebra itself, so it just looked highly arbitrary at worst and fudged by-eye at best baking the metric directly into the basis also allowed me to discard the cross-product in favour of an operation that makes sense and works in more than 3 dimensions, but that's a different matter (also, i have a somewhat better grasp of covariance and contravariance now, because using explicit bases instead of implicit ones seems almost mandatory in genuinely demonstrating the covariant and contravariant parts of a "basis transformation", which it turns out, when done correctly, is less a transformation than it is a fun notational trick that transforms nothing except the way things are written down)

  • @apteropith

    @apteropith

    2 ай бұрын

    expanding slightly on the second point, about the metric tensor, now that i remember it slightly better: when taking a vector-gradient of the electromagnetic 4-potential, versus taking the divergence of the resulting electromagnetic tensor, the metric tensor would switch between "covariant" and "contravariant" forms (by inverting the tensor in some capacity, i think), and most everywhere treated this like it should be absolutely obvious when and why it is doing that - not to mention how unintuitive the missing minus-signs were for one of the forms when i construct an equivalent gradient operator using geometric algebra, with the metric baked into the basis elements (their "quadratic form"), i need only understand that the basis vectors in each term are actually _multiplicatively inverse*_ (not obvious in most explanations of the gradient): we don't have x̂ d/dx, but rather 1/x̂ d/dx, for example, and the necessary signs emerge naturally from the metric thereafter and neither do i need to do anything to that operator before also applying it to the resulting electromagnetic bivector (for its divergence): the elements handle the metric _inherently_ it's much cleaner, and its easier for me to understand the math because it feels like the math better understands itself *there're some details to this i have yet to work out, because i've been very depressed and possibly the only person on the planet who cares; if i am wrong in some capacity here, the easiest way for me to understand it might be to understand how (or whether!) the gradient operator can meaningfully exist for elementary null-vectors (that is, null-vectors that aren't just a sum of elements with opposing non-null metrics) - given a null-metric shouldn't have an invertible metric tensor either, i've been assuming it cannot

  • @user-ug3gq8bk9t
    @user-ug3gq8bk9t Жыл бұрын

    Thank you for making this video. Is there any method for understanding why det A = det A^T intuitively? I think it might be possible using the idea introduced in this video, but I don't know how to proceed. 😢

  • @mathemaniac

    @mathemaniac

    Жыл бұрын

    You will have to think about what det(A^T) means. For now, I am not sure if we can think about what that really means without circular reasoning that leads to det A = det(A^T). I think the idea of an adjoint *might* help.

  • @eknight1364
    @eknight1364 Жыл бұрын

    Hello, I am a bit confused by your introduction; it looks like you are looking for a substitute for composition, which you do not mention at all. However, I really like the column-vector representation for linear forms, as their matrix expressions involve the transpose of these columns for a dot product: let µ be our measurement with vector X and t denote transposition; then µ(Y) = t(X)·Y and µ(alpha(Y)) = t(X)·A·Y = t(t(A)·X)·Y, therefore t(A)·X is the vector of our "same measurement" in the starting space, and the rank of A is not a limitation.

  • @borisborcic
    @borisborcic Жыл бұрын

    _Aren't the best transposes when they bring shivers to the diagonals of your matrix?_

  • @b43xoit
    @b43xoit Жыл бұрын

    Are we going to get Hermitian here?

  • @PasajeroDelToro
    @PasajeroDelToro Жыл бұрын

    On Earth, what is 'geographic north' (in general 3d unit vector form)? Also, what is 'magnetic north' (in gen 3d uvf)?

  • @MuhammadyusufK
    @MuhammadyusufK Жыл бұрын

    6:08 up&down or left&right?

  • @tszhanglau5747
    @tszhanglau5747 Жыл бұрын

    Mind blown

  • @DevRajyaguru-lx8pi
    @DevRajyaguru-lx8pi Жыл бұрын

    so convinced in first two minutes.....and also figured out what the key feature of the video is going to be like. Thank you so much!

  • @sparshjohri1109
    @sparshjohri1109 Жыл бұрын

    Is there a visual way of understanding the matrix of cofactors as well?

  • @christophercrawford2883

    @christophercrawford2883

    Жыл бұрын

    It decomposes the determinant into a contraction, giving A B = |A| I. The determinant can be visualized as the matrix acting on p-forms (cross or triple products). See Flanders for a beautiful introduction.
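Reading "B" here as the adjugate (the transpose of the matrix of cofactors), a brief NumPy check of the identity on an arbitrary 3x3 example: A · adj(A) = det(A) · I.

import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, -1.0],
              [0.0, 2.0, 4.0]])

def cofactor_matrix(M):
    """Matrix of cofactors: C[i, j] = (-1)^(i+j) * det(M with row i and column j removed)."""
    n = M.shape[0]
    C = np.empty_like(M)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

adj = cofactor_matrix(A).T                                   # adjugate
print(np.allclose(A @ adj, np.linalg.det(A) * np.eye(3)))    # True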

  • @narfwhals7843
    @narfwhals7843 Жыл бұрын

    You say in infinite dimensional vector spaces we have many more vectors than co-vectors. Is this true in general? I was under the impression that it is essentially arbitrary which space we consider the dual. And in infinite dimensional (complex) vector spaces we encounter another quirk. We find operators that have no Eigenvectors in either the space or the dual. They either have left or right eigenvectors. An example from quantum mechanics are the lowering operator, which only has right eigenvectors, and its adjoint, the raising operator, which only has left eigenvectors. This is a concept that I am having trouble understanding, as the characteristic polynomial should just give us the Eigenvalues and not care about left or right?

  • @mathemaniac

    @mathemaniac

    Жыл бұрын

    Many more *covectors* than vectors (the other way around). See the description for the explanation. There are more covectors than vectors precisely when the vector space is infinite-dimensional. How do you have the characteristic polynomial in infinite dimensions?

  • @narfwhals7843

    @narfwhals7843

    Жыл бұрын

    @@mathemaniac Again in QM we have the state space with elements |x> and the dual space with the

  • @mathemaniac

    @mathemaniac

    Жыл бұрын

    @@narfwhals7843 That's not all the duals in the infinite-dimensional case. If we go back to wavefunctions in 1 dimension (to make things a bit more concrete), then the function psi --> psi(0) is a perfectly good dual [it is linear, and it sends a wavefunction to a number], but you can't write it in terms of ⟨phi, ·⟩ for any (square-integrable) wavefunction phi.

  • @christophercrawford2883

    @christophercrawford2883

    Жыл бұрын

    The ladder operators are defective matrices in finite-dimensional spaces (thus missing eigenvectors), but not in infinite spaces, because it takes infinitely many operations to kill arbitrarily excited states, one level at a time. See the comment above on Hilbert spaces about the different sizes of spaces.

  • @intipatsa9776
    @intipatsa9776 Жыл бұрын

    i don't really like the neighbor's measuring device analogy. i feel like there should have been something more intuitive.

  • @jacob_90s

    @jacob_90s

    2 ай бұрын

    Yeah, that made no effing sense at all

  • @1495978707
    @1495978707 Жыл бұрын

    16:15 Hmmm, I think this is why robot kinematics uses the Jacobian transpose as an approximation to the inverse.

  • @APaleDot

    @APaleDot

    Жыл бұрын

    18:57 is more to the point I think. In robotics you are generally dealing with rotations, in which case the transpose really _is_ the inverse.

  • @YitzhakDayan
    @YitzhakDayan Жыл бұрын

    A video on eigenvalues and eigenvectors would be amazing. You are always taught how to calculate them, and some formulas where they pop up, say in statistics, but their meaning is never explained well, in my opinion.

  • @shridharsarraf2188
    @shridharsarraf21886 ай бұрын

    Bro, my professor in the 4th year of engineering told us that even the adjoint and the adjugate of a matrix aren't the same thing. I mean, he explained the difference but I didn't understand. Make a video on that also.

  • @Ivan-mp6ff
    @Ivan-mp6ff15 күн бұрын

    I always wondered why linear algebra cannot be illustrated using real-life examples with numbers that represent real objects, such as how a tradesperson would make measurements and then transpose those measurements for other construction work. For instance, a 2×2 matrix as two restaurants selling two different meals - when its transpose or its inverse is required, what do those changes mean with respect to the restaurants and the meals? Hope I have made myself reasonably clear. When too much math jargon is used, it can become a lot of "hand waving" and a novice like me will get lost. Thank you.

  • @KangJangkrik
    @KangJangkrik Жыл бұрын

    You know what? 8ᵀ = ∞ If 8 is a matrix of pixels, instead of literal number

  • @christophercrawford2883
    @christophercrawford2883 Жыл бұрын

    Nice animations! And nice first part on 'moving the scale'. I never realized the connection between the adjoint and pull-back until this video. I hoped to see the dual vectors as rows, not columns, and expected a description of the relation between covector and dual to involve the metric, not as an afterthought. In fact, it was right there the whole time: ax+by. With both of these, you could already see the transpose in converting duals to vectors, and thus the transpose of operators also. Finally, this geometric interpretation could help clarify that the adjoint is only the transpose for the trivial metric (but always for duals).

  • @forheuristiclifeksh7836
    @forheuristiclifeksh78362 ай бұрын

    10:40 covector