Is the Future of Linear Algebra.. Random?

The machine learning consultancy: truetheta.io
Want to work together? See here: truetheta.io/about/#want-to-w...
"Randomization is arguably the most exciting and innovative idea to have hit linear algebra in a long time." - First line of the Blendenpik paper, H. Avron et al.
Follow up post: truetheta.io/concepts/linear-...
SOCIAL MEDIA
LinkedIn: / dj-rich-90b91753
Twitter: / duanejrich
Github: github.com/Duane321
SUPPORT
/ mutualinformation
SOURCES
Source [1] is the paper that caused me to create this video. [3], [7] and [8] provided a broad and technical view of randomization as a strategy for NLA. [9] and [12] informed me about the history of NLA. [2], [4], [5], [6], [10], [11], [13] and [14] provide concrete algorithms demonstrating the utility of randomization.
[1] Murray et al. Randomized Numerical Linear Algebra. arXiv:2302.11474v2, 2023.
[2] Melnichenko et al. CholeskyQR with Randomization and Pivoting for Tall Matrices (CQRRPT). arXiv:2311.08316v1, 2023.
[3] P. Drineas and M. Mahoney. RandNLA: Randomized Numerical Linear Algebra. Communications of the ACM, 2016.
[4] N. Halko, P.-G. Martinsson, and J. Tropp. Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions. arXiv:0909.4061v2, 2010.
[5] Tropp et al. Fixed-Rank Approximation of a Positive-Semidefinite Matrix from Streaming Data. NeurIPS Proceedings, 2017.
[6] X. Meng, M. Saunders, and M. Mahoney. LSRN: A Parallel Iterative Solver for Strongly Over- or Underdetermined Systems. SIAM Journal on Scientific Computing, 2014.
[7] D. Woodruff. Sketching as a Tool for Numerical Linear Algebra. IBM Research Almaden, 2015.
[8] M. Mahoney. Randomized Algorithms for Matrices and Data. arXiv:1104.5557v3, 2011.
[9] G. Golub and H. van der Vorst. Eigenvalue Computation in the 20th Century. Journal of Computational and Applied Mathematics, 2000.
[10] J. Duersch and M. Gu. Randomized QR with Column Pivoting. arXiv:1509.06820v2, 2017.
[11] Erichson et al. Randomized Matrix Decompositions Using R. Journal of Statistical Software, 2019.
[12] J. Gentle et al. Software for Numerical Linear Algebra. Springer, 2017.
[13] H. Avron, P. Maymounkov, and S. Toledo. Blendenpik: Supercharging LAPACK's Least-Squares Solver. SIAM Journal on Scientific Computing, 2010.
[14] M. Mahoney and P. Drineas. CUR Matrix Decompositions for Improved Data Analysis. Proceedings of the National Academy of Sciences, 2009.
TIMESTAMPS
0:00 Significance of Numerical Linear Algebra (NLA)
1:35 The Paper
2:20 What is Linear Algebra?
5:57 What is Numerical Linear Algebra?
8:53 Some History
12:22 A Quick Tour of the Current Software Landscape
13:42 NLA Efficiency
16:06 RandNLA's Efficiency
18:38 What is NLA doing (generally)?
20:11 RandNLA Performance
26:24 What is NLA doing (a little less generally)?
31:30 A New Software Pillar
32:43 Why is RandNLA Exceptional?
34:01 Follow Up Post and Thank You's

Comments: 372

  • @charilaosmylonas5046 · a month ago

    Great video! I want to add a couple of references to what you mentioned in the video related to neural networks: 1. Ali Rahimi got the NeurIPS 2017 "test of time" award for a method called random kitchen sinks (a kernel method with random features). 2. Choromanski (from Google) made a variation of this idea to alleviate the quadratic memory cost of self-attention in transformers (which also works like a charm; I tried it myself, and I'm still perplexed that it didn't become one of the main efficiency improvements for transformers). Check "Rethinking Attention with Performers". Thank you for the great work on the video, keep them coming please! :) (A minimal sketch of the random-features idea appears after this thread.)

  • @howuhh8960 · 29 days ago

    It didn't, because all the efficient variants have significantly worse performance on retrieval tasks (associative recall, for example), as recent papers have demonstrated.

  • @Arithryka · 25 days ago

    The Quadratic Memory Cost of Self-Attention in Transformers is my new band name
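
Since this thread brings up random kitchen sinks: here is a minimal NumPy sketch of Rahimi and Recht's random Fourier features, the method behind that award (the function name and parameters here are illustrative, not from the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, n_features=4096, sigma=1.0):
    """Features whose inner products approximate the RBF kernel
    k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(d, n_features))  # samples of the kernel's spectral density
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)        # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(5, 3))
Z = random_fourier_features(X)
approx = Z @ Z.T                                            # randomized kernel estimate
exact = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 2)  # exact RBF kernel, sigma = 1
print(np.abs(approx - exact).max())                         # small; shrinks as n_features grows
```

Performer-style attention applies the same trick, with features built to approximate the softmax kernel instead.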

  • @octavianova1300 · a month ago

    reminds me of that episode of veggie tales when larry was like "in the future, linear algebra will be randomly generated!"

  • @NoNameAtAll2 · a month ago

    W E E D E A T E R

  • @rileymurray7437 · a month ago

    Reminds you of what???

  • @jedediahjehoshaphat · a month ago

    xD

  • @Godfather-qr6ej · 29 days ago

    I thought it would be some nice science show, but it turns out to be some kids show : (

  • @notsojharedtroll23 · 28 days ago

    @rileymurray7437 he means this video: kzread.info/dash/bejne/nGiEypJrl93Xnbw.htmlsi=wb2atwfoSQaefrjL

  • @BJ52091 · a month ago

    As a mathematician specializing in probability and random processes, I approve this message. N thumbs up where N ranges between 2.01 and 1.99 with 99% confidence!

  • @Mutual_Information · a month ago

    Great to have you here!

  • @purungo · 28 days ago

    So you're saying there's a 1 chance in roughly 10^16300 that you're giving him 3 thumbs up...

  • @frankjohnson123 · 26 days ago

    My brother in Christ, use a discrete probability distribution.

  • @nile6076 · 26 days ago

    Only if you assume a normal distribution! @purungo

  • @sylv256 · 24 days ago

    Is this just one big late April Fools'? What the hell

  • @laurenwrubleski7204 · 29 days ago

    As a developer at AMD I feel somewhat obligated to note we have an equivalent to cuBLAS called rocBLAS, as well as an interface layer hipBLAS designed to compile code to make use of either AMD or NVIDIA GPUs.

  • @sucim · 26 days ago

    But can your cards train ImageNet without crashing?

  • @389martijn · 25 days ago

    @sucim sheeeeeeeeesh

  • @johnothwolo · 25 days ago

    Are you guys hiring?

  • @Zoragna · 25 days ago

    OP forgot that BLAS is a standard, so most implementations have been forgotten; it's odd to single out NVIDIA.

  • @cannaroe1213 · 25 days ago

    As an AMD customer who recently bought a 6950XT for €600, I am disappointed to learn rocBLAS is not supported on my outdated 2-year-old hardware.

  • @TimL_ · a month ago

    The part about matrix multiplication reminded me of studying cache hit and miss patterns in university. Interesting video.

  • @charlesloeffler333 · 29 days ago

    Another tidbit about LINPACK: one of its major strengths at the time it was written was that all of its double-precision algorithms were truly double precision. At that time, other packages often had double-precision calculations hidden within the single-precision routines, whereas their double-precision counterparts did not have quad-precision parts anywhere inside. The LINPACK folks were extraordinarily concerned about numerical precision in all routines. It was a great package. It also provided the basis for MATLAB.

  • @scottmiller2591 · a month ago

    Brunton, Kutz et al., in the paper you mentioned here, "Randomized Matrix Decompositions Using R," recommended using Nathan Halko's algo, developed at the CU math department. B&K give some timing data, but the time and memory complexity were already computed by Halko, and he had implemented it in MATLAB for his paper; B&K ported it to R. Halko's paper from 2009, "Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions," laid this all out 7 years before the first draft of the B&K paper you referenced. Halko's office was a mile down the road from me at that time, and I implemented Python and R code based on his work (it was used in medical products, and my employer didn't let us publish). It does work quite well. (A minimal sketch of the approach appears at the end of this thread.)

  • @Mutual_Information · a month ago

    Very cool! The more I researched this, the more I realized the subject was deeper (older, too) than I had realized from the first few papers I read. It's interesting to hear your on-the-ground experience of it, and I'm glad the video got your attention.

  • @ajarivas72 · 17 days ago

    @Mutual_Information Has anyone tried genetic algorithms instead of purely random approaches? In my experience, genetic algorithms are 100 times faster than Monte Carlo simulations at reaching an optimum.
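
For readers of this thread, a minimal NumPy sketch of the Halko-style randomized SVD being discussed (parameter names are mine; see [4] for the real treatment, including re-orthonormalization between power iterations):

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_svd(A, k, oversample=10, power_iters=2):
    """Sample A's range with a random test matrix, orthonormalize,
    then run a small exact SVD inside that low-dimensional subspace."""
    Omega = rng.normal(size=(A.shape[1], k + oversample))  # random test matrix
    Y = A @ Omega                                          # rough capture of A's range
    for _ in range(power_iters):                           # sharpens spectral decay
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                                 # orthonormal range basis
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

A = rng.normal(size=(2000, 30)) @ rng.normal(size=(30, 1000))  # exactly rank 30
U, s, Vt = randomized_svd(A, k=30)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))    # tiny relative error
```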

  • @danielsantiagoaguilatorres9973 · a month ago

    I'm writing a paper on a related topic. Didn't know about many of these papers, thanks for sharing! I really enjoyed your video

  • @richardyim8914 · 27 days ago

    Golub and Van Loan’s textbook is goated. I loved studying and learning numerical linear algebra for the first time in undergrad.

  • @pietheijn-vo1gt · a month ago

    I have seen a very similar idea in compressed sensing. In compressed sensing we also use a randomized sampling matrix, because the errors can be treated as white noise. We can then use a denoising algorithm to recover the original data. In fact, I know Philips MRI machines use this technique to speed up scans, because you have to take fewer pictures. Fascinating.

  • @tamineabderrahmane248 · 29 days ago

    random sampling to reconstruct the signal

  • @pietheijn-vo1gt · 28 days ago

    @tamineabderrahmane248 ... what?

  • @MrLonelyrager · 26 days ago

    Compressed sensing is also useful for wireless communications. I studied its usage for sampling ultra-wideband signals and indoor positioning. It only works accurately under certain sparsity assumptions. In MRI scans, the "Fourier transform" of the image can be considered sparse, so we can use l1 denoising algorithms to recover the original signal.

  • @pietheijn-vo1gt · 25 days ago

    @MrLonelyrager Yes, correct, that's exactly what I used: ISTA (iterative shrinkage and thresholding) algorithms and their many (deep-learning) derivatives.
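
Since ISTA keeps coming up in this thread, a minimal sketch of it on a compressed-sensing toy problem (the regularization weight and problem sizes are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def ista(A, y, lam=0.05, iters=2000):
    """Minimize 0.5 * ||Ax - y||^2 + lam * ||x||_1 by iterating
    a gradient step on the smooth part, then a soft threshold."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2                      # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0)  # soft threshold
    return x

n, m, k = 200, 60, 5                                # recover a 5-sparse length-200 signal
x_true = np.zeros(n)                                # from only 60 random measurements
x_true[rng.choice(n, k, replace=False)] = 3 * rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)            # random sensing matrix
x_hat = ista(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small relative error
```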

  • @zyansheep · a month ago

    Dang, I absolutely love videos and articles that summarize the latest in a field of research and explain the concepts well!

  • @charlesity · 24 days ago

    As always, this is BRILLIANT. I've been following your videos since I saw the GP regression one. Great content! Thank you very much.

  • @makapaka8247 · a month ago

    I'm finally far enough in education to see how well made your stuff is. Super excited to see a new one from you. Thanks for expanding people's horizons!

  • @Mutual_Information · a month ago

    Glad to have you watching!

  • @KipIngram · 29 days ago

    Fascinating. Thanks very much for filling us in on this.

  • @noahgsolomon · 27 days ago

    You discussed all the priors incredibly well. I didn’t even understand the premise of random in this context and now I leave with a lot more. Keep it up man ur videos are the bomb

  • @marcegger7411 · 26 days ago

    Damn... your videos are getting beyond excellent!

  • @aleksszukovskis2074 · 29 days ago

    It's always a pleasure to watch this channel.

  • @bn8ws · 20 days ago

    Outstanding content, instant sub. Keep up the good work!

  • @bluearctik3980 · 25 days ago

    My first thought was "this is like journal club with DJ"! Great stuff - well researched and crisply delivered. More of this, if you please.

  • @AjaniTea · 10 days ago

    This is a world class video. Thanks for posting this and keep it up!

  • @piyushkumbhare5969 · 27 days ago

    This is a really well made video, nice!

  • @from_my_desk · 27 days ago

    thanks a ton! this was eye-opening 😊

  • @jondor654 · 26 days ago

    Lovely type, great clarity.

  • @braineaterzombie3981 · 25 days ago

    This is exactly what i needed. Subscribed

  • @deltaranged · a month ago

    It feels like this video was made to match my exact interests, LOL. I've been interested in NLA for a while now, and I've recently studied more "traditional" randomized algorithms in uni for combinatorial tasks (e.g. Karger's min-cut). It's interesting to see how they've recently found ways to combine the two paradigms. I'm excited to see where this field goes. Thanks for the video and for introducing me to the topic!

  • @Rockyzach88 · 29 days ago

    YouTube has you in its palms. _laughs maniacally_

  • @Sino12 · 26 days ago

    where do you study?

  • @Stephen_Kelley · 19 days ago

    Excellent video, really well paced.

  • @JonathanPlasse · 18 days ago

    Awesome presentation, thank you!

  • @mgostIH · 29 days ago

    I started reading this paper when you mentioned it on Twitter, forgot it was you I got it from, and was so happy to see a video about it!

  • @Mutual_Information · 29 days ago

    Yes! And good to see you here mgost

  • @JoeBurnett · 26 days ago

    You are an amazing teacher! Thank you for explaining the topic in this manner. It really motivates me to continue learning about all things linear algebra!

  • @AlexGarel-xr9ri · 23 days ago

    Incredible video with very good animations and script. Thank you!

  • @tiwiatg2186 · 10 days ago

    Loving it loving it loving it!! Amazing video, amazing topic 👏

  • @billbez7465 · 10 days ago

    Amazing video with great presentation. Thank you

  • @moisesbessalle · a month ago

    Amazing video!

  • @gaussology · 22 days ago

    Wow, so much research went into this! It makes me even more motivated to read papers and produce videos 😀

  • @EkShunya · a month ago

    Been a while since your last post, thanks. Please post more often; I like what you make.

  • @wiktorzdrojewski890 · 27 days ago

    This feels like a good presentation topic for a numerical methods seminar.

  • @tantzer6113 · 29 days ago

    I enjoyed this video. Thank you.

  • @ernestoherreralegorreta137 · 25 days ago

    Amazing explanation of a complex topic! You've got yourself a new subscriber.

  • @Mutual_Information · 24 days ago

    Glad to have you!

  • @Otakutaru · 16 days ago

    Adequate density of new information, and sublime narrative. Also, on-point visuals.

  • @pygmalionsrobot1896 · 29 days ago

    Whoa - very cool stuff !!

  • @CyberBlaster-fu2dz · a month ago

    Great video, thank you!

  • @vNCAwizard · 20 days ago

    An excellent presentation.

  • @Pedritox0953 · a month ago

    Great video!

  • @iamr0b0tx · a month ago

    This is a really good video 💯

  • @lbgstzockt8493 · a month ago

    Very good video on a very interesting topic. Who would have thought that there is this much to gain in such a commonly used piece of mathematics.

  • @oceannuclear · 22 days ago

    Oh my god, this forms a small part of my PhD thesis, where I've been trying to understand LAPACK's advantages/disadvantages when it comes to inverting matrices. Having this video really helps me put things into context! Thank you very much for making this!

  • @broccoli322 · a month ago

    Great stuff

  • @antiguarocks · 8 days ago

    Reminds me of what my high school maths teacher said about being able to assess product quality on a production line with high accuracy by only sampling a few percent of the product items.

  • @michaeln.8185 · a month ago

    Great video! Thank you for producing this!

  • @MachineLearningStreetTalk · 29 days ago

    Great video brother! 😍

  • @Mutual_Information · 29 days ago

    Thank you MLST! You're among a rare bunch providing takes on AI/ML that aren't hyped or otherwise crazy, so it means a lot coming from you.

  • @DavidS-ji6qv · 23 days ago

    Phenomenal video

  • @EE-wo5ty · a month ago

    The quality of this editing is top-notch, congratulations!!!

  • @hozaifas4811 · a month ago

    We need more content creators like you ❤

  • @Mutual_Information · a month ago

    Thank you. These videos take a while, so I wish I could upload more often. But I'm confident I'll be doing YouTube for a long time.

  • @hozaifas4811 · a month ago

    @Mutual_Information Well, this news made my day!

  • @the_master_of_cramp · 29 days ago

    Great and clear video! Makes me want to study more numerical LA, combined with probability theory, because it shows how inefficient many of the algorithms currently in use likely are, and that randomized algorithms are usually much faster while being approximately correct. Those randomized algorithms can basically be used whenever we don't need to be 100% sure about the result, which is essentially always: our mathematical models are only approximations of what's going on in the world and are thus inaccurate anyway, and, as you mentioned, if data is used, it's noisy.

  • @chakrasamik · 27 days ago

    Excellent ❤

  • @ihatephysixs · 22 days ago

    Awesome video

  • @scottmiller2591 · a month ago

    This was a nice walk down memory lane for me, and a good introduction for the beginner. It's nice to see SWE getting interested in these techniques, which have a very long history (like solving finite elements with diffusion decades ago, and compressed sensing). I enjoyed your video. A few notes:

    - It's useful to note that "random" projections started out as Gaussian, but it turns out very simple, in-memory transformations let you use binary random numbers at high speed with little to no loss of accuracy. I think you had this in mind when talking about the random matrix S in sketch-and-solve.
    - BLAS sounds like "blast," but without the t. I'm sure there are people who pronounce it like "blahs." Software engineers mangle the pronunciation of everything, including other SWE packages (looking at you, Ubuntu users). However, the first pronunciation is the one I have always heard in the applied linear algebra field.
    - FORTRAN doesn't end like "fortune," but rather ends with "tran," though maybe people pronounce "fortran" (uncapitalized) that way these days, IDK (see note above re: mangling; FORTRAN has been decapitalized since I started working with it).
    - Cholesky starts with a hard "K" sound, which is the only pronunciation you'll ever hear in NLA and linear algebra. It certainly is the way Cholesky pronounced it. Me, I always pronounce NumPy to sound like "lumpy" just to tweak people, even though I know better ☺. I've always pronounced CQRRPT as "corrupt," too, but that's because that's what the acronym looks like (my eyes are bad).
    - One way to explain intuitively how these work is to look at PCA, similar to what you touched on with the illustration of covariance. If you know the rank is low, there will be, say, k large PCA directions, and the rest will be small. If you perform a random projection of the data, those large directions will almost certainly show up in your projections, with the remaining PCA directions certainly being no bigger than they were originally (projection is always non-expanding). This means the random projections will still contain large components of the strong PCA directions, and you only need to take enough random projections to avoid being unlucky enough to be very nearly normal to the strong PCA directions every time. The odds of being unlucky go down with every random projection you add. You'd have to be very unlucky to take photos of a stick from random directions and have every photo of the stick be taken end-on; in most photos it will look like a stick, not a point. Similarly, a photo of a piece of paper from a random direction will look like a distorted rectangle, not a line segment. It's one case where the curse of dimensionality actually works in your favor: several random projections almost guarantee they won't all be projections to an object that's the thickness of the paper. (A small numerical demo of this appears after this thread.)

    I've been writing randomized algos for a long time (I have had arguments with engineers about how random SVD couldn't possibly work!), and I love seeing random linear algebra libraries that are open and unit-tested. I agree with your summary: a good algorithm is worth far more than good hardware. Looking forward to you tracking new developments in the future.

  • @Mutual_Information · a month ago

    This is the real test of a video: when an expert watches it and, with some small corrections, agrees that it gets the bulk of the message right. It's a reason I try to bring in a subject matter expert where I can. So I'm quite happy to have covered the topic appropriately in your view. (It's also a relief!) And I also wish I had thought of the analogy: "You'd have to be very unlucky to take a photo of a stick from random directions, and have every photo of the stick be taken end-on. In most photos, it will look like a stick, not a point." I would have included that if I had thought of it!

  • @scottmiller2591 · 29 days ago

    @Mutual_Information Agree absolutely!

  • @rileyjohnmurray7568 · 29 days ago

    Jim Demmel and Jack Dongarra pronounced it "blahs" the last time I spoke with each of them. (~This morning and one month ago, respectively.) 😉

  • @Mutual_Information · 29 days ago

    @rileyjohnmurray7568 lol

  • @scottmiller2591 · 29 days ago

    @rileyjohnmurray7568 I hope they perk up ☺
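
A small numerical rendering of the stick-photo intuition above, using nothing beyond NumPy: data with three strong directions keeps those directions after a handful of random projections:

```python
import numpy as np

rng = np.random.default_rng(0)

# 300-dimensional data whose variance lives in 3 strong directions, plus small noise
X = rng.normal(size=(2000, 3)) @ rng.normal(size=(3, 300)) \
    + 0.01 * rng.normal(size=(2000, 300))

S = rng.normal(size=(300, 10)) / np.sqrt(10)             # 10 random projections
Y = X @ S

print(np.linalg.svd(X, compute_uv=False)[:5].round(1))   # 3 large singular values, then tiny
print(np.linalg.svd(Y, compute_uv=False)[:5].round(1))   # the 3 strong ones survive projection
```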

  • @TrungHieuTu · 29 days ago

    Very useful, thanks

  • @catcoder12 · 29 days ago

    anotha banger by DJ

  • @DawnOfTheComputer · 15 days ago

    The math presentation and explanation alone was worth a sub, let alone the interesting topic.

  • @StratosFair · 23 days ago

    As a grad student in theoretical machine learning, I have to say I'm blown away by the quality of your content. Please keep videos like these coming!

  • @ryanjkim · a day ago

    Really great, thank you.

  • @Geenimetsuri · 6 hours ago

    I understood this. Thank you, great education!

  • @Mutual_Information · 3 hours ago

    That's a win!!

  • @General12th · a month ago

    Hi DJ! I love improvements in algorithmic efficiency.

  • @metromap9618 · 29 days ago

    great video!

  • @pr0crastinatr · 26 days ago

    Another neat explanation for why the randomized least-squares problem works is the Johnson-Lindenstrauss lemma. That lemma states that most vectors don't change length much when you multiply them by a random Gaussian matrix, so the norm of S(Ax - b) is within (1-eps) to (1+eps) of the norm of Ax - b with high probability.
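
A minimal sketch-and-solve demo of that claim (a dense Gaussian sketch is the slow-but-simple choice used here; practical solvers use fast transforms like the SRHT instead):

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 10_000, 50
A = rng.normal(size=(m, n))
b = A @ rng.normal(size=n) + rng.normal(size=m)   # noisy overdetermined system

x_full = np.linalg.lstsq(A, b, rcond=None)[0]     # classical least squares

d = 500                                           # sketch size: small vs m, large vs n
S = rng.normal(size=(d, m)) / np.sqrt(d)          # Gaussian sketching matrix
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]

# the sketched solution's residual is within a few percent of optimal
print(np.linalg.norm(A @ x_sketch - b) / np.linalg.norm(A @ x_full - b))
```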

  • @johannguentherprzewalski · 23 days ago

    Very interesting content! I did find that the video felt longer than expected. I was intrigued by the thumbnail and the promise of at least 10x speed improvement. However, it took quite a while to get to the papers, and even longer to get to the explanation. The history definitely deserves its own video, and most chapters could be much shorter.

  • @minsookim-ql1he · 19 days ago

    This is very interesting

  • @Ohmriginal722 · 29 days ago

    Whenever randomness is involved, you've got me wanting to use analogue processors for fast, low-power processing.

  • @ShivaTD420 · 18 days ago

    If you take white noise and put a filter on it, you can produce every note, because every tone and semitone is in the noise.
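
That claim is easy to check numerically; a minimal SciPy sketch (the filter order and band are arbitrary choices):

```python
import numpy as np
from scipy.signal import butter, lfilter

rng = np.random.default_rng(0)
fs = 44_100                                        # sample rate in Hz
noise = rng.normal(size=fs)                        # one second of white noise

b, a = butter(4, [435, 445], btype="band", fs=fs)  # narrow band around A440
tone = lfilter(b, a, noise)                        # the filtered noise "plays" that note

freqs = np.fft.rfftfreq(fs, 1 / fs)
print(freqs[np.abs(np.fft.rfft(tone)).argmax()])   # spectral peak lands near 440 Hz
```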

  • @h.b.1285 · 27 days ago

    Excellent video! This topic is not easy for the layperson (admittedly, the layperson who likes linear algebra), but it was clear and very well structured.

  • @wafikiri_ · 14 days ago

    The first program I fed a computer was one I wrote in FORTRAN IV. It almost exhausted the memory capacity of the IBM machine, which was about 30 KB for the user (it used memory overlays, which we'd call banked memory today, in order not to exceed the available memory for programs).

  • @rainaldkoch9093 · 18 days ago

    Thanks!

  • @user-le1ho7sl7h · 3 days ago

    I once used random matrices for eigenvalue counts on intervals, and it was amazing! Di Napoli, E., Polizzi, E., & Saad, Y. (2016). Efficient estimation of eigenvalue counts in an interval. Numerical Linear Algebra with Applications, 23(4), 674-692.

  • @maxheadrom3088 · 7 days ago

    Nice video! Nice channel! The complicated part isn't multiplying ... it's inverting!

  • @psl_schaefer · 29 days ago

    As always great (very educative) content. I very much appreciate all the work you put into those videos!

  • @HelloWorlds__JTS · 12 days ago

    Phenomenal! But I have one correction for (25:33): Full rank isn't restricted to square [invertible] matrices, it just means rank = min(m,n) rather than rank = k < min(m,n).

  • @pythonguytube · 22 days ago

    Worth pointing out that there is a modern sparse linear algebra package called GraphBLAS, which can be used not just for graphs (which generalize to sparse matrices) but also for any sparse matrix multiplication operation.
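
GraphBLAS has its own API; as a stand-in, the graph-as-sparse-matrix idea it generalizes looks like this in SciPy (not GraphBLAS itself):

```python
import numpy as np
from scipy.sparse import csr_matrix

# directed graph with edges 0->1, 1->2, 2->0, 0->2 as a sparse adjacency matrix
rows, cols = [0, 1, 2, 0], [1, 2, 0, 2]
A = csr_matrix((np.ones(4), (rows, cols)), shape=(3, 3))

# (A @ A)[i, j] counts length-2 paths from i to j; GraphBLAS generalizes exactly
# this multiplication to other semirings (min-plus for shortest paths, etc.)
print((A @ A).toarray())
```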

  • @nandanshettigar7261 · 19 days ago

    Another beautiful global optima of priceless information to pull me out of my local tunnels :) Thank you as always

  • @rr00676 · 22 days ago

    I've been hoping advances in probabilistic numerics and random matrix theory would bring PGMs some love. Computing matmuls/inverses every iteration of MCMC makes me sad :(. As expected, great video!

  • @RealUniquee · 21 days ago

    Simply phenomenal if it gets implemented in deep learning frameworks.

  • @u2b83 · 23 days ago

    I tripped across the integer relation algorithm at 15, when I wrote a calculator program to work out how much change you put on the scale just from the total weight. Thanks to this video (the top-10 problems list), I finally know what that's called. Until now I called it the "primeness of unique coin weights" lol

  • @MariusKavaliauskas · 29 days ago

    Very informative video, and I will be waiting for more. I've been hooked on randomized linear algebra since Ewin Tang's "dequantization" papers. I wonder if randomized algos will have a huge impact on ML training performance (not just inference). I also wonder how they will compare in performance and accuracy: low-rank approximations of ML models vs. randomized inference on full models.

  • @damondanieli · 29 days ago

    Great video! One thing: “processor registers” not “registries”

  • @Mutual_Information · 29 days ago

    I know.. lol damn it

  • @jamesedwards6173 · 24 days ago

    lol, I caught that same thing.

  • @mohammedbelgoumri · a month ago

    No better way to start the day than with an MI upload 🥳

  • @Mutual_Information · a month ago

    Thank you, love hearing that!

  • @janni7439 · 17 days ago

    There are also other approaches where you choose an epsilon and reduce the complexity of the problem, as in hierarchical matrices.

  • @nikita_x44 · 29 days ago

    The linearity at 4:43 is a different linearity. Linear functions in the sense of linear algebra must always pass through (0,0).

  • @sufyanali3992 · 22 days ago

    I thought so too, the 2D line shown on the right is an affine function, not a linear function in the rigorous sense.

  • @KepleroGT · 13 days ago

    Yep, otherwise the linearity of addition and multiplication, which he just skipped over, wouldn't apply, and these wouldn't be linear functions; or rather, the correct term is linear map/transformation. Example: for F(x,y,z) = (2x+y, 3y, z+5), writing F(0,0,0) = (0,0,0) is incorrect because F(0,0,0) = (0,0,5). The intent is to preserve the linearity of these operations so they can be applied similarly. If I want 2+2 or 2*2, I can't have 5.
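
In symbols, the distinction this thread is drawing:

```latex
% T is linear iff it preserves addition and scaling, which forces T(0) = 0:
T(\alpha u + \beta v) = \alpha\,T(u) + \beta\,T(v)
\quad\Longrightarrow\quad T(0) = 0.
% A line f(x) = m x + c with c \neq 0 fails this (f(0) = c), so it is affine, not linear.
```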

  • @RepChris · a day ago

    Of course i get this in my recommended a few days after my first numerical analysis lecture

  • @RepChris · a day ago

    Which is a course I picked up (it's semi-required), since it seems like a very useful thing to understand properly, even though I am not the best at advanced linear algebra and have PTSD from a previous professor (I get a visceral reaction every time I see an epsilon), both of which are integral to most of the course.

  • @user-qp2ps1bk3b · 29 days ago

    very nice!

  • @user-gv6fn6yt2u · 22 days ago

    It's really mind-blowing how random numbers can make something this fast.

  • @Patashu · 29 days ago

    This has been my thought about deep learning for a while now: we build computers to be deterministic, but deep learning would run best on a different kind of computer, one that is lossy but, as a tradeoff, much more energy efficient. This is a different take, though (keep determinism, but deliberately code faster, lossy algorithms), that could also do the job.

  • @nigelrhodes4330 · 23 days ago

    We can think of this as the "Battleship" problem: the least number of guesses to take out a fleet. I was shown this in primary school when I was taught the value of estimation.

  • @jeffreychandler8418 · a month ago

    This is something I've really been wondering about at a general level: how to add pinches of randomness to improve inference, simulations, etc. I personally wonder how we could use it to improve model accuracy by specifically predicting error and then building in a stochastic prediction. Might be a big change in ML and neural nets.

  • @tanithrosenbaum · 25 days ago

    "They're quite good" - Understatement of the decade 😄

  • @DocM221 · 17 days ago

    I've been through some basic linear algebra courses, but the covariance problem really struck me as obvious to a statistician. A statistician would never go and sample everybody; they would first determine how accurate they needed to be in their certainty, and then go about sampling exactly the number of people that satisfies that equation. I actually had to do this in my job! I can totally see how this will be a great tool, used with data prediction and maybe hardware accelerators, to make MASSIVE gains. We are in for a huge wild ride! Thanks for the video!
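
The equation being alluded to is the standard sample-size bound for estimating a mean to within margin E at confidence level 1 - α:

```latex
n \;\ge\; \left(\frac{z_{\alpha/2}\,\sigma}{E}\right)^{2}
% e.g. sigma = 10, E = 1, 95% confidence (z_{0.025} = 1.96)  =>  n >= 385
```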

  • @williamchen6099 · 26 days ago

    Wow, this is an amazingly well-produced and scripted video, delivered perfectly. How long did it take you to plan and execute it?

  • @Mutual_Information · 26 days ago

    I was working on it since November, mostly on the weekends and sometimes in the evenings. I'd guess it took me over 150 hours. The stages are reading research, script writing, creating the on screen animations, re-writing the script with feedback (e.g. from Riley here), shooting the video, editing it, adding music, cleaning it up, sharing the video for feedback. It takes a lot longer than I like to admit.

  • @cannaroe1213 · 25 days ago

    Nearly 7 years ago, when I was still a practicing geneticist, sequenced DNA reads would usually only be a few nucleotides long, maybe 50, and would have to get mapped to a genome with billions of possible locations to test. The fastest algorithms ended up being used in the most published papers, so competition to be the fastest was pretty fierce. The gold standard was a deterministic program called BWA/Bowtie, but just before I left the field a new breed of non-deterministic aligners with mapping times orders of magnitude faster was developed, and it really split opinions. Different deterministic programs would give different results (i.e. they had noise/error too, even if they were consistent about it), so in many ways who cared if a program gave different results every time you ran it, particularly if you only intend to run it once... But there were other problems: you couldn't create definitive analyses anymore, you couldn't retrace someone else's steps, you couldn't rely on checksums. Total nightmare. The "hidden structures" aspect of the paper was interesting: the structures are in the data, and in how the algorithm interacts with the data, which as the programmer you don't have access to by definition, but you also kinda know all you need to know about it too. It feels very similar to making a good meme.

  • @baptiste-genest · a month ago

    Great video! I had a compressive sensing class this semester; it sure is a very interesting and promising topic of research! But I'm not sure that video games would benefit a lot from it. If I understood correctly, the gains are mostly in high dimensions, while video game linear algebra is basically only 3D. Do you have examples? Thanks again!

  • @Mutual_Information · a month ago

    Thank you! My take is that that's only true in a certain representation. E.g. when a dimension refers to a specific pixel, the dimensions are quite high.

  • @bccvtcorrea · 29 days ago

    Wow. Great video

  • @midou6104 · 17 days ago

    Okay, objectively, that's the hardest thing in linear algebra I've ever seen.