2021's Biggest Breakthroughs in Math and Computer Science

Science & Technology

It was a big year. Researchers found a way to idealize deep neural networks using kernel machines, an important step toward opening these black boxes. There were major developments toward an answer about the nature of infinity. And mathematicians finally managed to model quantum gravity.
Read the articles in full at Quanta Magazine: www.quantamagazine.org/the-year-in-math-and-computer-science-20211223/
- VISIT our Website: www.quantamagazine.org
- LIKE us on Facebook: facebook.com/quantanews
- FOLLOW us on Twitter: twitter.com/quantamagazine
Quanta Magazine is an editorially independent publication supported by the Simons Foundation www.simonsfoundation.org/

Comments: 824

  • @QuantaScienceChannel · 2 years ago

    Read the articles in full at Quanta Magazine: www.quantamagazine.org/the-year-in-math-and-computer-science-20211223/

  • @naturemc2 · 2 years ago

    Your last few videos on this channel are killing it. Need it. Much needed ❤️

  • @zfyl · 2 years ago

    I think the opposite. All I see here is just mathematicians coming up with new approaches to existing problems (made by previous mathematicians) and publishing new approaches. These are not results, and I feel like these are practically useless. So sad to see that the education system embraces pointless research in such overly sophisticated, yet never applied, fields of science! What a shame, as it happens against the background of a world on fire, looking for help... and what is given? ...some over-engineered half solution for made-up problems...

  • @antoniussugianto7973 · 2 years ago

    Please Riemann hypothesis progress updates...

  • @EmperorZelos · 2 years ago

    Uh yeah no, I have to correct you. The continuum hypothesis is UNDECIDABLE in ZFC. Meaning there is no way to decide it. There is nothing to SOLVE there, there is nothing unanswered. It was resolved and understood many, many decades ago. We KNOW it is independent and we cannot say c=Aleph_1. We can assume it axiomatically if we so want, or assume its negation, and both are EQUALLY valid. What you're talking about here is adding an axiom to create a NEW axiomatic system where we CAN say it, but that does not mean it was "resolved" or anything, because we already knew the answer.

  • @eeemotion · 2 years ago

    Thanks for sparing me the trouble of watching. As anything significant could be buried in such an annal. The only real breakthrough in lamestream science is how to get them to shield for a plasma environment while still thinking almost exclusively in terms of 'heat'. The almost being the novelty. Electricity still being a dirty word in space. Hence its smell at first described from the suits after a spacewalk as that of electric soldering was then peppered with burnt chicken and BBQ insinuations to make for the usual clumsy narrative reminiscent of the sticky tape on the supposed lunar landing module. Ah, who knows what's in the peel of an onion? It's a slow boil to get to the truth and for the cluttered cosmogony of the believers it seems all too much useless toil...

  • @ruchirkadam8510 · 2 years ago

    Man, loving these 'breakthrough' videos! It feels fulfilling to see the progress being made! I mean, finally modelling quantum gravity? jeez!

  • @Djfmdotcom · 2 years ago

    Same! I think in no small part it's because we have all these YouTube channels focusing on them! I'd much rather watch videos about science, exploration and learning than MSM garbage that divides us. Science brings us together!

  • @v2ike6udik · 2 years ago

    BS. Gravity (as a separate force) is a hoax. It has been done for a reason.

  • @irs4486 · 2 years ago

    cringe bruh, stop commenting, ratio + yb better

  • @sublimejourney3384 · 2 years ago

    I love these videos too !!

  • @The.Golden.Door. · 2 years ago

    Quantum gravity is far simpler to calculate than what modern-day physicists have known to be true.

  • @MargaretSpintz · 2 years ago

    Slight correction. The infinite limit of shallow neural networks as kernel machines (specifically Gaussian processes) was established in 1994 (Radford Neal). This was updated for 'ReLU' non-linearities in 2009 (Cho & Saul). In 2017 Lee & Bahri showed this result could be extended to deep neural networks. Not sure this counts as "2021's biggest breakthrough", though it is a cool result, so happy to have it publicised. 👍
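
    A minimal, illustrative NumPy sketch of the limit described above (not code from any of the cited papers): by the central limit theorem, the output of a randomly initialized one-hidden-layer ReLU network at a fixed input looks more and more Gaussian as the width grows.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.normal(size=5)  # a fixed input point

        def random_net_output(width):
            # One hidden ReLU layer with 1/sqrt(width) readout scaling (Neal's setup).
            W1 = rng.normal(size=(width, x.size))
            w2 = rng.normal(size=width)
            hidden = np.maximum(W1 @ x, 0.0)
            return w2 @ hidden / np.sqrt(width)

        for width in (1, 10, 100, 1000):
            samples = np.array([random_net_output(width) for _ in range(2000)])
            z = (samples - samples.mean()) / samples.std()
            excess_kurtosis = (z**4).mean() - 3.0  # ~0 for a Gaussian
            print(f"width={width:5d}  excess kurtosis={excess_kurtosis:+.2f}")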

  • @PythonPlusPlus · 2 years ago

    I was thinking the same thing

  • @lexusmaxus · 2 years ago

    Since there are no physical infinite machines, there must be mathematical operators that eliminate these infinities?

  • @hayeder · 2 years ago

    Was about to post something similar. The recent famous paper in this area is Jacot et al. with the NTK in 2018. It's also not clear to what extent this explains practice. E.g., see the work of Chizat and Bach on lazy training.

  • @ramkitty · 2 years ago

    @lexusmaxus or is infinity an inversion in some way

  • @Ef554rgcc · 2 years ago

    Obviously

  • @OneDayIMay91Bil · 2 years ago

    Glad to have been a contributing member of this field; had my first peer-reviewed paper published in IEEE this year :)

  • @kf10147 · 2 years ago

    Congratulations!

  • @thatkindcoder7510 · 2 years ago

    What's the paper?

  • @zfyl · 2 years ago

    Too bad IEEE is just an international conglomerate of science paper resellers. I, and everybody else on this planet, want to know why you are writing these papers, and what your contributed progress is. Sorry for the negative tone, and congrats on the publication 😉

  • @sampadmohanty8573 · 2 years ago

    @zfyl Exactly. Why is everyone writing these papers? And if it is for the advancement of science, why is it not accessible to the general public? Is science a business? It is, but many intellectuals do not want to see it as such, because they want to believe that they do it for "a bigger cause" while in reality they do it selfishly, which accidentally sometimes might actually do good, without the original intent being so. Please do not point to arXiv.

  • @dougaltolan3017 · 2 years ago

    @sampadmohanty8573 Don't you just have to pay for access?

  • @MarcelBornancin · 2 years ago

    I appreciate the efforts in trying to make these heavily technical subjects understandable to the general public. Thank you all : )

  • @primenumberbuster404 · 2 years ago

    Mathematics is like the wind your sailboat needs to move way ahead on your journey. This was so heartwarming to watch. There is really a thin line between maths and magic! Thanks a lot, Quanta Magazine, for this beautiful summary! Loved it!

  • @jackgallahan9669 · 2 years ago

    wtf

  • @criscrix3 · 2 years ago

    Some bot stole your comment and slightly reworded it lmao

  • @michaelblankenau6598 · 6 months ago

    That's a funny-looking cat.

  • @williamzame3708 · 2 years ago

    Also: Aleph 1 is *by definition* the smallest cardinal bigger than Aleph 0. The question is whether the size of the continuum is Aleph 1 or something bigger ...
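
    In symbols, the point of this comment (standard notation):

        \aleph_1 := \text{the least cardinal greater than } \aleph_0, \qquad
        \mathfrak{c} := 2^{\aleph_0} = |\mathbb{R}|, \qquad
        \text{CH}: \; \mathfrak{c} = \aleph_1 .

    Cantor proved \mathfrak{c} > \aleph_0; the question CH addresses is whether anything sits strictly between \aleph_0 and \mathfrak{c}.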

  • @alexantone5532 · 2 years ago

    The continuum of natural numbers?

  • @LeBartoshe · 2 years ago

    @alexantone5532 Continuum is just a nickname for the cardinality of the real numbers.

  • @whataboutthis10 · 2 years ago

    And the new result makes it seem less likely that the continuum is aleph_1, which was Cantor's guess and seemed the most plausible for many years.

  • @EM-qr4kz · 2 years ago

    If you take an infinite number of line segments, one centimeter each, then you have an infinite line. This set of line segments is aleph_0 in size, and the line is a one-dimensional object. But! If you take a square, one square centimeter in size, the parallel straight sections that make this square up are infinite, but the set of them is aleph_1 in size, and the square is a two-dimensional object. Could that be the key to dimensions? Especially when we have fractal objects to describe?

  • @moerkx1304 · 2 years ago

    @EM-qr4kz I'm not sure if you have some typos or I'm not exactly understanding what you're trying to say. But your analogy of a straight line being the natural numbers and then extending it to a square seems to me like Cantor's proof that the rational numbers are countable and hence of the same cardinality as the natural numbers.

  • @hansolo9892 · 2 years ago

    I have been using these kernel vector spaces for QML recently and this is one of those mathemagics I honestly adore!

  • @WsciekleMleko · 2 years ago

    Hi, I could take 2 fists of shrooms and it would still make the same sense to me as it does right now. I'm glad you are happy though.

  • @joshlewis575 · 2 years ago

    @WsciekleMleko Yeah, but just a few years ago you could've eaten 2 ounces in your example. That's some crazy advancement; only a matter of time.

  • @RexGalilae · 2 years ago

    Yo, I worked on QML too back in college! I used to devour papers by Anatole von Lilienfeld and Matthias Rupp because of how interesting they were. Gaussian and Laplacian kernels were the bread and butter of my Kernel Ridge Regression models, and I was pleasantly surprised to see kernel vector spaces here lol. It's one of the dark horses of ML.
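
    For context, a minimal sketch of kernel ridge regression with a Gaussian (RBF) kernel, the kind of model mentioned above (illustrative NumPy; the data and parameters are made up, not from the QML papers cited):

        import numpy as np

        def rbf_kernel(A, B, gamma=1.0):
            # Gaussian kernel: k(a, b) = exp(-gamma * ||a - b||^2)
            sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * sq_dists)

        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(40, 1))
        y = np.sin(X).ravel() + 0.1 * rng.normal(size=40)

        lam = 1e-2  # ridge regularization strength
        alpha = np.linalg.solve(rbf_kernel(X, X) + lam * np.eye(len(X)), y)

        X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
        print(rbf_kernel(X_test, X) @ alpha)  # predictions from kernel evaluations alone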

  • @midas2092 · 2 years ago

    These videos last year introduced me to this channel, and yet I still have the same excitement when I see the new ones

  • @Levi_Ackerman_7 · 2 years ago

    We really love watching breakthroughs in science and technology.

  • @gregparrott · 2 years ago

    Just discovered 'Quanta Magazine'. Your articles on Physics, Math and Biology are all top notch! Subscribed

  • @Epoch11 · 2 years ago

    These are really great and I hope you do more of these. Hopefully we don't have to wait till the end of the year to get more videos about breakthroughs.

  • @whataboutthis10 · 2 years ago

    this lol, give us more breakthroughs!

  • @MrMann163 · 2 years ago

    It's crazy how much stuff from uni started flowing back watching this. The fact that I'm actually able to understand all this complicated maths is crazy but exciting.

  • @matthewtang1489 · 2 years ago

    I was like, damn... I knew all of these ideas when I was watching it. I guess I can finally taste the fruits of my university education.

  • @MrMann163 · 2 years ago

    @matthewtang1489 They told me the quadratic formula would be important, but no one said I'd ever need to know set theory. Oh, such ripe fruits .-.

  • @bolducfrancis · 2 years ago

    The animation at 5:12 is the last piece I needed to finally understand the diagonal proof. Thank you so much for this!
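
    For anyone following along, the construction in that animation fits in a few lines (a hypothetical sketch; sequences are represented as functions from index to digit):

        # A purported "list" of infinite binary sequences.
        listing = [
            lambda n: 0,       # 000000...
            lambda n: 1,       # 111111...
            lambda n: n % 2,   # 010101...
        ]

        def diagonal(listing, n):
            # Flip the n-th digit of the n-th sequence.
            return 1 - listing[n](n)

        # The diagonal sequence differs from the i-th listed sequence at position i,
        # so it cannot appear anywhere in the list.
        for i in range(len(listing)):
            assert diagonal(listing, i) != listing[i](i)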

  • @Geosquare8128 · 2 years ago

    Hadn't realized that SVMs were being applied to DNNs like that.

  • @alany4004 · 2 years ago

    Geosquare the GOAT

  • @marcelo55869 · 2 years ago

    Support Vector Machines are somehow equivalent to neural networks?? Who knew!?! I would love to see the proof. I might lack the fundamentals to understand everything, but it might be interesting anyway...

  • @cyanimpostor6971 · 2 years ago

    This has actually been around for 3 decades now. Since the 1990s, in fact.

  • @nabeelhasan6593 · 2 years ago

    Thanks to RBF kernel

  • @varunnayyar3138 · 2 years ago

    yeah me too

  • @markusheimerl8735 · 2 years ago

    Love these videos. Gotta say, as much as I wowed at the bubbles around our supermassive black hole in the physics video, I just have an especially warm spot in my heart for mathematics :)

  • @zight123 · 2 years ago

    Same. I know jack about math, but it's so fascinating.

  • @szymonbaranowski8184 · 1 year ago

    You believe in black holes? Seriously?

  • @quentingallea166 · 2 years ago

    You know the channel is pretty good when you watch a full-length video while understanding about half of the content.

  • @szymonbaranowski8184 · 1 year ago

    No. It means it still sucks half of the time. And in this case I bet it sucks much more than half. And it means it's useless to watch it, since you end up in the same spot you started, but fooled and more arrogant, with the opposite feeling.

  • @quentingallea166 · 1 year ago

    @szymonbaranowski8184 When I was a teenager, I was reading Hawking, Brian Greene, etc., and understood maybe 10% the first time. I would read the pages and chapters again and again to understand more each time. The world is a complex place. As a scientific researcher, I face this complexity every day. Oversimplifying is possible and useful. Kurzgesagt is a pretty neat example. However, in some cases, in my opinion, if you still want to go far, you can't explain it in 10 minutes simply. But well, you are perfectly free to disagree.

  • @aayankhan6734 · 2 years ago

    One of the few joys of the end of the year is watching these types of videos... loved it!

  • @AdlerMow · 2 years ago

    Quanta Magazine is incredible! Their style makes everything accessible to the interested layman, and it grips you; you can start with any video or article and see for yourself! So thank you, all the Quanta team and writers!

  • @saiparepally · 2 years ago

    I really hope you guys continue to publish these every year

  • @yakuzzi35 · 2 years ago

    That's what I love about maths: lots of times, something that started out as a game or a fun curiosity turns out to be extremely applicable and equivalent to something unpredictable decades later.

  • @kevinvanhorn2193 · 2 years ago

    Radford Neal explored this same idea of expanding the width of a neural net to infinity over a quarter-century ago, in his 1995 dissertation, Bayesian Learning for Neural Networks. He found that what you get is a Gaussian Process.
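
    Stated from memory (standard form, not quoted from the dissertation), the covariance Neal's limit produces for a one-hidden-layer network f(x) = \frac{1}{\sqrt{H}} \sum_{i=1}^{H} v_i \, h(w_i \cdot x) with independent zero-mean weights is

        K(x, x') = \lim_{H \to \infty} \mathrm{Cov}\big(f(x), f(x')\big)
                 = \sigma_v^2 \, \mathbb{E}_{w \sim \mathcal{N}(0, \sigma_w^2 I)}\big[ h(w \cdot x) \, h(w \cdot x') \big],

    i.e. the prior over functions converges to a Gaussian process GP(0, K).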

  • @zfyl · 2 years ago

    Does this single-handedly make this breakthrough just a simple revisiting of an existing conclusion?

  • @Luizfernando-dm2rf · 2 years ago

    the real MVP

  • @daviddodelson8870 · 2 years ago

    @Gergely Kovács: no. Neal's work dealt with neural networks with a single hidden layer; this breakthrough studies the limit of width for deep neural networks, i.e., many hidden layers.

  • @kevinvanhorn2193 · 2 years ago

    @daviddodelson8870 Thanks for the clarification. Strange, though, that it took 25 years to take that next step.

  • @johnwick2018 · 2 years ago

    I didn't understand a single thing but it is awesome.

  • @mathman274 · 2 years ago

    Interesting. When I was in school, many decades ago, 'we' always had the idea that there's no reason something couldn't exist between aleph-0 (size of N) and aleph-1 (size of R); however, a "finger was never put on it". There were wild speculations about fractal dimensions, but that was just a fashionable thing to look at at the time. Interesting where this is going.

  • @ferdinandkraft857 · 2 years ago

    This question was answered by Kurt Gödel (1940) and Paul Cohen (1963). The Continuum Hypothesis (CH) is _independent_ of the Zermelo-Fraenkel axioms (plus the axiom of choice). In other words, standard mathematics can prove neither it nor its negation. You can, however, extend standard mathematics to include CH or some other axioms. David Asperó et al.'s "breakthrough" doesn't use only standard math. They only proved an implication between two axioms, each of which is known to imply a particular statement that is incompatible with CH... The video is unfortunately very superficial and gives the false idea of an "answer" to a problem that, in my opinion, is already answered.
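
    In symbols, the two halves of the independence result mentioned here:

        \mathrm{Con}(\mathrm{ZFC}) \implies \mathrm{Con}(\mathrm{ZFC} + \mathrm{CH})      \quad \text{(Gödel 1940, via the constructible universe } L\text{)}
        \mathrm{Con}(\mathrm{ZFC}) \implies \mathrm{Con}(\mathrm{ZFC} + \neg\mathrm{CH}) \quad \text{(Cohen 1963, via forcing)}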

  • @mathman274 · 2 years ago

    Well... the keyword 'H' being hypothesis. Of course, there's also the "incompleteness theorem", and extending the "axioms" might lead to inconsistency. Indeed, "standard math" can't touch it; however, including CH might be a little too much. Maybe I was just too "classically" educated, but still... interesting, as was the video, I think.

  • @Noname-67 · 2 years ago

    @ferdinandkraft857 Its being independent from ZFC doesn't mean that it's neither true nor false. The axiom of pairing, axiom of infinity, axiom of union, etc. are all independent from each other, and we all know they are true. If anything nonstandard were just a convention, there wouldn't be ZFC as we know it, only ZF. Gödel himself believed that the Continuum Hypothesis was wrong. Without proving or disproving it rigorously, we can still use logical deduction and reasoning to get an agreeable answer.

  • @viliml2763 · 2 years ago

    @Noname-67 "Axiom of pairing, axiom of infinity, axiom of union, etc. are all independent from each other and we all know they are true." Define "true". None of them describes the physical universe; there's no reason someone can't say they're false and work with that.

  • @primorock8141 · 2 years ago

    It's crazy that we've been able to do so much with deep neural networks and we are only now starting to figure out how they work

  • @ajaykumar-ve5oq · 2 years ago

    We made machines but we don't know how they perform tasks? Sounds counterintuitive.

  • @jakomeister8159 · 2 years ago

    Ever done a task that just works, you don’t know how, it just works? Yeah this is it. It’s actually pretty cool

  • @balazsh2 · 2 years ago

    @ajaykumar-ve5oq More like we can measure how well they perform tasks, so we don't care about the whys :) Transparent statistical methods exist and are widely used; it's just that for AI, black-box methods perform better most of the time.

  • @jirrking3461 · 2 years ago

    this video is idiotic, since we do know how they work and we have been visualizing them for ages now

  • @Elrog3 · 2 years ago

    Saying we don't know how neural networks work is a stretch of the same caliber as saying we don't know how cars work.

  • @AUniqueName · 1 year ago

    These videos are severely underrated. Thank you for the knowledge you share; hopefully millions of people will be watching these per week. It's so good for people to know about these things.

  • @binman5753 · 2 years ago

    Watching this and not understanding anything makes these videos all the more magical 💫

  • @warpdrive9229 · 2 years ago

    I wait for this video eagerly every year! Much love from India :)

  • @KeertiGautam · 2 years ago

    I don't understand much, but I feel happy that good science is happening. It means there's still some sense and logic alive in this world 😄

  • @AnthonyBecker9 · 2 years ago

    Hmm, I'm not sure how the neural net to kernel machine model is a breakthrough. Maybe that was left out. But the idea that a neural net divides data points with hyperplanes in high-D space goes back decades.

  • @PedroContipelli2 · 2 years ago

    Kernel machines are linear, whereas neural networks are, generally, non-linear. Showing that an infinite-width network can be reduced to linear essentially raises suspicion about whether finite neural networks can be simplified in some novel way as well. The consequences could be groundbreaking.

  • @satishkpradhan · 2 years ago

    @PedroContipelli2 Aren't all layers of a neural network just linear functions of the previous layer? So technically, isn't it possible that under some conditions a multi-layer neural network can be a linear function?

  • @PedroContipelli2 · 2 years ago

    @satishkpradhan The activation function of each layer (sigmoid, tanh, ReLU, etc.) is usually where the non-linearity is introduced.
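
    A quick illustrative check of this point (hypothetical NumPy, not tied to any particular network): without an activation, two stacked linear layers collapse to one linear map, and a ReLU in between breaks the linearity.

        import numpy as np

        rng = np.random.default_rng(1)
        W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
        relu = lambda z: np.maximum(z, 0.0)
        f = lambda v: W2 @ relu(W1 @ v)  # two layers with ReLU in between

        x = rng.normal(size=3)
        # No activation: exactly the single linear map (W2 @ W1).
        assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

        # With ReLU, additivity fails, so f is not linear.
        a, b = rng.normal(size=3), rng.normal(size=3)
        print(np.allclose(f(a + b), f(a) + f(b)))  # False in general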

  • @lolgamez9171 · 2 years ago

    @PedroContipelli2 analog artificial intelligence

  • @joshuascholar3220 · 2 years ago

    I stopped at the "nobody knows how neural networks work" and "billions of hidden layers" sentence. MY GOD, why did they have some moron who has no idea what he's talking about write this? And another one read it? MY GOD.

  • @frankferdi1927 · 1 year ago

    What I dislike is that many videos, this one included at some points, reward before there is proof, stimulating excitement in the viewers. Generating publicity is important, I do know that.

  • @Pramerios · 2 years ago

    Bravo!! This was SUCH an awesome video! Definitely saving and coming back!

  • @warpdrive9229 · 2 years ago

    This was just awesome! See you guys next year again. Much love from India :)

  • @jordanweir7187 · 2 years ago

    I love how you guys don't leave out the gory details; that's what we all wanna see hehe. Also great to have an update each year.

  • @nichtrichtigrum · 2 years ago

    With only a high school maths background, I couldn't understand any of the concepts in the video. I'd be very happy if you could explain in more detail what a Liouville field actually is, what a Gaussian free field is, and so on.

  • @aniksamiurrahman6365 · 2 years ago

    What what what what what? Finally, such a result in continuum hypothesis! Unbelievable.

  • @KimTiger777 · 2 years ago

    Math is art, as one needs creativity to arrive at new solutions. Big WOW!

  • @zfyl · 2 years ago

    Okay, this is actually a fair point. Totally agree.

  • @Rotem_S · 2 years ago

    Also because it's (sometimes) beautiful and can engage deeply

  • @bobsanders2145 · 2 years ago

    That's everything though, not just math.

  • @Irrazzo · 2 years ago

    1:01 "What happens inside their billions of hidden layers". I think you confused layers with parameters, or weights, here. The largest GPT-3 version for instance has 96 layers and 175 billion parameters.

  • @shambhav9534 · 2 years ago

    Parameters are whatever the starting nodes pick up and layers are layers, right? Or are parameters the starting nodes themselves?

  • @Irrazzo · 2 years ago

    @shambhav9534 In a simple feed-forward neural network like a multilayer perceptron, you can represent a neuron/node by the equation y = h(w·x + b). x is what goes into the layer that neuron belongs to (if it's the first hidden layer, x is just the unchanged input feature vector); y is what goes out. w are the weights (the edges) connecting all the neurons in the previous layer with the one in the current layer we're looking at; b is a bias. '·' is a dot product. h is a nonlinear activation function. The union of all weights and biases of all neurons across all the layers are the parameters, which are learned during training.
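
    That equation, as a short illustrative NumPy sketch (the layer sizes are made up):

        import numpy as np

        def forward(x, layers, h=np.tanh):
            # One y = h(W @ x + b) step per layer, as described above.
            for W, b in layers:
                x = h(W @ x + b)
            return x

        rng = np.random.default_rng(0)
        layers = [(rng.normal(size=(4, 3)), rng.normal(size=4)),   # 3 -> 4
                  (rng.normal(size=(2, 4)), rng.normal(size=2))]   # 4 -> 2
        print(forward(rng.normal(size=3), layers))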

  • @shambhav9534 · 2 years ago

    @Irrazzo Okay I get it now.

  • @Irrazzo · 2 years ago

    Just one more thing about layers: instead of thinking of layers in terms of the nodes of which they consist, you can also think of them in terms of the data that flows through your network (the x's and y's). Then, layers are different, increasingly abstract representations of your data, connected via transformations, or functions. And the complexity, the 'billions', are due to the enormous size of the function space of the overall function (transformation) which the network approximates by a series (or rather, composition) of functions which only slightly differ from one to the next.

  • @shambhav9534 · 2 years ago

    @Irrazzo I understood nothing, but I do think I understand layers. They're layers which modify the starting input, and at the end that input becomes the output. I tried (just tried) to make a neural network back in the day; I think I know the basics.

  • @jman997700 · 2 years ago

    This is the best news I've heard all year. People want to know about the good news too.

  • @zfyl · 2 years ago

    What is good about these things? Whom will this benefit?

  • @nullbeyondo · 2 years ago

    @zfyl If you want a really accurate answer, then it is "what" will this benefit, which is mainly all of our technology. And only if they're used right would they improve the quality of life overall; but there's no guarantee on human behavior.

  • @richardfredlund3802 · 1 year ago

    That equivalence between infinite-width NNs and kernel machines is really a very surprising and interesting result.

  • @NovaWarrior77 · 2 years ago

    these are awesome! I'm glad we don't just have to look back to textbooks to see cutting edge advances!

  • @miguelriesco466 · 2 years ago

    Hey it was pretty nice! Just to clear things up, the continuum hypothesis is whether aleph 1 is the cardinality or size of the real numbers. By definition it is the smallest infinity greater than aleph 0.

  • @IvanGrozev · 2 years ago

    We don't know the size of the set of real numbers; we just know it's bigger than aleph_0. It can be aleph_1, aleph_2... it can even be monstrously big, like aleph_{omega_1}, etc. And in the current state of the most widely accepted axiomatization of mathematics, called ZFC, it is impossible to solve the continuum hypothesis. One watching this video gets the impression that the real numbers are aleph_1 in size, which is not true.

  • @sweetspiderling · 2 years ago

    @IvanGrozev yeah this video is all wrong.

  • @Psychonaut165 · 1 year ago

    Out of all the science channels I understand nothing about, this is one of my favorites.

  • @dylanparker130 · 2 years ago

    I love these videos & QM's articles too!

  • @YouChube3 · 2 years ago

    Natural numbers, floating points, and that third set I couldn't bear even to try to explain. Thank you, narrator?

  • @quicksilver0311 · 2 years ago

    Am I the only one who was totally clueless for all 11 minutes? This video literally gives me "What am I doing with my life?" vibes and I love it. XD

  • @edgedg · 2 years ago

    My favourite videos of every year!

  • @ChocolateMilkCultLeader · 2 years ago

    Thanks for making these. Very important

  • @srivatsavakasibhatla823 · 2 years ago

    The last one made me remember what David Hilbert implied: "Physics is too complicated to be left to physicists alone."

  • @gettingdatasciencedone · 1 year ago

    I love these intro videos that try and convey the complexity of recent advances. One small problem with this video is that the opening line is not strictly speaking true. The 1950s neural networks did not use the same learning rules as the human brain. They were very simplified models based on a bunch of assumptions.

  • @droro8197 · 2 years ago

    Talking about the continuum hypothesis without mentioning the results of Cohen and Gödel is pretty much a crime. Basically, the continuum hypothesis is independent from the rest of the set theory axioms and can be assumed to be true or false. I guess the real problem here is talking about a very heavy math problem in a 10-minute video…

  • @caracasmihai01 · 2 years ago

    My brain had a meltdown when watching this video.

  • @badalism · 2 years ago

    We have known for a while that infinite width neural network + SGD is equivalent to Gaussian Process.

  • @zfyl · 2 years ago

    Thanks for single-handedly eradicating the breakthrough level of that paper 😅

  • @Bruno-el1jl · 2 years ago

    Not for DNNs though.

  • @Amir_404 · 2 years ago

    Bit of a nitpick, but "neural networks" in computer science (or at least the ones that people use to solve problems) are not comparable to the neural networks in the brain. The two fundamental differences are that the computer ones are "feed-forward" and synchronous. In English: every layer fires at the same time and there are no loops. It is not that we can't make a neural network more similar to a brain (there is a lot of interesting research going on), but nobody has found an effective way of training those types of networks.

  • @pvic6959 · 2 years ago

    I love how Google showed up in both the physics and the math/comp sci breakthrough videos. It shows how much they're doing and how much they're pushing humanity forward little by little. Love them or hate them, it's so cool to see science being done!

  • @martinschulze5399 · 2 years ago

    Google is not altruistic ;)

  • @LA-eq4mm · 2 years ago

    @martinschulze5399 as long as someone is doing something

  • @willlowtree · 2 years ago

    I have great respect for the scientists working at Google, but as a company it is inevitable that their goals are not always allied with humanity's interests.

  • @pvic6959 · 2 years ago

    @willlowtree Yeah, my comment wasn't about goals or anything, just that they're doing so much science and sharing a lot of it with the world.

  • @baronvonbeandip · 2 years ago

    @martinschulze5399 Water is wet. Nothing is altruistic.

  • @Quwertyn007 · 2 years ago

    6:33 Saying an axiom is "likely true" makes no sense, unless it was to follow from other axioms and thus be unnecessary. Axioms are what you start with - you can start with whatever assumptions you want, the best they can do is not contradict each other and lead to interesting/useful mathematics. Math doesn't take into account the physical world - it is only based on axioms. Maybe you could make an argument about this axiom likely being related to the physical world in some way, which in some non mathematical sense would make it "true", but that seems rather difficult.

  • @Quwertyn007 · 2 years ago

    @FriedIcecreamIsAReality I think you make a good point, but I don't think many people would understand "likely true" as "intuitively making sense". That's just not what "true" means.

  • @Quwertyn007 · 2 years ago

    @FriedIcecreamIsAReality I'm still just a mathematics student, so I'm not in the best position to judge whether it really is used this way, but this video isn't aimed at professors, so I think the phrasing is at least misleading

  • @robertschlesinger1342 · 2 years ago

    Very interesting, informative and worthwhile video. Be sure to read the linked articles.

  • @mdoerkse · 2 years ago

    Interesting that all three breakthroughs have to do with connections between different theories and 2 of them are mapping something useful to something easy to compute.

  • @zfyl · 2 years ago

    what useful?

  • @mdoerkse · 2 years ago

    @zfyl Deep neural nets and quantum physics/gravity.

  • @seenaman96 · 2 years ago

    I learned about kernels back in 2017 when using SVMs... How are kernels breakthroughs? If you have inputs that are not activated in 1 dimension, exploding to a higher dimension will not include them... So it's fine to skip the work, DUH.
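
    The classic "skip the work" example, for anyone who hasn't seen it (hypothetical code): the degree-2 polynomial kernel equals an inner product in an explicit higher-dimensional feature space, so the higher-dimensional space never has to be built.

        import numpy as np

        def phi(v):
            # Explicit feature map for the degree-2 polynomial kernel in 2D.
            x1, x2 = v
            return np.array([x1 * x1, np.sqrt(2) * x1 * x2, x2 * x2])

        def k(u, v):
            # Same inner product, computed in the original 2D space.
            return np.dot(u, v) ** 2

        u, v = np.array([1.0, 2.0]), np.array([3.0, -1.0])
        assert np.isclose(np.dot(phi(u), phi(v)), k(u, v))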

  • @mdoerkse · 2 years ago

    @seenaman96 I'm not a mathematician and I don't know anything about kernels, but the video wasn't saying that kernels are the breakthrough. It's saying they are the old, easily computable thing that neural nets can be mapped to. The mapping is the breakthrough.

  • @monad_tcp · 2 years ago

    So they proved the equivalence between convolution kernels and neural networks. As someone who does research in computer graphics, I always had the feeling that they were very close, as you could use them together and sometimes even replace one with the other.

  • @szymonbaranowski8184 · 1 year ago

    Doesn't seem like any great or surprising breakthrough then.

  • @tetomissio8716 · 2 years ago

    Fantastic set of videos

  • @RegiKusumaatmadja · 1 year ago

    Superb explanation! Thank you for the video

  • @piercevaughn7000 · 2 years ago

    Excellent intro. Edit: excellent everything. I'm pretty clueless on all of this, but this was awesome.

  • @MadScientyst · 1 year ago

    I'd sum this up with a reference to a title by author Eric Temple Bell: 'Mathematics: Queen and Servant of Science'... brilliant read & exposition, as per this Quanta snippet!!

  • @josueibarra4718 · 1 year ago

    Gotta love how Gauss still somehow manages to butt in to present-day, groundbreaking discoveries

  • @scifithoughts3611 · 2 years ago

    Great video series!

  • @akshaysingh11990 · 2 years ago

    I wish I could live a million years and watch all the content ever created.

  • @andraspongracz5996 · 2 years ago

    Got halfway through the video, and stopped. I wonder if the creators ever asked the scientists in the video (or any expert, really) to check the final version of the narration. It is full of inconsistencies, and in the case of the second segment (continuum hypothesis) just completely off. We have known that the continuum hypothesis is independent from ZFC (the standard system of axioms of set theory) for nearly 60 years. It was famously Paul Cohen who proved this, and he was the one who developed the technique of forcing (in order to prove this result and others). He even got a Fields Medal for his work. I'm not sure about the relevance of the Asperó-Schindler theorem ("Martin's Maximum++ implies Woodin's axiom (∗)") as I'm not a set theorist, but it must be much more subtle than what the video suggests. It has been well understood for decades what the possible aleph indices of the continuum can be. In particular, it is not necessarily aleph_1, as suggested early on in this video, and contradicted later. The video has very nice graphics and catchy phrases, but the content is just wrong. It was quite cringey to listen to it, really.

  • @pingdingdongpong · 2 years ago

    Yea, I agree. I know enough set theory (and it ain't much) to know that this is a bunch of hogwash.

  • @Macieks300 · 2 years ago

    Yes, I agree. Set theory basics are easy enough for undergraduates to understand, so it's the most approachable subject among all in these videos; but hearing how wrong their explanation is, I now must wonder how wrong their explanations of the other discoveries are.

  • @elmaruchiha6641 · 2 years ago

    Great! I love the video, the animations, and the topic!

  • @raajjann · 2 years ago

    Great exposition!

  • @J3Compton · 1 year ago

    Love this! It would be nice to have the URLs to the papers here if possible.

  • @thanhtunghoang3448 · 2 years ago

    The first breakthrough is called the Neural Tangent Kernel, first introduced in 2018 by Arthur Jacot at EPFL. He was, at that time, not a Google employee. Attributing this breakthrough to Google is unfair and misleading.

  • @WilliamParkerer · 2 years ago

    No one's attributing it to this Google employee

  • @deleted-something · 1 year ago

    I knew the moment they started speaking about the Continuum Hypothesis that this was gonna be interesting.

  • @chilling00000 · 2 years ago

    Isn't the equivalence of wide NNs and kernels known for a long time already…?

  • @satishkpradhan · 2 years ago

    Even I thought so... but as I saw all the comments of people in amazement, I was confused. Thank God someone else also thinks so... else I'd have thought to reread everything I had learned... or revisit my analytical thinking.

  • @StratosFair · 2 years ago

    It is in fact (part of) what my Master's thesis was about and I am quite confused because indeed this has been known for some time already

  • @David-rb9lh · 2 years ago

    It's about DNNs here.

  • @StratosFair · 2 years ago

    @David-rb9lh I did a bit of digging, and it turns out that the paper which introduces the result (wide deep neural networks are equivalent to kernel machines) was in fact written in 2017. Now don't get me wrong, this is a very nice result, but by no means a 2021 breakthrough, unfortunately.

  • @David-rb9lh · 2 years ago

    @StratosFair I agree with you. I've not dug too much into the details, to be honest.

  • @charlesvanderhoog7056 · 2 years ago

    Kernel machines, new? We used variance analysis in multiple dimensions as far back as the 1970s, and it was developed into what is called positioning in marketing. These techniques enable the researcher to extract immense amounts of data from small samples.

  • @nateb3277 · 2 years ago

    I discovered Quanta only a few months ago but already love coming back to them for this kind of quality content on new developments in science and tech :) Like it's well written, well animated, and easily understood *chef's kiss*

  • @SolaceEasy · 2 years ago

    Man, math's mysterious.

  • @kravandal · 2 years ago

    Omg. I can't wait for next year's video.

  • @NickMorozov · 2 years ago

    So, do I understand correctly that the neural networks are hyperdimensional? Or use extra dimensions for calculations? I'm sure I don't understand the ramifications but it sounds incredibly cool!

  • @sheriffoftiltover · 2 years ago

    Dimension in this context just means additional parameters, from my understanding. E.g., for a light, one dimension might be wavelength, one might be luminosity, and another might be position.

  • @user-ei8yd3tm9l · 2 years ago

    towards the end of the video, I was like: this is pretty much why my naive thought of majoring in pure math got crushed after first-year university... math before university is nowhere close to real hard-core math, which is a different beast altogether.

  • @goldensnitch1614 · 2 years ago

    Great vid! BTW, at 11:08: is the Simons Foundation made by the guy who founded Renaissance Technologies?

  • @JustNow42 · 1 year ago

    If you would like to crack anything, try group theory. Split observations into groups and then use groups of groups, etc.

  • @Po0pypoopy · 1 year ago

    I wish I was smart enough to contribute to humanity like these people. I would feel so fulfilled in life :/

  • @viniciush.6540 · 2 years ago

    "This enables us to compute things that physicists don't know how to compute." Oh man, how I love this phrase lol.

  • @deantoth · 2 years ago

    I've watched several of these breakthrough videos, and although they're extremely interesting, you simplify a concept so much that rather than clarifying the topic, you make it more opaque. And just when I think you are about to provide some insight, you move on to the next segment. You could spend a few more minutes on each topic... OR make a full video per topic, please! Thank you for your hard work.

  • @domdubz7037 · 2 years ago

    2021 and Gauss is still with us

  • @UsamaThakurr · 2 years ago

    Thank you

  • @Rawi888 · 2 years ago

    Thanks for making me feel smart.

  • @dEntz88 · 2 years ago

    With regard to the continuum hypothesis: did I understand this correctly that they are no longer operating in ZFC, but added more and stricter axioms? Wouldn't this imply that the continuum hypothesis is still undecidable in ZFC?

  • @hunterdjohny4427 · 2 years ago

    Yes, the continuum hypothesis has been known to be undecidable in ZFC since Gödel and Cohen. It has also been known for a while that if you were to add either of the axioms MM++ or Woodin's axiom (*) to ZFC, then the continuum hypothesis would be false. Now, the paper by David Asperó and Ralf Schindler proves that (*) is weaker than MM++. This of course has no bearing on the continuum hypothesis at all unless you consider either of them an axiom. How the video chooses to present this is quite odd. I guess the point they are trying to make is that since they were always considered rival axioms, and we now know that one actually implies the other, we might just add MM++ as an axiom to ZFC. Woodin stated something along the lines that we shouldn't accept MM++ or (*) as an axiom because MM++ is incompatible with the natural strengthening of (*). Regardless of what that actually means, it at least should be clear that there are objections to simply accepting MM++ as an axiom.
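
    For orientation, the logical picture this reply describes, in symbols (the aleph_2 consequence is the standard one for both axioms):

        \mathrm{ZFC} \nvdash \mathrm{CH} \quad\text{and}\quad \mathrm{ZFC} \nvdash \neg\mathrm{CH} \qquad \text{(Gödel 1940, Cohen 1963)}
        \mathrm{MM}^{++} \implies (*) \qquad \text{(Asperó–Schindler)}
        \mathrm{MM}^{++} \implies 2^{\aleph_0} = \aleph_2, \qquad (*) \implies 2^{\aleph_0} = \aleph_2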

  • @dEntz88 · 2 years ago

    @hunterdjohny4427 Thank you. I also found it weird how they framed it in the video. At least to me it came across that they were implying that the results could also be used in ZFC alone. Hence my question.

  • @dEntz88 · 2 years ago

    @FriedIcecreamIsAReality But isn't that just creating new problems? If I remember Gödel correctly, every sufficiently powerful system of axioms will run into similar problems as the continuum hypothesis. My issue is that the video, at least as I perceived it, framed the issue in a way that implies the result is "more true". But the notion of true solely depends on the axioms we choose and is subjective to a certain extent.

  • @hunterdjohny4427 · 2 years ago

    @dEntz88 Adding an axiom to ZFC wouldn't create new problems. Every theorem that was previously provable (or refutable) is still provable (or refutable), and some that were previously undecidable may now be provable (or refutable). So by adding an axiom your theory gets more 'specific'. What Gödel showed is that this process of adding axioms can never lead to a system of mathematics in which every statement is provable (or refutable), unless you add many, many axioms in such a way that your set of axioms loses its recursiveness. This is hardly desirable, since the set of axioms being non-recursive means that if I write down a statement, you have no way of telling whether it is an axiom or not, nor will you be able to tell whether a given proof is valid or not. Our only option is to accept that any decent theory of mathematics (decent as in powerful enough to express basic arithmetic) can't be complete. Your issue with the video is correct, of course: they pretend statements have an absolute truth value regardless of the system of axioms worked in. What is said at 6:33 is especially bizarre: [MM++ and (*) are both likely true] makes no sense whatsoever, since both axioms are independent of ZFC.

  • @dEntz88 · 2 years ago

    @hunterdjohny4427 Thank you for your explanation. I only have a somewhat superficial knowledge of that area of maths and was actually thinking about the issues you elaborated on.

  • @kamabokogonpachiro6797 · 2 years ago

    "When you watch a video, you get the sensation of understanding, but you never actually learn anything" ~ Veritasium

  • @peterb9481 · 2 years ago

    Wow - all so interesting. Good video.

  • @lebiquo8501 · 2 years ago

    god i would love a "breakthroughs in chemistry" video

  • @Ashallmusica · 2 years ago

    I'm the least educated person watching this (I only completed junior school), now as a 21-year-old. I just get curious about different things, and clicking this video got me to learn a new word: Aleph. It's amazing for me; I still didn't understand much here, but I love this.

  • @lifeisstr4nge · 2 years ago

    I understand the outputs to be an answer of like a classification type. But why are there exactly the same number of inputs always shown? What is the input here?

  • @SilBu3n0 · 2 years ago

    incredible video!

  • @Fan-fb4tz · 2 years ago

    great videos always!

  • @cobywhitw5748 · 1 year ago

    Does anyone know where I can read the paper about the Deep Neural Networks shown in the video??

  • @nicholasb1471 · 2 years ago

    This video makes me want to do my calculus 3 homework. If only it wasn't winter break right now.

  • @mobjwez · 2 years ago

    would be nice to see how these theories and works can be applied to real-world situations, cheers

  • @saugatbhattarai9826 · 2 years ago

    I like your explanation... and thank you for the updates...

  • @a.movement · 2 years ago

    Appreciate this!
