Liquid Neural Networks: A New Idea That Allows AI To Learn Even After Training

Science and technology

Daniela Rus currently serves as the Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, where she is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science. With a passion for advancing the field of robotics, Rus has made significant contributions to areas such as autonomous vehicles, swarm robotics, and distributed algorithms. Her research and leadership have earned her numerous accolades, establishing her as a prominent figure in the world of robotics and artificial intelligence.
Subscribe to FORBES: kzread.info?s...
Fuel your success with Forbes. Gain unlimited access to premium journalism, including breaking news, groundbreaking in-depth reported stories, daily digests and more. Plus, members get a front-row seat at members-only events with leading thinkers and doers, access to premium video that can help you get ahead, an ad-light experience, early access to select products including NFT drops and more:
account.forbes.com/membership...
Stay Connected
Forbes newsletters: newsletters.editorial.forbes.com
Forbes on Facebook: forbes
Forbes Video on Twitter: / forbes
Forbes Video on Instagram: / forbes
More From Forbes: forbes.com
Forbes covers the intersection of entrepreneurship, wealth, technology, business and lifestyle with a focus on people and success.

Comments: 282

  • @LindsayHiebert · 11 months ago

    Kudos to Daniela Rus and the Computer Science and Artificial Intelligence Laboratory (CSAIL) team at MIT! Excellent work and innovation!

  • @JChen7 · 11 months ago

    Getting strong “Any sufficiently advanced technology is indistinguishable from magic” vibes right now thanks to the wizards at MIT. Wish I was smart and dedicated enough to learn what's happening here. Looks amazing.

  • @mattjohnson1775 · 11 months ago

    I agree 100%. Magick is very real... but it's divination/witchcraft. Ask Gordie Rose; he doesn't sugarcoat it, depending on who he's speaking to.

  • @CrazyAssDrumma · 11 months ago

    Me too, my friend. The good news is there really is no point now, given how much time and effort it would take you (and me). By the time you got there, we'll have SuperAI anyway lol 😂

  • @LabGecko · 11 months ago

    The better news is that anyone can learn it. There are enough resources now that anyone can dive into AI if that is your passion, and learn from the ground up. I did, and now I've trained several AIs (technically machine-learning algorithms) and look forward to getting my hands on (or building) a liquid network soon! For reference, I started on Google's AI courses, but SentDex here on KZread does a great job with several of his tutorials too. Also, 3Blue1Brown does a great job of explaining anything math-related, and he has a series on machine learning.

  • @AnExplorer1000 · 11 months ago

    @LabGecko Do you write a blog about your projects or have public GitHub projects?

  • @LabGecko · 11 months ago

    @AnExplorer1000 Hadn't thought of that. I'm retired with PTSD, though, so my projects are pretty hit and miss; they might not be something people would want to follow.

  • @justinlloyd3 · 11 months ago

    No paper link, no other links, no name for the paper. Thanks, Forbes.

  • @LabGecko · 11 months ago

    I know. While I thank them for posting the vid, not crediting or posting sources in this day and age of journalism is pathetic.

  • @shubhamdhapola5447 · 11 months ago

    Maybe you're preoccupied with some higher-priority task. Let me help you out. Here you go, fella: arxiv.org/pdf/2006.04439.pdf

    P.S. It just required typing literally 25 characters (including whitespace), "liquid networks mit csail", into the search bar of the search engine you prefer and then investigating the top 2-3 links for the exact match. As far as giving "credit and recognition" goes: that's utterly unprofessional and unethical on Forbes' side.

  • @thearchersb · 11 months ago

    I don't understand anything but I completely agree with her.

  • @concernedspectator · 11 months ago

    Absolutely incredible.

  • @thomasfreund-programandoha961 · 11 months ago

    Wow! This is amazing. Thanks for sharing

  • @energyeve2152 · 10 months ago

    Very cool. I look forward to the many applications this can be used in. Thanks for sharing.

  • @dreamphoenix · 11 months ago

    Fascinating. Thank you.

  • @the_curious1 · 11 months ago

    Very interesting and a good presentation, thank you!

  • @ChesterSings · 11 months ago

    Amazing!! Thank you ❤

  • @francisdelacruz6439 · 11 months ago

    Really important work. Does it scale to 1000x the neurons? Cooperative networks?

  • @philforrence · 11 months ago

    Curious how the field will receive this. Let's get her on Lex Fridman!

  • @The-Singularity-M87 · 11 months ago

    Before I saw your comment, I had already seen this video and posted a link on one of Lex Fridman's videos, since I don't know how to email him directly. But that's the same thing I thought. Great minds, right? 👍

  • @philforrence · 11 months ago

    @The-Singularity-M87 You and me, Myron. The greatest of minds!

  • @electrolove9538 · 11 months ago

    That autonomous driving was one of Lex's projects, right?

  • @katherandefy · 11 months ago

    Gosh, yeah, I would love to hear more than we can get in a short talk.

  • @chrisf1600 · 11 months ago

    Oh god, please no. "So, uhhhh, can a liquid network ever fall in love?"

  • @MathPhysicsEngineering · 11 months ago

    No link for the original paper in the description?

  • @crackrule · 11 months ago

    This will make learning faster and networks better; the vision-based object detection seems much clearer. Hope this will be out soon, or maybe we need to push it into TensorFlow or PyTorch for easy accessibility across the major frameworks. The more experiments are performed with this, the more real-world benefits we'll see.

  • @arlogodfrey1508 · 11 months ago

    Move fast, break things, let's go

  • @Supreme_Lobster · 11 months ago

    I already did some testing with ego localization (finding your relative coordinates at every frame by watching a video) and it seemed promising

  • @whannabi · 11 months ago

    @arlogodfrey1508 Depends what relies on the things you wanna break...

  • @crackrule · 11 months ago

    @Supreme_Lobster Can you share a GitHub link?

  • @Supreme_Lobster · 11 months ago

    @crackrule Search for the repo CfC_LiquidNetwork-DeepVO. I use the same username as on here. Can't post the URL because it gets deleted.

  • @web3global · 9 months ago

    WOW! Amazing, thanks for sharing Forbes! 🚀

  • @jeanbernardmbarga3265 · 11 months ago

    Great presentation

  • @user-qw1rx1dq6n · 11 months ago

    Incredible. In my limited understanding, this seems to achieve much the same effect as self-attention; it's just more direct about it.

  • @SilenceOnPS4 · 11 months ago

    Can someone please inform me of the advantages of LNNs (if they can be used) for diffusion models such as Stable Diffusion, DALL-E and Midjourney? If I'm right, these diffusion models use DNNs?

  • @Viewpoint314 · 11 months ago

    What is the difference between a neural network and a liquid neural network? Unlike traditional neural networks, which only learn during the training phase, a liquid neural net's parameters can change over time, making them not only interpretable but also more resilient to unexpected or noisy data. (Apr 19, 2023)
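    For readers who want to see what "parameters can change over time" means mechanically: in the liquid time-constant (LTC) formulation from the paper linked elsewhere in this thread (arxiv.org/pdf/2006.04439.pdf), the trained weights stay fixed, but each neuron's effective time constant depends on the current input, so the dynamics keep shifting at inference time. A minimal, untrained sketch (all names illustrative; f here is a single tanh layer, whereas the paper uses a small learned network and a fused ODE solver rather than plain Euler steps):

```python
# Toy liquid time-constant (LTC) neuron layer:
#   dx/dt = -(1/tau + f) * x + f * A,  with  f = tanh(W_r x + W_i I + b)
# f depends on the input I, so the effective time constant (1/tau + f)
# shifts with the data even though the trained weights stay fixed.
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 3                          # hidden neurons, input channels
tau = np.ones(N)                     # base time constants (learned in practice)
A = rng.normal(size=N)               # per-neuron bias/reversal term (learned)
W_r = 0.1 * rng.normal(size=(N, N))  # recurrent weights (learned)
W_i = 0.1 * rng.normal(size=(N, M))  # input weights (learned)
b = np.zeros(N)

def ltc_step(x, I, dt=0.1):
    """One explicit Euler step of the LTC ODE."""
    f = np.tanh(W_r @ x + W_i @ I + b)
    return x + dt * (-(1.0 / tau + f) * x + f * A)

x = np.zeros(N)
for t in range(100):                 # feed a toy input stream
    x = ltc_step(x, np.array([np.sin(0.1 * t), 0.0, 1.0]))
print(x)                             # hidden state after 100 steps
```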

  • @michaelm6928 · 11 months ago

    This shouldn't give results as smooth as she showed, should it? Is it learning on the validation set?

  • @vaakdemandante8772 · 11 months ago

    How is the network's ability to change its parameters over time connected with it being explainable? Isn't learning "after learning" still learning? It's a lot of claims for really not a lot of substance. Where's the link to the publication? Has anybody replicated those findings? Looks like a bunch of PR fluff, really.

  • @katherandefy · 11 months ago

    @vaakdemandante8772 It's because they simplified the structure and added a tree, which makes the parameters easier for us and the machine to focus on, compared with data previously computed inside one large block.

  • @rpcruz · 9 months ago

    The parameters do NOT change after training. The neural network output is a derivative - that is, the outputs are relative to each other - the neural network's focus is on how the new input influences the output relative to the previous one.

  • @gpt-jcommentbot4759 · 7 months ago

    @rpcruz Reread the comment.

  • @md.adnannabib2066 · 11 months ago

    That's the most impressive thing I have ever seen. Kudos to the researchers!

  • @coalkey8019 · 11 months ago

    Wow. That is absolutely huge.

  • @benealbrook1544 · 11 months ago

    Amazing, this is revolutionary.

  • @user-qw1rx1dq6n · 11 months ago

    It is unbelievable what they managed to do with 20,000 parameters. I must learn this technique fast.

  • @mahmga1 · 11 months ago

    Unbelievably groundbreaking from a lay view. I was just saying the other day that there had to be a better way that redefines the NN.

  • @PixelPulse168 · 11 months ago

    Thanks for sharing

  • @LearnAINow · 11 months ago

    How does this network react when confronted with outside noise that directly affects the trained task? How does this compare with other forms of networks? Thank you, I'd love to know more.

  • @revanthchouhan3068 · 4 months ago

    Same doubt, @LearnAINow. If anyone's into this, please share some knowledge.

  • @KevonLindenberg · 11 months ago

    This is the innovation in AI that is going to change our world beyond recognition.

  • @Gabcikovo · 11 months ago

    Yes

  • @Gabcikovo · 11 months ago

    9:38

  • @Gabcikovo · 11 months ago

    10:00

  • @semitope · 11 months ago

    It's not AI. Or at least it's not intelligent. It's really fancy real-world data processing, like feeding an algorithm stock-market data and it doing its best to make predictions, except this time they do it with real-world images, human-generated information, etc. It's good they are able to get computers to produce meaningful calculations from real-world data, but it's to be expected. How do you get a computer to navigate the real world? Feed it a bunch of images and make it capable of combining all of that data to process what the camera is capturing. Next you make it flexible enough to handle data outside what is fed in and hope to minimize errors. Come to think of it, the idiots who thought it was OK to let computers mess around with the stock market had better watch what people do on there with these new pieces of code.

  • @hufficag · 11 months ago

    Yes

  • @bhargavsai2449 · 11 months ago

    Excellent, blown away.

  • @SuperMaDBrothers · 11 months ago

    9:40 love this cut

  • @superuser8636 · 11 months ago

    CSAIL is the premier AI lab at MIT! I know because I worked there developing their AI infrastructure 😂 I really dig this experiment and talk

  • @ItzGanked · 11 months ago

    Good for alignment if the arch works well.

  • @joeriben · 11 months ago

    Amazing. Moving human targets can be tracked by drones independent of place and season. Isn't that what we all have been waiting for.

  • @daveloomis · 11 months ago

    😅

  • @ps3301 · 11 months ago

    The perfect terminator machine, which can track you all day long. The utopia we've all been looking for.

  • @vladyslavkorenyak872 · 11 months ago

    This feels like the brain trying out new neurons to improve its functioning!

  • @omop5922 · 11 months ago

    This video was obviously not for you, mate.

  • @jakebrowning2373 · 11 months ago

    @omop5922 Who's it for?

  • @technolus5742 · 11 months ago

    Exactly. Changing the internal configuration of the neurons during training seems to allow for more efficient and powerful models. This is the kind of fundamental breakthrough this field needs in order to continue making progress beyond larger and larger models.

  • @muhonbhuiyan8687 · 11 months ago

    What is the problem if you use sonar instead of a camera 📷 as input? 🤔

  • @imranbaloch3414 · 11 months ago

    Excellent achievement!

  • @stanleyashiwel7047 · 11 months ago

    Thank you

  • @dairop3220 · 11 months ago

    Is it fundamentally different from the NEAT algorithm?

  • @eduardomanotas7403 · 10 months ago

    Hey, any link to the paper or a Git repository?

  • @j.d.4697 · 11 months ago

    Wow, from 100,000 to 19 neurons! Can those liquid neurons be similarly scaled?

  • @ricosrealm · 11 months ago

    I don't think that's what she meant. She said those are 19 liquid networks, which likely comprise thousands of neurons as well.

  • @alaapdhall8541 · 11 months ago

    @ricosrealm Look at the figure she showed. Essentially she replaced the FC layer, with its ~100k neurons spread across layers, with her 19 liquid neurons, divided into 3 parts: 12 interneurons, 6 command neurons, and 1 final motor neuron. Not 19 different networks. She calls these networks with liquid neurons "liquid networks". That's what is impressive: instead of 100k neurons she used only 19. However, as mentioned in the video, each neuron is a system of differential equations instead of the typical f(a(wx+b)), so each neuron's computation is more complex, but it's still better than 100k. Also, she has not replaced the CNN, only the FC layer. Ideally a fully convolutional model like YOLO should still be better, as such models don't have FC layers, but in attention-based transformers this can be useful, since they often use FC layers.
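    To make the 12/6/1 wiring described above concrete, here is a hedged sketch using the authors' open-source `ncps` package (pip install ncps), which ships this kind of neural-circuit-policy wiring. The class and parameter names (`NCP`, `LTC`, the fan-in/fan-out arguments) follow the package documentation as best I recall; treat the exact signatures as assumptions and check the repo before relying on them:

```python
# Sketch: a 19-neuron liquid head (12 inter + 6 command + 1 motor) standing in
# for the fully connected layer on top of a conv backbone. API names assumed
# from the `ncps` package docs; verify against the actual repository.
import torch
from ncps.wirings import NCP
from ncps.torch import LTC

wiring = NCP(
    inter_neurons=12,              # interneuron layer
    command_neurons=6,             # recurrent command layer
    motor_neurons=1,               # single steering output
    sensory_fanout=4,              # synapses out of each sensory input
    inter_fanout=4,                # synapses out of each interneuron
    recurrent_command_synapses=4,  # recurrence inside the command layer
    motor_fanin=6,                 # synapses into the motor neuron
)
head = LTC(32, wiring)             # 32 = feature size from the CNN backbone

feats = torch.randn(1, 50, 32)     # (batch, time, features) from the backbone
steering, state = head(feats)      # one steering command per time step
print(steering.shape)              # expected: torch.Size([1, 50, 1])
```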

  • @AnExplorer1000 · 11 months ago

    @alaapdhall8541 Hi there! I don't comment very often, but I have a couple of questions for you if you don't mind. You seem very knowledgeable about these things. As for myself, I love mathematics and work on it whenever I can, but I don't know anything about LNNs, ML, or AI. How did you reach a point where you understood all these things you're commenting about? Did you perhaps go through Andrew Ng's course or something similar? What's your educational background? Thanks in advance.

  • @GreenCowsGames · 11 months ago

    @AnExplorer1000 If you binge a bunch of YouTube videos on deep learning you can learn a lot; then the jump to reading papers will not be that big. Channels like Yannic Kilcher are really good. If you want to implement things, there are lots of resources on the PyTorch website itself.

  • @han_0210 · 11 months ago

    As she mentioned in the video, you can see the liquid time-constant equation for each neuron, and she also mentioned how they change the wiring of the network.

  • @education.online_frevryone · 9 months ago

    I was wondering a few days ago about black boxes and now we have liquid neural networks. Amazing 😍

  • @sydneyrenee7432 · 6 months ago

    When is this going to be used for NLP and AGI?

  • @kkviks · 11 months ago

    Interesting!

  • @thorvaldspear · 11 months ago

    It's interesting how this team has been talking about this invention for over a year, and yet has failed to gather significant attention despite the revolutionary qualities of liquid neural networks. Perhaps there is a catch that they are not telling us about?

  • @sidneymonteiro3670 · 11 months ago

    MARKETING! The commercial products capture the headlines. It has nothing to do with there being a catch.

  • @BB-uy4bb · 11 months ago

    @sidneymonteiro3670 Ah, the team members of the big labs/companies always scan/read through all the new papers; if this were so much better than what we currently have, everyone would use it. Science in this area does not need marketing at all.

  • @jeremykothe2847 · 11 months ago

    @sidneymonteiro3670 So why has nothing been commercialised?

  • @thad1300 · 11 months ago

    @jeremykothe2847 You guys are talking as if all the currently existing work was in the research phase a year ago lol. Even the pathway from "Attention Is All You Need" to recent LLMs took a few years, and that's really fast. The recent explosion in AI is really hardware-driven: the realization that as long as you just add more compute power, your models become a lot better. But the fundamental research on AI/ML was done decades ago. We're hitting a hardware compute wall soon, and further improvements will be made on the algorithmic side.

  • @NeonTooth · 10 months ago

    I think there are much better ways of conceptualizing neural networks in development that will drastically lower the compute necessary to run them, and liquid neurons are certainly an example of this. I mean, you can run it on a Raspberry Pi. That's the kind of thing that will make models far more accessible to open-source folks, as well as strip larger tech companies of their monopolies over the models.

  • @ophthojooeileyecirclehisha4917 · 11 months ago

    Thank you

  • @Kugelschrei · 11 months ago

    So basically liquid networks are neural networks with differential equations instead of "standard" sigmoid activation functions?

  • @Jukau · 10 months ago

    Isn't that another huge step in the direction of AGI?

  • @erikdong · 11 months ago

    Bravo!

  • @aboucard93 · 11 months ago

    This is amazing

  • @hellucination9905 · 6 months ago

    I'm no expert, just curious, but it seems to me like (1) a form of continuous self-reflexivity regarding the specific neuronal changes produced within the liquid network in relation to the produced output effects; and (2) a mapping of the causal, time-dependent relationships between these internal neuronal reconfigurations and the specific output effects they produce.

  • @GBlunted · 11 months ago

    No link to her paper?? That's messed up...

  • @pytebyte · 11 months ago

    Wondering why, in the first driving example, the camera input stream is quite noisy, but when they switch to the liquid network it's smooth. Anyway, interesting work.

  • @LabGecko · 11 months ago

    First, a few assumptions: 1) I think they probably developed the driving in-house, so it isn't likely to have the same richness of data as something like Tesla or Waymo. 2) I doubt they're employing the same level of computational power as Tesla, Google, et al. for this project. I haven't read their paper on the topic, so this is completely off the cuff; take it as such. Given that, I think the first is grainy simply because the model's data is noisier than it had to be, and possibly not finely tuned, so it has some inherent bias issues that make it pay attention to more details than it needs. As for the liquid net, that's simply the nature of derivative math: it (tends to) smooth out lower-level formulas, so it makes sense to me that the image-recognition result is going to be more gradients than pixel-sharp black-and-white decisions.

  • @pytebyte · 11 months ago

    @LabGecko Thank you for the answer. Maybe I've got it completely wrong, but when I read "camera input stream" I assume we are talking about untouched data coming directly from the car's camera, so whatever model processes this data in the next step gets the same quality. Their presentation gives me the feeling they wanted to show more noise in the attention map, and that's why they added some extra noise to the camera input feed.

  • @LabGecko · 11 months ago

    @pytebyte Good chance you're right; good point. I certainly can't be sure, not having been there myself. :D

  • @rheedan · 11 months ago

    If I understand it correctly, the noise in the attention map isn't noise from the camera feed. The noisy bright parts of the image represent the attention weights of the model; in other words, the bright pixels are areas the model thinks are important for making its predictions. The problem with the CNN is that it pays attention to lots of things humans wouldn't pay attention to, while the liquid network has a tighter focus and pays attention in a way that makes sense to us.

  • @johnniefujita · 11 months ago

    I learned about liquid neural networks something like 3 years ago... but I just couldn't find any base model to implement them... Does anyone know where to find a repository with code to implement one?

  • @Tbone913 · 11 months ago

    The inventor's repo.

  • @kipling1957 · 11 months ago

    Relevance realization

  • @prilep5 · 11 months ago

    Imagine Lewis Hamilton training an AI bot, if only a computer could scan his brain and eye movements to learn his decision making.

  • @SaiyanGokuGohan · 11 months ago

    Neuromorphic computing is where it’s at, using spiking neural networks.

  • @chrisf1600 · 11 months ago

    Great comment. It's striking that none of the current ANN approaches uses "spikes" of activation. That's totally unlike how our brains work. Presumably, evolution uses spiking neurons for a reason. I wonder if the AI industry is heading down yet another blind alley?

  • @Asha-td7bm · 11 months ago

    Amazing

  • @PeterMoueza · 11 months ago

    7:20 causal

  • @sumansaha295 · 11 months ago

    Hmm I thought the hype had died down on these. Hopefully something good comes out of it so people don't need server farms to do AI research.

  • @broyojo · 11 months ago

    I would have liked to see a transformer model comparison, seeing as it is the current state of the art for many AI problems

  • @michaelm6928 · 11 months ago

    You can probably make the transformer “liquid”

  • @DS-nv2ni · 11 months ago

    You cannot make a transformer "liquid"; embeddings are not compatible with this approach. Regarding transformers being the SOTA for AI problems, I'm not sure about that. They work great for NLP problems and generative content creation, but those only overlap with a fraction of the issues that are important to solve through AI, and not even the most important ones. On top of that, the results are not good enough. Transformers have no causality to start with, and even if liquid networks seem to have causality, they can't really "understand": a device that can represent causality doesn't necessarily have an understanding of it (like when you program a computer). So transformers are two steps behind what we need, and liquid networks seem to be a step forward in that direction, yet they can't solve the problems at which transformers excel. The "understanding" step we are missing is more than a step; it's a long run, and it doesn't look close at all, probably decades from now. At the moment AI is mostly hype, unfortunately. EDIT: Typos.

  • @skadday · 11 months ago

    @DS-nv2ni You don't have a single clue what you are talking about.

  • @DS-nv2ni · 11 months ago

    @skadday I think I have a clue and an informed opinion, after spending twenty years as a researcher and engineer on AI systems. On the other hand, someone who, like you, jumps on a topic saying that others "can't understand", without even pointing out why in a reasonable way, is exactly the type of person who doesn't understand the topic but has some strong bias to defend. I can't even imagine how I may have triggered you; perhaps it was the part about AI being hype, just because I've had previous experience of people deranging over that. I hope for your sake that is not the case and you have some valid point to make; otherwise I suggest you find a better way to spend your time.

  • @skadday · 11 months ago

    @DS-nv2ni You clearly don't even have an argument.

  • @The-Martian73 · 11 months ago

    The rise of AI is exponential, meaning it is an industrial revolution. I am not freaked out though; things really are, and will be, going naturally as expected!

  • @GT-tj1qg · 11 months ago

    This talk seems a bit off. Any of that testing she showed could be completely unfair and we would have no way to know without looking at her data. How many neurons did they give the deep nets vs their liquid nets? Did they use the same training data for both? Did they choose an unusually small dataset to make the deep net underperform?

  • @mariomariovitiviti · 9 months ago

    This is huge: robust under data-distribution variance by targeting more task-relevant features. This means less data is necessary for continuous learning, which is the only (and super costly) way to keep a model in production.

  • @tallwaters9708 · 10 months ago

    But just because that first car is focused on, e.g., the side of the road (which is perhaps a heuristic visualization anyway), that's not necessarily bad. What if, for example, there's a kid on the side of the road? I'd want the network to be on the lookout for that!

  • @FrankAbagnale1 · 11 months ago

    How do I implement this in TensorFlow, PyTorch, etc., as a programmer? My brain fried at the math part.

  • @balakrishnaprabhunallendra999 · 11 months ago

    She should have been given the opportunity/a facility to sit down and present her material!

  • @DistortedV12 · 11 months ago

    She needs a chair for sure

  • @fog3911 · 11 months ago

    @DistortedV12 Aw man

  • @raphaelcardoso7927 · 11 months ago

    Maybe she was offered a chair and declined. We all know that standing during a presentation makes it easier to retain attention.

  • @gaetanomontante5161 · 11 months ago

    My dear friend, that is irrelevant. I am glad she stood tall, very tall, when presenting us with such an innovation, one that has the potential to truly change many of the "old" ways we look at things. I am a journeyman, but I was totally awed by the ability of this new approach, liquid neurons, to deliver effective solutions to many problems. I want to kiss her own liquid neurons with true human love. One thing seems a little strange at the time of my writing: despite there having been 10,502 views, I witness only 257 likes, including mine, and ONLY 38 previous comments, and, may I add, most of them perfunctory and at least one totally inane.

  • @escesc1 · 11 months ago

    I doubt she was not offered a chair. She simply preferred to stand up :D

  • @guten5221 · 9 months ago

    Please make something like Skynet.

  • @MrChaluliss · 11 months ago

    How y'all gonna not link the papers relevant to this talk?

  • @greyowlaudio · 11 months ago

    The entire field of AI is at risk with companies now pay-walling off their data and/or charging obscene amounts to use it. That's a major short-term spectre that will need to be dealt with.

  • @Voltlighter · 11 months ago

    So they work more similarly to real neurons then?

  • @andybrice2711 · 11 months ago

    Surely it's entirely reasonable that an autonomous vehicle would be paying attention to bushes at the side of the road? There could be potential hazards obscured by those bushes. Like people or animals behind them.

  • @GT-tj1qg · 11 months ago

    In an advanced system, perhaps that would be a very good feature to include. But I suspect the models they were demonstrating here were just designed for lane-keeping (finding the road and staying on it)

  • @Aldraz · 11 months ago

    I mean this is cool and all, but can it be applied to language models?

  • @LabGecko · 11 months ago

    Of course. It's just a different dataset. The neuron structure/math just allows it to learn post-training, which should be a definite advantage for LMs.

  • @Aldraz · 11 months ago

    @LabGecko I am not so sure about that; transformers are very different. Even if it did work, it may not work as efficiently.

  • @LabGecko · 11 months ago

    @Aldraz We're talking about predicting language tokens, right? To me they're just a gradient over a list of sounds with a statistical chance of being used based on their neighbors, and image data is effectively a grayscale gradient with a statistical chance of being useful based on what is around it. Am I missing a piece?

  • @Aldraz · 11 months ago

    @LabGecko I guess you are correct, but with my limited understanding, you wouldn't be able to easily switch over and use transformers as before, because most transformers are not RNN-based. But I could be wrong.

  • @clray123 · 11 months ago

    @LabGecko The current transformer-based algorithms process discrete sequences of tokens. CNNs were tried in the beginning, and they struggled with modeling language, in which there are strong time/causal dependencies between tokens at different distances in the sequence that CNNs can't capture well. RNNs did OK in principle, but they did not scale, because they could not be trained on an entire sequence of tokens in parallel like transformers can today. I have no idea whether liquid NNs suffer from the same problem, but the comparisons to LSTMs and RNNs do not bode well.

  • @sachinknight19 · 10 months ago

    ❤❤❤

  • @Gabcikovo · 11 months ago

    8:26

  • @Madayano · 10 months ago

    👍

  • @blengi · 11 months ago

    What's the killer app using this, versus AI products like FSD, GPT-4, Midjourney, AlphaFold, etc., that are already changing the world?

  • @samaBR_85 · 11 months ago

    If it gives good results and uses less energy, it's already killer!

  • @blengi · 11 months ago

    @samaBR_85 So beyond self-aggrandizing claims, the market must already be implementing this self-evidently superior technology in some awesomely popular application or twenty that I can download or read drooling reviews about. What are they?

  • @GT-tj1qg · 11 months ago

    I don't know the killer app as you describe it, but this is a clue to finding it: the main difference of liquid nets is that they try to find fundamental relationships, rather than statistical clusters. This has the benefit of being less noisy and more consistent, but at the cost of potentially learning a completely wrong solution to the problem.

  • @gpt-jcommentbot4759 · 7 months ago

    @blengi There is no "app". AI takes a while to actually get recognized in the market; just look at GPT-3, which was widely known in ML but not really talked about outside of it. Besides, apps are probably not going to recognize this and will just use the same generic architecture over and over again, until something truly revolutionary arrives, and then they will use that over and over again.

  • @ApteraEV2024 · 11 months ago

  • @alirezagoudarzi1915 · 11 months ago

    Amini and Hasani, two Iranians, are leading this project. It amazes me how these boys are changing the world!! 👏👏

  • @yogiwp_ · 10 months ago

    This seems like a bigger breakthrough than anything else in the AI news over the past couple of months?

  • @nias2631 · 11 months ago

    Are these based upon Liquid State Machines or entirely different?

  • @exmachina767 · 11 months ago

    According to ChatGPT: “Liquid neural networks (Liquid NNs) and liquid-state machines (LSMs) are closely related concepts, often used interchangeably or as variations of the same idea. Both Liquid NNs and LSMs emphasize the utilization of continuous dynamics and interaction to process information.

    Liquid-state machines were introduced as a type of recurrent neural network inspired by the behavior of liquid matter. In LSMs, the computational units, often called "liquid neurons," have continuous activation dynamics governed by nonlinear differential equations. These units interact with each other through dense and recurrent connections, forming a liquid-like medium. The dynamics of the liquid neurons allow for the computation of temporal patterns and the processing of time-varying information.

    The term "liquid neural networks" is often used to refer to a broader class of unconventional neural network architectures that share similarities with liquid-state machines. While LSMs specifically emphasize the liquid metaphor and dynamics, liquid neural networks can encompass a wider range of architectures that incorporate liquid-like properties.

    In essence, liquid neural networks and liquid-state machines are closely related concepts that aim to harness the power of continuous dynamics and interaction in neural computation. The distinction between them may lie in the specific architectural variations, training algorithms, or implementation details, but they share the common goal of utilizing liquid-like properties for information processing.”

  • @nias2631 · 11 months ago

    @exmachina767 So it's in the reservoir-computing family. LSMs have been around since 2002 or so. If this group is rebranding them, I guess I will have to go through their paper and see what is so different.

  • @Viewpoint314 · 11 months ago

    What is a liquid neural network? "Liquid Neural Networks: Definition, Applications..." A liquid neural network is a time-continuous recurrent neural network (RNN) that processes data sequentially, keeps the memory of past inputs, adjusts its behavior based on new inputs, and can handle variable-length inputs to enhance the task-understanding capabilities of NNs. (May 31, 2023)

  • @MrChaluliss · 11 months ago

    Kind of strange just how quickly she's going through these slides; like, what the heck else is so important that this needs to be rushed? This seems like a significant breakthrough. Maybe I am overreacting, but I get the sense this is one of those big steps forward that may make a big difference in enabling AI to be used in a variety of problem cases.

  • @technolus5742 · 11 months ago

    She likely needs to stay within the allocated time for her talk. While this seems like a breakthrough, this is only a talk.

  • @jackbauer322 · 7 months ago

    Yeah, well, basically they mimic focusing like we do and are robust to context change... at last!

  • @dag410 · 11 months ago

    🎉

  • @Sooyush · 11 months ago

    Daniela & Team❤❤❤

  • @salehisabeyki4275 · 9 months ago

    1:12

  • @redblue2644 · 11 months ago

    Did she say liquid network solutions adapt better because the equations are in effect less complex, and so they focus on less?

  • @LabGecko · 11 months ago

    That isn't what I heard. My understanding is that they adapt better because part of the current data gets re-introduced on a continuous basis, but is smoothed out by the derivative functions. But I'm open to being corrected.

  • @stevenesposito9305 · 11 months ago

    Interesting…

  • @Glowbox3D · 11 months ago

    Stupid question: If Elon and his team weren't aware of this method, and then became aware of it years into developing their own systems, would they potentially re-route their own models, or swap them out entirely for a new method like LNNs? Or are they too far in on their own models to dare touch a new system? I would assume that if a *clearly* better technology comes around, innovators are sort of 'forced' to make the change as well?

  • @TimothyOBrien6 · 11 months ago

    They will switch to using this. Their system is modular enough that they can swap out the black-box neural network for another that has the same inputs and outputs, and it shouldn't be very hard.

  • @GT-tj1qg · 11 months ago

    @TimothyOBrien6 Where did you learn that, I wonder? Everything I've read would indicate that Tesla has dedicated vast computational resources to their existing deep-neural-network system, and discarding it would waste a significant financial investment.

  • @GT-tj1qg · 11 months ago

    Glowbox, I'm not sure this is what the creators are claiming it to be. I suspect it has more limitations than they are letting on.

  • @technolus5742 · 11 months ago

    @GT-tj1qg My guess is that they will be forced to change if a better route becomes apparent. Continuing to sink money into something that doesn't work very well would be the worse alternative. Besides, the data they have collected and their current model can be used to train the new model, avoiding the issue of having to start from scratch.

  • @StephenRoseDuo · 6 months ago

    Isn't this from ~4 years ago?

  • @ps3301 · 11 months ago

    There are no simple math demonstrations of this model

  • @benhammond6717 · 1 month ago

    Is this what Tesla is using in their latest FSD update?

  • @technowey · 11 months ago

    Neural nets that learn after training are *not* a new idea. I have a book about that, with the algorithms, that I bought in 2015. If I had access to my library now, I'd post the title. I'll watch this video to see if these are similar algorithms. I'm skeptical about the claims of "causality." Even adaptive networks find patterns in data that might show causality; however, they will also find correlations that do *not* necessarily reflect a causal relationship.

  • @technolus5742 · 11 months ago

    Looking at those attention graphs, it does seem to do well regarding causality, sifting better through the noise.

  • @jebprime · 10 months ago

    It creates a representation of the environment that should converge to an equilibrium or some sort of pattern unless new input changes its internal representation of the environment. I think that's what they use to support their notion of causality.

  • @georgetrench2809 · 10 months ago

    The thing is that with the incredible number of neurons and all the interactions that take place between them, it is very difficult, if possible at all, to fathom how a transformer network arrived at its conclusions, whereas with the liquid neural network this all becomes quite evident...

  • @krox477 · 11 months ago

    What is the "liquid" here?

  • @LabGecko · 11 months ago

    If I had to guess, it's the derivational smoothing of the equations handling the input.

  • @shubhamdhapola5447 · 11 months ago

    It's the ability of the network to learn new patterns during inference on real-world test data. The canonical way, for traditional networks, is to perform all the "learning" during the training phase and to become rigid/static once sufficiently trained. Another aspect of it being "liquid" is that it can handle variable-length time-series data. Link to the paper: arxiv.org/pdf/2006.04439.pdf
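    A small illustration of the variable-length point: because the state evolves in continuous time, the gap between samples is just another input to the integrator, so irregularly sampled, variable-length streams need no padding. A toy, untrained sketch (names illustrative; same LTC form as in the paper linked above):

```python
# "Liquid" in practice: integrate the hidden state over whatever time gaps
# the data has; nothing below is trained, it only shows the mechanics.
import numpy as np

rng = np.random.default_rng(1)
N, M = 6, 2                              # hidden neurons, input channels
W_r = 0.1 * rng.normal(size=(N, N))      # recurrent weights (untrained here)
W_i = 0.1 * rng.normal(size=(N, M))      # input weights (untrained here)
tau, A = np.ones(N), rng.normal(size=N)

def step(x, I, dt):
    f = np.tanh(W_r @ x + W_i @ I)
    return x + dt * (-(1.0 / tau + f) * x + f * A)

times = [0.0, 0.1, 0.35, 0.4, 1.0]       # irregular timestamps
obs = rng.normal(size=(len(times), M))   # one observation per timestamp
x = np.zeros(N)
for i in range(1, len(times)):
    x = step(x, obs[i], dt=times[i] - times[i - 1])  # dt varies per gap
print(x)
```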

  • @krox477 · 11 months ago

    @shubhamdhapola5447 Thanks, I'll definitely learn about it.

  • @samaBR_85 · 11 months ago

    That would be great running on a photonic chip.

  • @GT-tj1qg · 11 months ago

    Photonic chips are irrelevant to this topic.

  • @testales · 10 months ago

    It's a little bit of cheating to give one NN a noisy camera input stream and the other a clear stream, isn't it? ;-) Either way, I'm looking forward to some implementations of this.

  • @lostpianist · 11 months ago

    This seems like a natural consequence of improved tech rather than a 'new idea', but all discovery is serendipity, really, anyway. Cool.

  • @LabGecko · 11 months ago

    No, their method of manipulating the math is groundbreaking. Current models need _billions_ of neurons to do things like what OpenAI has done with ChatGPT, and this model does the same with *_19!?_* Seriously groundbreaking.

  • @kayakexcursions5570 · 11 months ago

    I agree. Nothing to see here.

  • @f.jideament · 11 months ago

    @LabGecko How do you know their efficiency and precision are the same? Have you checked and compared the data for every possible problem? What is the definition of "better" here?

  • @krox477 · 11 months ago

    It's all math and probability under the hood

  • @LabGecko · 11 months ago

    That's life

  • @katherandefy · 11 months ago

    Hopefully YT does not delete the paper link for this idea, since the platform hosting these talks does not advertise the work directly but is neutral, or that is my assumption… See the link in my reply to myself here.

  • @SboNtuli. · 11 months ago

    Crazy that manual cars are going to be a fairytale we tell our great-grandkids one day.

  • @NeonTooth · 10 months ago

    This is cool and all, but I recommend watching the original talk by Ramin Hasani. The salience map she shows for the traditional network is made intentionally bad by introducing noise into the input image, whereas the liquid neuron example is not affected by the noise. A slightly dishonest representation of the results.
