AI Pioneer Shows The Power of AI AGENTS - "The Future Is Agentic"

Science & Technology

Andrew Ng, founder of Google Brain and co-founder of Coursera, discusses the power of AI agents and how to use them.
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? ✅
forwardfuture.ai/
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew-berman-youtube
USE CODE "MatthewBerman" for 50% discount
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V
Links:
HuggingGPT - • NEW HuggingGPT 🤗 - One...
ChatDev - • How To Install ChatDev...
Andrew Ng's Talk - • What's next for AI age...
Chapters:
0:00 - Andrew Ng Intro
1:09 - Sequoia
1:59 - Agents Talk
Disclosure:
I'm an investor in CrewAI

Comments: 494

  • @e-vd (a month ago)

    I really like how you feature your sources in your videos. This "open source" journalism has real merit, and it separates authentic journalism from fake news. Keep it up! Thanks for sharing all this interesting info on AI and agents.

  • @Chuck_Hooks (a month ago)

    Exponentially self-improving agents. Love how incremental improvement over a period of years is so over.

  • @andrewferguson6901 (a month ago)

    I'm expecting DeepMind to just pop off at any point with an AI that plays the game of making an AI.

  • @aoeu256 (a month ago)

    When did the information age end and the AI age begin, haha. I still think we need to figure out how to make self-replicating robots (that replicate themselves at half size each generation) by making them out of Lego-like blocks, and then have the blocks be cast from a mold that the robot itself makes. Once hardware (robots) improves, the capabilities of software can improve.

  • @wrOngplan3t (a month ago)

    @@aoeu256 Oh come on now, you know how that'll end. Admit it, you've watched Futurama :D

  • @jonyfrany1319 (a month ago)

    Not sure if. I love that

  • @paulsaulpaul (a month ago)

    It may refine the quality of results, but it won't teach itself anything new or have any "ah hah!" moments like a human thinker. There will be an upper limit to any exponential growth due to eventual lack of entropy (there's a limit to how many ways a set of information can be organized). Spam in a can is a homogenous mixture of meat scraps left over from slaughtering pigs. It's the ground up form of the parts that humans don't want to see in a butcher's meat display. LLMs produce the spam from the pork chops of human creativity. These agents will produce a better looking can with better marketing speak on the label. Might have a nicer color and smell to it. But it's still spam that will never be displayed next to real cuts of meat. Despite how much the marketers want you to think it's as good as or superior to the real thing.

  • @stray2748 (29 days ago)

    LLM AI + "self-dialogue" via reflection = "Agent". Multiple "Agents" together meet. User asks them to solve a problem. "Agents" all start collaborating with one another to generate a solution. So awesome!
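
    A minimal sketch of what that reflect-then-revise loop can look like in Python, assuming the OpenAI chat-completions client (any chat backend would do); the ask() helper, model name, and prompts are illustrative, not from the talk:

      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set; swap in any chat-completion backend

      def ask(prompt: str, model: str = "gpt-3.5-turbo") -> str:
          # One stateless chat-completion call.
          resp = client.chat.completions.create(
              model=model, messages=[{"role": "user", "content": prompt}]
          )
          return resp.choices[0].message.content

      def reflective_answer(task: str, rounds: int = 2) -> str:
          # Draft, self-critique, revise: the "self-dialogue via reflection" loop.
          draft = ask(f"Solve this task:\n{task}")
          for _ in range(rounds):
              critique = ask(f"Task:\n{task}\n\nDraft:\n{draft}\n\nList concrete flaws or omissions in the draft.")
              draft = ask(f"Task:\n{task}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n\nRewrite the draft, fixing every point in the critique.")
          return draft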

  • @ihbrzmkqushzavojtr72mw5pqf6 (27 days ago)

    Is self-dialogue the same as Q*?

  • @stray2748 (26 days ago)

    @@ihbrzmkqushzavojtr72mw5pqf6 I think it's the linchpin they discovered to be a catalyst for AGI. Albeit with self-dialogue + multimodality being trained from the ground up in Q* (something ChatGPT did not have in its training). Transformers were built on mimicking the human neuron (Rosenblatt perceptron); okay, now following human nature, let's train it from the ground up with multimodal data and self-dialogue (like humans possess).

  • @Korodarn (26 days ago)

    @@ihbrzmkqushzavojtr72mw5pqf6 Not exactly; Q* is pre-thought, before inference is complete. The difference is planning: if someone asks you a question like "how many words are in your response?", you can think about it and come to a conclusion, like saying 'One'. But if you don't have pre-thought, you're doing simple word prediction every time, and the only way to get that outcome is if something akin to key/value pairs passed into the LLM at some point gives it the idea to try that in one shot. Even if it has a chance to iterate, it'll probably never reach that response without forethought.

  • @enriquea.fonolla4495 (20 days ago)

    Give it a couple more AI models, like world simulators, and a little bit of time... and then something similar to what we refer to as consciousness may emerge from all those interactions.

  • @defaultHandle1110 (14 days ago)

    They’re coming for you Neo.

  • @8691669 (a month ago)

    Matthew, I've watched many of your videos, and I want to thank you for sharing so much knowledge and news. This latest one was exceptionally good. At times, I've been hesitant to use agents because they seemed too complex, and didn't work on my laptop when I tried. However, this video has convinced me that I've been wasting time by not diving deeper into it. Thanks again, and remember, you now have a friend in Madrid whenever you're around.

  • @janchiskitchen2720 (a month ago)

    The old saying comes to mind: think twice, say once. Perfectly applicable to AI, where the LLM checks its own answer before outputting it. Another excellent video.

  • @richardgordon (29 days ago)

    Your commentary "dumbing things down" for people like me was very helpful in understanding all this stuff. Good video!

  • @carlkim2577 (a month ago)

    This is one of the best vids you've made. Good commentary along with the presentation!

  • @BTFranklin (a month ago)

    I really appreciate your rational and well-considered insights on these topics, particularly your focus on follow-on implications. I follow several AI News creators, and your voice stands out in that specific respect.

  • @samhiatt (a month ago)

    Matthew is really good, isn't he? I want to know how he's able to keep up with all the news while also producing videos so regularly.

  • @JohnSmith762A11B (a month ago)

    Excellent video. Helped clear away a lot of fog and hype to reveal the amazing capabilities even relatively simple agentic workflows can provide.👍

  • @jonatasdp (a month ago)

    Very good Matthew! Thanks for sharing. I built my simple agent and I see it improving a lot after a few interactions.

  • @youri655 (a month ago)

    Great point about combining Groq's inference speed with agents!

  • @saadatkhan9583 (a month ago)

    Matthew, everything that Prof. Ng referenced, you have already covered and analyzed. Much credit to you.

  • @user-en6ot9ju7f (a month ago)

    Thank you so much for all your videos. You are gold. Please never stop!

  • @hansenmarc (23 days ago)

    My favorite turnaround of all time. Thanks for sharing your versions.

  • @JacquesvanWyk (11 days ago)

    I have been thinking about agents for months without knowing what I was thinking of, until I found videos like the CrewAI and swarm-agent ones, and my mind is blown. I am all in for this and trying to learn as much as I can, because this is for sure the future. Thanks for all your uploads.

  • @NateMina (a month ago)

    You are probably my number one source for bleeding-edge info and explanations on AI and AI agents. Keep it up and great job, Matthew! You were one of the fleeting influences in my learning AI, and basically learning Python for that matter. Now that I can use AI as a personal tutor for free, anyone can learn anything, way better than being in a classroom, because having an AI tutor is way better than a human one.

  • @ronald2327 (a month ago)

    All of your videos are very informative and I like that you keep the coding bugs in rather than skipping ahead, and you demonstrate solving those issues as you go. I’ve been experimenting with ollama, LM studio, and CrewAI, with some really cool results. I’ve come to realize I’m going to need a much more expensive PC. 😂

  • @timh8490 (a month ago)

    Wow, I’ve been a big believer in agentic workflows since I saw your first video on chatdev and later on autogen. It’s really validating to hear someone of this stature thinking along the same lines

  • @AINEET (a month ago)

    You upload on the least expected random times of the day and I'm all for it

  • @matthew_berman (a month ago)

    LOL. Keeping you on your toes!

  • @holdthetruthhostage (a month ago)

    Haha 😂

  • @SuperMemoVideo (10 days ago)

    As I come from neuroscience, I insist this must be the right track. The brain also uses "agents", which are more likely to be called "concepts" or "concept maps". These are specialized portions of the network doing simple jobs such as recognizing a face, or recognizing the face of a specific person. Tiny cost per concept, huge power of the intellect when working in concert and improved dynamically.

  • @narindermahil6670 (24 days ago)

    I appreciate the way you explained every step, very informative. Great video.

  • @animalrave7167 (7 days ago)

    Love your breakdowns! Adding context and background info into the mix. Very useful.

  • @d.d.z. (a month ago)

    Thank you. Great analysis

  • @existentialquest1509 (22 days ago)

    I totally agree - I was trying to make this case for years - but I guess technology has now evolved to the point where we can see this as a reality.

  • @mayagayam (a month ago)

    Super informative, thank you so much! ❤

  • @StefRush99 (a month ago)

    I'm glad we all seem to be on the same page but I think it would help to use a different word when thinking about the implementation of "Agents". What I think was a breakthrough for me was replacing the word "Agent" with "Frame of mind" or something along those lines when prompting an "Agent" for a task in a series of steps where the "Frame of mind" changes for each step until the task is complete. Not trying to say anything different than what has been said thus far but only help us humans see that this is how we think about a task. As humans we change our "Frame of mind" so fast we often don't realize we are doing it when working on a task. For a LLMs your "Frame of mind" is a new LLM prompt on the same or different LLM. Thanks Matthew Berman you get all the credit for getting into this LLM rabbit hole. I'm also working on a LLM project I hope to share soon. 😎🤯😅
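
    A hedged way to picture that in code: a fixed pipeline where each step swaps in a new "frame of mind" (system prompt) on the same model and hands its output to the next step. The step list, model name, and prompts below are illustrative assumptions, not anyone's published API:

      from openai import OpenAI

      client = OpenAI()

      STEPS = [  # (frame of mind, instruction) pairs; purely illustrative
          ("You are a meticulous planner.", "Break the task into numbered steps."),
          ("You are a pragmatic implementer.", "Carry out the plan and produce the result."),
          ("You are a harsh reviewer.", "Point out errors, then output a corrected final version."),
      ]

      def run_pipeline(task: str, model: str = "gpt-3.5-turbo") -> str:
          work = task
          for frame, instruction in STEPS:
              resp = client.chat.completions.create(
                  model=model,
                  messages=[
                      {"role": "system", "content": frame},  # the current "frame of mind"
                      {"role": "user", "content": f"{instruction}\n\n{work}"},
                  ],
              )
              work = resp.choices[0].message.content  # output feeds the next frame
          return work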

  • @kliersheed (28 days ago)

    Agent = actor = compartmentalized entity doing something. I think the word fits perfectly. It's like transistors are simulating our neurons and the agent is simulating the individual compartments in our brain. A frame of mind would be a fitting expression for the supervising AI keeping the agents in check and organizing them to solve the perceived problem. It's like the "me", as in consciousness, ruling the processes in the brain. A frame always has to contain something, and IMO it's hard to say what an agent contains, as it's already really specialized and works WITHIN a frame (not being a frame). Even if you speak of frames as in relation systems, the agent is WITHIN, not one itself. Just my thoughts on the terms ^^

  • @weishenmejames (8 days ago)

    Nice share with valuable commentary throughout, you've got yourself a new subscriber!

  • @virtualalias (a month ago)

    I like the idea of replacing a single 120b (for instance) with a cluster of intelligently chosen 7b fine-tuned models if for no other reason than the hardware limitations lift drastically. With a competently configured "swarm," you could run one or two 7b sized models in parallel, adversarially, or cooperatively, each one contributing to a singular task/workspace/etc. They could even be guided by a master/conductor AI tuned for orchestrating its swarm.

  • @kliersheed (28 days ago)

    Ahem, Skynet. :D But I agree.

  • @jakeparker918 (a month ago)

    Awesome video. Yeah, this is why I voted for speed in the poll you did; this is what I was talking about.

  • @AC-go1tp (a month ago)

    Great video and valuable clarifications of Andrew Ng's insights. It would also be great if you could make a video that captures all these concepts and notions using CrewAI and/or AutoGen. Thank you Matt!

  • @pengouin (a month ago)

    Excellent video my friend , you are my favorite channel , continue your good work! ❤

  • @federico-bi2w (a month ago)

    ...OK, I can see it's right... having done a lot of "by hand" iterations... I mean, I am not using agents yet... but if you think about it, with GPT you ask something... you test... you adjust... you give it back... and the result is better... and in this process, if you ask questions on the same topic but from a different aspect, it becomes better... so an agent is basically doing this by itself! Great video! Thank you :D

  • @danshd.9316 (17 days ago)

    Thank you, just finished. It's great that you explained it for those who may not be as techie as Ng expected.

  • @CM-zl2jw (a month ago)

    Thank you Matt. I appreciate your explanations, insights and exploration. This is a journey.

  • @lLvupKitchen (28 days ago)

    I saw the original video, but the commentary adds a lot. thx

  • @michaelmcwhirter (28 days ago)

    Great video! Thank you for the insights 🔥

  • @rafaelvesga860 (18 days ago)

    Your input is quite valuable. Thanks!

  • @NasrinHashemian (14 days ago)

    Matthew, your videos are really informative. Many thanks to you for sharing such knowledge and updates. This latest one was exceptionally good.

  • @RasoulGhaderi (14 days ago)

    I love this video. In the long run Advances in A.I surely can be debated for the good of AI Agents, though most will argue that only a few will benefit especially to their pockets, at the end, interesting to see what the future holds.

  • @YousefMilanian (14 days ago)

    I also agree that it will be interesting, take a look at the benefits of the computing age millions of people were made for life simply because they made the right decisions at the time thereby creating lifetime wealth.

  • @RasoulGhaderi (14 days ago)

    I wasn't born into lifetime wealth handed over, but I am definitely on my way to creating one, $715k in profits in one year is surely a start in the right path for me and my dream. Others had luck born in wealth, I have a brain that works.

  • @ShahramHesabani (14 days ago)

    I can say for sure you had money laying around and was handed over to you from family to be able to achieve such.

  • @RasoulGhaderi (14 days ago)

    It may interest you to know that no such thing happened, I did a lot of research on how the rich get richer and this led me to meet, Linda Alice parisi . Having someone specialized in a particular field do a job does wonders you know. I gave her 100 grand at first

  • @seanhynes9516 (2 days ago)

    Awesome thanks for the great perspectives!

  • @TestMyHomeChannel (a month ago)

    I loved this video. Your selection was great and your comments were right to the point and very useful. I like that you test things yourself and provide links to the topics that are discussed previously.

  • @ManolisPolychronides (21 days ago)

    Really cutting edge! Thanks.

  • @luciengrondin5802 (a month ago)

    The iterating part of the process seems more important to me than the "agentic" one. If we compare current LLMs to DeepMind's AlphaZero method, it's clear that LLMs currently only do the equivalent of AlphaZero's evaluation function. They don't do the equivalent of the Monte Carlo tree search. That's what reasoning needs: the ability to explore the tree of possibilities, with the NN being used to guide that exploration.

  • @joelashworth7463 (a month ago)

    What gets interesting about agentic is: what if certain agents have access to different 'experiences', meaning their context window starts with 'hidden' priorities, objectives, and examples of what the final state should look like? Since context windows are limited right now, this is an exciting area. Of course, the other part of agentic vs. iterative is that, since a model isn't really 'thinking', it needs some form of stimulus that will disrupt the previous answer. So you either have to use self-reflection or an external critic; if the external critic uses a different model (fine-tune or LoRA) and is given a different objective, you should be able to 'stimulate' the model into giving radically different end products.

  • @agenticmark (a month ago)

    Something you guys never talk about - the INSANE cost of building and running these agents. It limits developers just as much as compute limits AI companies. The reason agentic systems work is that they remove the context problem. LLMs get off track and confused easily. But if you open multiple tabs and keep each copy of the LLM "focused", it gets better results. So when you do the same with agents, each agent outperforms a single agent that has to juggle all the context. We get better results with GPT-3.5 using this method than you would get in a browser with GPT-4. Basically, you are "narrowing" the expertise of the model. And you can select multiple models and have them responsible for different things. Think Mixtral, but instead of a gating model, the agent code handles the gating.
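
    A rough sketch of the "agent code handles the gating" idea: a cheap routing call picks one narrow specialist, and that specialist then sees only its own short, focused context. The specialist names, prompts, and model choice are made up for illustration:

      from openai import OpenAI

      client = OpenAI()

      SPECIALISTS = {  # each one gets a deliberately narrow system prompt
          "coder": "You only write and fix Python code. Ignore everything else.",
          "writer": "You only draft and edit prose. Ignore everything else.",
          "analyst": "You only analyze data and numbers. Ignore everything else.",
      }

      def chat(messages, model="gpt-3.5-turbo"):
          resp = client.chat.completions.create(model=model, messages=messages)
          return resp.choices[0].message.content

      def route(task: str) -> str:
          # The gating step: a small classification call instead of a learned gating network.
          name = chat([{"role": "user",
                        "content": f"Which specialist fits this task best: {', '.join(SPECIALISTS)}?\n"
                                   f"Task: {task}\nAnswer with one word."}])
          name = name.strip().lower()
          return name if name in SPECIALISTS else "writer"

      def handle(task: str) -> str:
          name = route(task)
          history = [{"role": "system", "content": SPECIALISTS[name]},
                     {"role": "user", "content": task}]  # fresh, narrow context per specialist
          return chat(history)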

  • @DaveEtchells (a month ago)

    I’m really intrigued by your multi-tab workflow, it sounds super powerful, but I’m not sure how it works in practice. Do you have the different tabs working on different sub-tasks or performing different roles (kind of a manual agentic workflow, but with human oversight of each of the zero-shot workers), or are they working in parallel on the same task, or … ? IANAP, but I need to have ChatGPT (my current platform, or it could be Claude or whatever) do some fairly complex tasks like parsing web pages and PDFs to navigate a very large dataset and use reasoning to identify significantly-relevant data, download and assemble it into a knowledge database that I’ll then want to use as test input for another AI system. Ideally I’d use one of the no-code/low-code agent dev tools to automate the whole thing but as I said IANAP, and just multi-tabbing it could get me a long way there. It sounds like whatever you’re doing is exactly what I need to - and likely a boatload of others as well: I do wish someone would do a video on it. Meanwhile, would you be willing to share a brief description of an example use case and what you’d have the various tabs doing for it? (I hope @matthew_berman sees this and makes a vid on the topic: Your comment is possibly the most important I’ve ever encountered on YT, at least in terms of what it could do for my work and personal life.) Thanks for the note!

  • @japneetsingh5015 (a month ago)

    You don't always need state-of-the-art models like GPT, Gemini, Claude, etc.; many open-source 7B models work just as well for most companies.

  • @DefaultFlame (a month ago)

    @@japneetsingh5015 Yeah, llama, mistral, mixtral, the list goes on. If you want something even more lightweight than 7B, stablelm-zephyr is a 3B that is surprisingly capable. Orca-mini is good too and comes in 3B, 7B, 13B, and 70B versions so you can pick whichever you want based on your hardware.

  • @user-bd8jb7ln5g (a month ago)

    What you're saying is: attention is all you need 😁 I do agree that mixing goals will confuse models, as it could people. People, however, have already learned processes to compartmentalise tasks. We might have to teach agents to do that, apart from constructing them to minimize this confusion.

  • @DefaultFlame (a month ago)

    @@user-bd8jb7ln5g The whole point of multiple agents with different "jobs," personalities, or even different models powering them, is that we can cheat. The point of multiple agents is that we don't **need** to teach a single agent or model those learned processes, we can just connect several that each do each part, each agent taking on the role of different parts of a single functional brain.

  • @marshallodom1388 (a month ago)

    I convinced my chat AI that our new, mutually conceived idea of "think before you speak" is extremely helpful for both of us.

  • @dhruvbaliyan6470 (24 days ago)

    Me, realizing I realized this over a month ago, and was thinking of creating a virtual environment where multiple agents work together, each specially fine-tuned for its use case. So my brain is as intelligent as this person's.

  • @mykdoingthings (4 days ago)

    GPT 3.5 cognitive performance going from 48% to 95%+ by just changing how we interact with the same exact model is WILD! Are we learning that "team work makes the dream work" is true even for AI? I wonder what other common human sayings will cause the next architectural breakthrough in the field🤔 Thank you Matthew for this walkthrough, first time I learn about agentic workflow, Andrew Ng is amazing but you made it even more accessible 🙏

  • @zaurenstoates7306 (a month ago)

    Decentralized, highly specialized agents running on lower parameter count models (7b-70b) working together to accomplish tasks is where I think opportunity lies. I was mining ETH back when it was POW with my gaming rig to earn some money on the side. I did the calculations once and the entire eth computation available was a couple hundred exaflops. With more and more devices being manufactured for AI calculation (phones, GPUs, etc) the available computing will only increase

  • @jets115 (a month ago)

    Imagine an extensive neural network, except instead of weights/biases in the nodes, each node is an agent.

  • @Tayo39 (a month ago)

    Like the internet, where every computer is a node agent/expert?

  • @jets115 (a month ago)

    @NewAccount_WhoDis Don't think of it as a literal NN.. more like expanding the original prompt. If you can ask one researcher, imagine asking 100 with small variations in prompts to each! :)

  • @mintakan003 (a month ago)

    Andrew Ng is actually one of the more conservative of the AI folks. So when he's enthusiastic about something, he has a pretty good basis for doing so. He's very practical. As for this video, good point on Groq. We need a revolution on inference hardware. Also, another point to consider, is the criteria for specifying when something is "good" or "bad", when doing iterative refinement. I suspect, the quality of the agentic workflows will also depend on the quality of this specification, as in the case of all optimization algorithms.

  • @DougFinke (a month ago)

    Good stuff, really like the commentary side by side.

  • @RaitisPetrovs-nb9kz (a month ago)

    I think the real breakthrough will come when we have user-friendly UI and agents based on computer vision, allowing them to be trained on existing software from the user's perspective. For example, I could train an AI agent on how to edit pictures or videos, or how to use a management application, etc. One approach could be to develop a dedicated OS for AI agents, but that would require all the apps to be rewritten to work with the AI agent as a priority. However, I'm not sure if that's feasible, as people may not adopt such a system rapidly. The fastest way forward might be to let the AI agent perform the exact task workflows that I would perform from the UI. This approach would enable the AI to work with existing software without requiring significant changes to the applications themselves.

  • @darwinboor1300 (a month ago)

    Nice review of the field of agents you have built in your videos over the past few months. Next build a team of agents to build an AI to build, refine, optimize, and validate agents and agent teams for various tasks. Now repeat the process.

  • @bobharris5093 (16 days ago)

    this is absolutely fascinating.

  • @elon-69-musk (a month ago)

    awesome analysis

  • @icns01 (21 days ago)

    I did in fact like Andrew's talk, but I liked it even more with your moderation, which was extremely helpful and made a big difference in my understanding of the talk. Just subbed, thank you very much! Off to take a look at your HuggingGPT video 🏃‍♂

  • @evanoslick4228 (a month ago)

    It makes sense to use agents. They can be parallelized and specifically trained where needed.

  • @agilejro (19 days ago)

    Amazing. Multi agents debating... Exciting

  • @greatworksalliance6042 (a month ago)

    I'm considering delving into this space and am curious what your preference is, @Matthew Berman, between AutoGen, CrewAI, and whatever else is most comparable in the current market. What are your current rankings of them, and the optimal current use cases? Might make for a good upcoming video?

  • @d_b_ (a month ago)

    20:00 such a good point!

  • @TrasThienTien (17 days ago)

    Your input is quite valuable

  • @johnh3ss (a month ago)

    What gets really interesting is that you could hook agentic workflows into an iterative distillation pipeline. 1) Create a bunch of tasks to accomplish 2) Use an agentic workflow to accomplish the tasks at a competence level way above what your model can normally do with one-shot inference 3) Feed that as training data to either fine tune a model, or if you have the compute, train a model from scratch 4) Repeat at step 2 with the new model. In theory you could build a training workflow that endlessly improves itself.
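
    A sketch of the data-collection half of that loop, under the assumption that solve_with_agents() stands in for whatever agentic workflow you actually run (CrewAI, AutoGen, a reflection loop) and that the fine-tuning step itself happens elsewhere; the prompts and output format are illustrative:

      import json
      from openai import OpenAI

      client = OpenAI()

      def solve_with_agents(task: str) -> str:
          # Stand-in for a real multi-agent workflow: one draft plus one revision pass.
          draft = client.chat.completions.create(
              model="gpt-3.5-turbo",
              messages=[{"role": "user", "content": task}],
          ).choices[0].message.content
          return client.chat.completions.create(
              model="gpt-3.5-turbo",
              messages=[{"role": "user",
                         "content": f"Task: {task}\n\nDraft: {draft}\n\nImprove the draft and return only the final answer."}],
          ).choices[0].message.content

      def build_distillation_set(tasks: list[str], out_path: str = "distill.jsonl") -> None:
          # Steps 2-3 of the comment: solve above one-shot quality, record as training pairs.
          with open(out_path, "w", encoding="utf-8") as f:
              for task in tasks:
                  answer = solve_with_agents(task)
                  record = {"messages": [{"role": "user", "content": task},
                                         {"role": "assistant", "content": answer}]}
                  f.write(json.dumps(record) + "\n")
          # Step 4: fine-tune a model on out_path with your trainer of choice,
          # then repeat the whole loop with the new, stronger model.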

  • @autohmae (a month ago)

    Let's also remember this is what open source tools were already doing over a year ago, but often these got stuck in loops. I'm really interested in revisiting them.

  • @gotoHuman (21 days ago)

    Or don't start the pipeline with a bunch of tasks, but rather let it be triggered from the outside when a task appears, e.g. in the form of a customer support ticket.

  • @davedave2941 (25 days ago)

    Very interesting. On coding and workflow: having worked with coders with Asperger's, in order to communicate we moved to a very simple process of stating the task and explaining it to the coder through "subject verb, subject verb" and so on; it smoothly flattened communication, and thus the task-to-coding workflow.

  • @samfurlong4050 (a month ago)

    Fantastic breakdown

  • @rupertllavore1731 (a month ago)

    @MatthewBerman, what do you recommend I pick to have more synergistic value as I prepare for the near future? I'm already using ChatGPT Plus and Perplexity Pro, but because of this video I might need to drop one so I can add in AgentGPT. So what do you recommend I pick: Perplexity Pro + AgentGPT, or ChatGPT Plus + AgentGPT? Your advice would truly be appreciated.

  • @Lukas-ye4wz (a month ago)

    Did you know that this is actually how our mind/brain works as well? We have different parts (physical and psychological) that fulfill different roles. That is why we can experience inner conflict: one part of us wants this, another part wants that. IFS teaches about this.

  • @CharlesVanNoland (a month ago)

    As long as we're relying on backpropagation to fit a network to pre-designated inputs/outputs, we're not going to have the sort of AI that will change the world overnight. The future of machine intelligence is definitely agentic, but we're not going to have robotic agents cleaning our house, cooking our food, fixing our house, constructing buildings, etc... unless we have an online learning algorithm that can run on portable hardware. Backpropagation, gradient descent, automatic differentiation, and the like, isn't how we're going to get there. We need a more brain-like algorithm. Throwing gobs and gobs of compute at backprop training progressively larger networks isn't how we're going to get where we're going.

    It's like everyone saw that backprop can do some cool stuff and then totally forgot about brains being the only example of what we're actually trying to achieve. They're totally ignoring that brains abstract and learn without any backpropagation. Backprop is the expensive brute-force way to make a computer "learn". I feel like we're living in a Wright Brothers age right now where everyone believes that the internal-combustion-powered vehicle is the only way humans will ever move around the earth, except it's backpropagation that everyone has resigned to being the only way we'll ever make computers learn, when there's no living sentient creature that even relies on backpropagation to exhibit vastly more complex behaviors than what we can manage with it. A honeybee only has one million neurons, and in spite of ChatGPT being, ostensibly, one trillion parameters, all it can do is generate text. We don't even know how to make a trillion-parameter network that can behave with the complexity of an insect.

    That should be a huge big fat hint to anyone actually paying attention that backprop is going to end up looking very stupid by comparison to whatever does actually end up being used to control thinking machines - and the people who are fully invested in (and defending) backprop are most certainly going to be the last ones who figure out the last piece of the puzzle. When you have people like Yann LeCun pursuing things like I-JEPA, Geoffrey Hinton putting out whitepapers for algorithms like Forward-Forward, and Carmack saying things like "I wouldn't bother with an algorithm that can't do online learning at ~30hz", that should be a clue to everyone dreaming that backprop will get us where we're going that they're on the wrong track.

  • @sup3a (26 days ago)

    Maybe. Though it's fun to hear what people said when the Wright brothers and others tried to crack flying: "this is not how birds fly", "this is inefficient", etc. We "brute forced" flying by just blasting a shit-ton of energy into the problem. Maybe we can do the same with intelligence.

  • @bilderzucht (26 days ago)

    Learning within a single individual brain may happen without any backpropagation. But couldn't the whole evolutionary process, running through billions of brains and arriving at a setup with different brain regions, be seen as some sort of backpropagation?

  • @vicipi4907 (22 days ago)

    I think the idea is to get it to an advanced enough stage where it is competent and reliable, so much so that it expedites the research into something that looks more like the human brain's process as a replacement. We might even get it to a point where it self-improves; there is no reason to think it won't find a different approach that doesn't involve backpropagation. Either way, we can't deny it has great potential and application to make AI advancement significantly faster.

  • @colmxbyrne (18 days ago)

    Progression is rarely linear, and innovation follows the line of optimal use, not the end game. That's why we had the 'stupid' internal combustion engine for over 100 years, melting our planet 😢

  • @Mattje8 (16 days ago)

    This assumes the goal of AI is to mimic a brain. It probably isn’t, mostly because it (probably) can’t, at least using existing compute approaches and current physics. If consciousness involves quantum effects as Penrose puts forward, current physics isn’t there yet. Or maybe it’s neither quantum nor algorithmic but involves interactions we can’t properly categorise today, which may or may not be deterministic. All of which is to say that I basically agree with you that all of the current approaches are building fantastic tools, but certainly nothing approaching sentience.

  • @fernandodiaz8231 (17 days ago)

    Your explanation after each pause was useful.

  • @baumulrich (28 days ago)

    Whether we know it or not, that is how most of us work: we evaluate the prompt, then we do a first pass, then we re-evaluate, then we edit, then we do more, and re-evaluate, check against the prompt, edit, do more work, etc., etc.

  • @peterpetrov6522 (28 days ago)

    The future will be agentic. Yes the future will be bananas. Well said!

  • @ondrazposukie (16 days ago)

    amazingly inspiring and informative video

  • @konstantinlozev2272 (28 days ago)

    If you spend a few exchanges brainstorming different approaches with GPT-4 first, and only then give it the task, it is superb. I can see a pair of agents brainstorming in the future instead.
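
    That two-phase pattern is easy to script as one multi-turn conversation, a brainstorming turn followed by the task turn, so the chosen approach stays in context. The prompts and model name here are just one possible phrasing, not a recommendation from the video:

      from openai import OpenAI

      client = OpenAI()

      def brainstorm_then_solve(task: str, model: str = "gpt-4") -> str:
          # Phase 1: brainstorm approaches before asking for any solution.
          history = [{"role": "user",
                      "content": f"Before solving anything, brainstorm three distinct approaches to:\n{task}"}]
          ideas = client.chat.completions.create(model=model, messages=history).choices[0].message.content
          # Phase 2: only now ask for the solution, with the brainstorm still in context.
          history += [{"role": "assistant", "content": ideas},
                      {"role": "user",
                       "content": "Pick the most promising approach above and now solve the task with it."}]
          return client.chat.completions.create(model=model, messages=history).choices[0].message.content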

  • @stevencord292 (23 days ago)

    This makes total sense

  • @EliyahuGreitzer (20 days ago)

    Thanks!

  • @YorkyPoo_UAV (28 days ago)

    I just started learning how to set up AI last month, but this is what I thought multi-agents or a crew was.

  • @MrJawnawthin (a month ago)

    This is definitely the future. It's the same as using ChatGPT to brainstorm, Claude to write the first draft, and Gemini to critique it - that's my current workflow. Once it becomes repetitive, the workflow is easy to model as an algorithm.

  • @u2b83 (4 days ago)

    I've long suspected that iteration is the key to spectacular results; it's like an ODE solver iterating on a differential equation until it stumbles into a basin of attraction. You could probably do "agents" with just one GPT and loop through different roles. Then again, maybe multiple agents are a crutch for small context windows lol. However, keep in mind that GPT-4 already gives you an iterative solution by running the model as many times as there are tokens.
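
    Looping one model through a cycle of roles until the answer stops changing, roughly the fixed-point idea above, might look like this; the role prompts and the crude stopping rule are illustrative guesses:

      from openai import OpenAI

      client = OpenAI()

      ROLES = ["researcher: add missing facts or considerations",
               "critic: flag anything wrong or unsupported",
               "editor: rewrite cleanly, applying all prior notes"]

      def iterate_roles(task: str, max_sweeps: int = 3, model: str = "gpt-3.5-turbo") -> str:
          def call(system: str, user: str) -> str:
              resp = client.chat.completions.create(
                  model=model,
                  messages=[{"role": "system", "content": system},
                            {"role": "user", "content": user}],
              )
              return resp.choices[0].message.content

          answer = call("You answer tasks directly.", task)  # initial draft
          for _ in range(max_sweeps):
              previous = answer
              for role in ROLES:
                  answer = call(f"Act as the {role}.",
                                f"Task:\n{task}\n\nCurrent answer:\n{answer}")
              if answer.strip() == previous.strip():
                  break  # crude convergence check: the answer has settled into its basin
          return answer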

  • @DefaultFlame (a month ago)

    I've been thinking that this is the future for a while now, partially from my own experience and experiments with what you can get a model to do by prompting it with its own output, as well as having it reflect on it, ever since I got access to GPT-3, and partially thanks to everything I've learned about agents from you, Matthew. (I have spent an embarrassing amount of money fiddling with AI and figuring out its limits, considering that I'm just an interested layman.)

  • @StuartJ (a month ago)

    Maybe this is what Grok 1.5 is doing behind the scenes to get a better score than GPT-4.

  • @NOYFB982 (a month ago)

    With a limited context window, this hits an asymptotic wall very quickly. Keep in mind, I'm not saying the approach is not a big improvement; it is. However, my extensive experience is that it is not able to go nearly far enough. LLMs are still not fully capable of high-performing work; they can still only do basics (or high-level information recall). Perhaps with a larger context window, this would actually be useful.

  • @santiagoc93 (a month ago)

    Agents will be part of the future of LLMs. Just imagine different experts (agents) working on different parts of the app and an agent that's the program manager. You'll be able to create an app in weeks instead of months.

  • @christiandarkin (a month ago)

    Great breakdown as always. I'm a bit scared to play with agents until I can do so on a local LLM. I'm afraid the costs will run away with themselves if I do an ambitious project.

  • @gregkendall3559 (16 days ago)

    You can actually tell a GPT to break itself into multiple separate personalities. Give them each a goal: one can write code, then the next reviews it, and have the one chatbot work it all without resorting to a convoluted separate-agents system. Tell them to talk to each other to get a task done. Name them, Bob and Joe, and tell it to preface their discussion with their names as each one talks. I tried it and the results were very promising.
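
    For anyone who wants to try that, here is one hypothetical way to phrase the single-chat, named-personalities prompt; the wording is invented for illustration, not quoted from the comment or the video:

      # One prompt that asks a single chatbot to role-play a small named team.
      MULTI_PERSONA_PROMPT = """You will simulate two personalities working together.
      Bob is a programmer: he writes the code.
      Joe is a reviewer: he critiques Bob's code and asks for fixes.
      Prefix every line of the discussion with the speaker's name ("Bob:" or "Joe:").
      Alternate between them until Joe approves, then print the final code.

      Task: {task}"""

      def build_prompt(task: str) -> str:
          # Fill the task into the template before sending it to whichever chatbot you use.
          return MULTI_PERSONA_PROMPT.format(task=task)

      if __name__ == "__main__":
          print(build_prompt("Write a function that deduplicates a list while preserving order."))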

  • @GregoryBohus (27 days ago)

    Is it possible for, say, Gemini to iterate on itself if you prompt it correctly in your first prompt? Or do you need to build an application to do that? Can you use the web interface to do it?

  • @rayhere7925 (a month ago)

    Matthew is back from the SHOCK. Glad to have you back again.

  • @mrpro7737 (a month ago)

    This is really good.

  • @paulblart5358 (a month ago)

    It's a very good strategy instead of training single long duration models. I do wonder about security, but the technology is very fascinating.

  • @user-qn7iw4ih3d (a month ago)

    Great videos, thank you! I have a question about this agentic framework that perhaps you can answer... it seems like the iteration process inherent in the likes of AutoGen & CrewAI will be built into the next LLM models (ChatGPT-5, Claude 4, etc.) - does that make AutoGen redundant at that point? Or am I missing something? Thanks

  • @chrispteemagician (a month ago)

    With the way things are going and the power of agentic AI, I'd suggest that Deep Thought would arrive at the number 42 at least three minutes quicker, or within three minutes. There's no way to tell but I reckon Douglas would love this.

  • @saxtant (a month ago)

    Agents have been here a while, but they are very expensive, because the zero-shot output of an LLM still has errors. If I could get enough value from my RTX 3090 to run agents that could actually make progress on something I'm not going to throw away, then I'm all over it. Function calls are only one part of empowering an LLM. Listeners are just as important... tools that just operate on your workflow and can make suggestions to you, which may include a full multi-agent stack to complete a definable task.

  • @destinypuzzanghera3087 (28 days ago)

    The last couple years humanity has gotten an upgrade. Hopefully it will turn into AI Utopia.

  • @denijane89 (a month ago)

    Damn, when I was learning English the expression "he's incredibly bullish" would totally have made me scratch my head. I don't know why people like it so much, as it's very investment-specific. Otherwise, great video; if it wasn't for you, I'd have missed this video by Andrew, and I agree, having agents running in the background on our tasks would speed things up. In the end, the only limit would be our own bandwidth and the rate at which we can come up with new tasks and ideas. I don't know about you, but mine is definitely not infinite.

  • @bobnothing4921 (a month ago)

    I am looking for something like Autogen/GPT Pilot 2, but that is designed for programming for iOS, such as Swift/Xcode. Is there something along those lines?

  • @gene4094 (20 days ago)

    I asked ChatGPT-4 questions about a new hypothetical energy source. The source of energy is water splitting for a hydrogen liquid-phase plasma. The water-splitting reaction has a key material, bismuth ferrite, a nano-catalyst that absorbs a weak infrared electromagnetic wave and refracts it, both the infrared and a water-splitting ultraviolet radiation.

  • @michaelcharlesthearchangel (23 days ago)

    10 years ago, I created the data architecture for AI Agent and AI Congress networks.

  • @TheStandard_io (a month ago)

    Yeah, Sequoia Capital also misled everyone by not doing actual due diligence on FTX. When everyone heard that they invested, no one else did Due Diligence because they assumed Sequoia did. And they did not go to court or get any punishment

  • @rakoczipiroska5632 (a month ago)

    Thank you for your great work. If things go like this, maybe there won't be a requirement for a startup accelerator to include a professional programmer among the founders? Would it be enough if someone is a hobbyist programmer but a professional prompt engineer?

  • @anonymeforliberty4387 (17 days ago)

    I bet you are still gonna need a prompt engineer and programmer, but alone they will do the work of a team.

  • @jbavar32 (a month ago)

    I've been using AI for a couple of years now for a creative workflow (I don't do code), and I've often said AI is like having the most brilliant collaborator on the planet, but it has a slight drinking problem. My question is: how does one create an agent so that one LLM can pass its result to other LLMs? In other words, how do you engage several LLMs, each working on the same problem? It looks like you would need special code or a custom API.
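
    To the question above: no special API is strictly required; the glue can be as simple as two client calls where the first model's text is pasted into the second model's prompt (frameworks like CrewAI and AutoGen mostly wrap this pattern). A sketch, assuming the second model sits behind an OpenAI-compatible endpoint such as a local Ollama or LM Studio server; the URL, model names, and prompts are placeholders:

      from openai import OpenAI

      cloud = OpenAI()  # e.g. a hosted model via the OpenAI API
      local = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")  # e.g. a local OpenAI-compatible server

      def ask(client: OpenAI, model: str, prompt: str) -> str:
          resp = client.chat.completions.create(
              model=model, messages=[{"role": "user", "content": prompt}]
          )
          return resp.choices[0].message.content

      def relay(brief: str) -> str:
          # LLM #1 drafts the creative piece...
          draft = ask(cloud, "gpt-4", f"Write a first draft for this creative brief:\n{brief}")
          # ...and its raw text output simply becomes LLM #2's input.
          return ask(local, "llama3",
                     f"Brief:\n{brief}\n\nDraft from a collaborator:\n{draft}\n\n"
                     "Critique the draft and produce an improved version.")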

  • @snuwan (a month ago)

    This is the way it should work. But the cost could be very high if agents start to iterate a lot, sending a lot of tokens.

  • @blijebij (29 days ago)

    Great video and presentation! I think it's true that agentic is the future, but it also comes with a cost, as this way AI usage probably costs a bit more money. But maybe I am wrong.

  • @user-xh7xs1hh6w (22 days ago)

    It obviously seems that the conversation among people involved in GPT chat training displaces a few cornerstone and important things. The information analyzed, with its statistics, can easily be checked and the purity of its sources qualified, sorting listed public sources by whether they are prudent or not. The set of information passed through the analysis and valued as not proven gives the opportunity to analyze why it is not proven, or misleading, or simply fake. So the discussion was not about agents, as independent and open press are, but about "mediators" who try to place unconfident and fake information instead. P.S. Obviously, this video was placed by one of the previously mentioned "mediators" for certain deceptive purposes and could be such an example of what the conversation was genuinely about.
