No Priors Ep. 41 | With Imbue Co-Founders Kanjun Qiu and Josh Albrecht

Science & Technology

00:00 - Introduction to Imbue
04:55 - The Spectrum of Agent Tasks
10:23 - Specialization and Generalization With Agents
14:08 - Code and Language in AI Agents
21:00 - Evaluating AI Development Tools Efficiently
26:39 - Prioritizing GPU Usage

Comments: 4

  • @aiwithdhruv · 6 months ago

    Really helpful, thanks for the podcast. Got so many ideas to implement.

  • 5 months ago

    🎯 Key takeaways for quick navigation:

    18:20 🧩 *Focus on coding and evaluation* - Focus on coding because it allows objective evaluation, speeds up agent development, and improves agents' ability to act.

    22:08 🚀 *Future vision for agents* - In the future, individual, versatile, user-defined agents that understand natural language and can act on it.

    24:41 💰 *Fundraising and resource allocation* - A large share of the funding goes toward compute, in order to build more efficient models and tools.

    28:12 🖥️ *Why focus on coding?* - Coding enables faster agent development, automation of tasks, and improved code quality.

    Made with HARPA AI

  • @easyaistudio · 5 months ago

    While AutoGPT is big hat, no cattle, I feel like this is the same pitch but big hat, 1 cattle.

  • @Tor1smo · 5 months ago

    Interesting. So part of their idea is to have smaller models focused on particular subtasks. Doesn't this go against the current direction of the field? It was by consolidating reasoning into single large models that really interesting progress was finally made.

    For example, you can't really have a great neural spellchecker without a good amount of world understanding, because general reasoning is necessary to correctly interpret the text. Sure, you can downsize the model, keeping the parts most relevant to the spellchecker. At some point, however, your spellchecker starts making silly mistakes again because it simply doesn't have the general intelligence required, even if it has a great understanding of spelling, grammar, etc. And at that point, aren't we back to double-checking the results ourselves because we're afraid the model made a mistake?

    Seems like machine learning's "bitter lesson" is rearing its head once again.
