Local GenAI LLMs with Ollama and Docker (Ep 262)

Science & Technology

Learn how to run your own local ChatGPT clone and GitHub Copilot clone by setting up Ollama and Docker's "GenAI Stack" to build apps on top of open-source LLMs and closed-source SaaS models (GPT-4, etc.). Matt Williams is our guest, walking us through every part of this solution and showing how Ollama makes it easier to set up custom LLM stacks on Mac, Windows, and Linux.
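As a rough sketch of the workflow discussed in the episode (assuming the Ollama CLI is installed and its local server is running; the model name and prompt below are illustrative, not from the show):

```shell
# Download an open-source model's weights to the local machine
ollama pull llama2

# Run the model interactively, or pass a one-off prompt
ollama run llama2 "Explain containers in one sentence."
```

Once a model is pulled, Ollama also exposes it over a local HTTP API (by default on port 11434), which is what stacks like Docker's GenAI Stack build on.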
🗞️ Sign up for my weekly newsletter for the latest on upcoming guests and what I'm releasing: www.bretfisher.com/newsletter/
Matt Williams
============
/ technovangelist
/ technovangelist
Nirmal Mehta
============
/ nirmalkmehta
/ normalfaults
hachyderm.io/@nirmal
Bret Fisher
===========
/ bretefisher
/ bretfisher
www.bretfisher.com
Join my Community 🤜🤛
================
💌 Weekly newsletter on upcoming guests and stuff I'm working on: www.bretfisher.com/newsletter/
💬 Join the discussion on our Discord chat server / discord
👨‍🏫 Coupons for my Docker and Kubernetes courses www.bretfisher.com/courses/
🎙️ Podcast of this show www.bretfisher.com/podcast
Show Music 🎵
==========
waiting music: Jakarta - Bonsaye www.epidemicsound.com/track/Y...
intro music: I Need A Remedy (Instrumental Version) - Of Men And Wolves www.epidemicsound.com/track/z...
outro music: Electric Ballroom - Quesa www.epidemicsound.com/track/K...

Comments: 5

  • @harrivayrynen
    27 days ago

    Very good video and fresh content.

  • @DanielAzevedo94
    a month ago

    Great talk, thanks guys

  • @ehza
    a month ago

    Thanks

  • @tonychia2227
    a month ago

    I love llama

  • @tonychia2227
    a month ago

    first
