How To Easily Run & Use LLMs Locally - Ollama & LangChain Integration

Science & Technology

Quick overview of how to get your first local LLM running on your machine, and use it as a model in LangChain.
Ollama Website - ollama.com/
Chapters:
00:00 - Intro
00:38 - Checking System Requirements
02:16 - Downloading Ollama
03:06 - Installing Ollama
03:44 - Choosing a Model
04:10 - Running the Model
05:11 - Chatting With Our First Model
06:07 - A Note About Ports
06:48 - Setting Up Your Model With LangChain
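The steps above boil down to talking to the Ollama server, which listens on localhost port 11434 by default (the "note about ports" chapter). Below is a minimal sketch of the JSON request that tools like LangChain send to Ollama's `/api/generate` endpoint under the hood; the model name `llama3` is just an example, substitute whichever model you pulled with `ollama pull`.

```python
import json

# Default local Ollama endpoint; the port is configurable via OLLAMA_HOST.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # one complete response instead of streamed chunks
    }
    return json.dumps(payload).encode("utf-8")

body = build_request("llama3", "Why is the sky blue?")
print(body.decode("utf-8"))
```

With Ollama running, you could POST this body to `OLLAMA_URL` with any HTTP client. In LangChain, the `langchain-ollama` integration package wraps this same local API, so you point it at the model name rather than constructing requests by hand.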
