Build your own LLM on Google Cloud
Science & Technology
Custom large language models (LLMs) can be fine-tuned and deployed using Google Kubernetes Engine (GKE) and Cloud Run. Watch along as Ali Zaidi, Solutions Architect at Google Cloud, walks through the architecture of a custom LLM on Google Cloud and shows how to get started today. Check out the GitHub code repository below to get started.
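The linked GitHub repository contains the actual walkthrough; purely as an illustrative sketch of the GKE half of this architecture, a minimal Kubernetes manifest for serving a fine-tuned Gemma model might look like the following. The image name, port, and GPU settings are placeholders, not values from the demo:

```yaml
# Hypothetical GKE Deployment serving a fine-tuned Gemma model.
# Image name, port, and resource values are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gemma-serve
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gemma-serve
  template:
    metadata:
      labels:
        app: gemma-serve
    spec:
      containers:
      - name: server
        image: us-docker.pkg.dev/PROJECT_ID/llm/gemma-serve:latest  # placeholder image
        ports:
        - containerPort: 8080
        resources:
          limits:
            nvidia.com/gpu: 1  # schedule onto a GPU node pool
---
apiVersion: v1
kind: Service
metadata:
  name: gemma-serve
spec:
  selector:
    app: gemma-serve
  ports:
  - port: 80
    targetPort: 8080
```

A Cloud Run front end (such as a chat UI) could then call this Service; for the manifests actually used in the demo, see the GitHub repository linked below.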
Chapters:
0:00 - Meet Ali
0:31 - Demo architecture
1:29 - How to fine-tune LLMs
2:07 - How to deploy LLMs on GKE & Cloud Run
2:51 - Chatting with the LLMs
4:14 - Wrap up
Resources:
GitHub → goo.gle/4btqsKt
Watch more Cloud Next 2024 → goo.gle/Next-24
Subscribe to Google Cloud Tech → goo.gle/GoogleCloudTech
#GoogleCloudNext #GoogleGemini
Event: Google Cloud Next 2024
Speaker: Ali Zaidi
Products Mentioned: Google Kubernetes Engine, Cloud Run, Gemma
Comments: 4
👀 Check out more demos from Cloud Next 2024 here → goo.gle/Next-24.
Can someone without any experience execute an LLM?
Why would I not just deploy an open source quantized model using WebUI and then use MagickML (which you sponsor, poorly) to add recursive self awareness to a model cobbled together from garbage that runs better than anything your corp has ever made?
@kubectlgetpo
23 days ago
Yes, you can, but no enterprise with security and compliance requirements wants to work with WebUI, as that's not WebUI's focus. Just because one thing is good at something doesn't mean that an alternative is bad, and you lead with negativity.