FINALLY! Open-Source "LLaMA Code" Coding Assistant (Tutorial)

Science and Technology

This is a free, 100% open-source coding assistant (Copilot) based on Code LLaMA living in VSCode. It is super fast and works incredibly well. Plus, no internet connection is required!
Download Cody for VS Code today: srcgr.ph/ugx6n
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? ✅
forwardfuture.ai/
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew-berman-youtube
USE CODE "MatthewBerman" for 50% discount
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V

Comments: 291

  • @matthew_berman · 4 months ago

    Llama code 70b video coming soon!

  • @DopeTropic · 4 months ago

    Can you make a video with a fine-tuning guide for a local LLM?

  • @orangeraven3869 · 4 months ago

    codellama 70b has been amazing for me so far. Definitely SOTA for a local model. Can't wait to see tunes and merges like Phind or DeepSeek in the near future. Will you cover miqu 70b too? Rumors aside, it's the closest to GPT-4 of any local model yet, and I predict it will produce a surprise or two if you put it through your normal benchmarks.

  • @Ricolaaaaaaaaaaaaaaaaa · 4 months ago

    @orangeraven3869 How does it compare to the latest GPT-4 build?

  • @SaveTheDoctor-fl7hn · 4 months ago

    LOL cant wait!

  • @Chodak166 · 4 months ago

    How about the current huggingface leader, the moreh momo 72b model?

  • @5Komma5 · 4 months ago

    Need to sign in to use the plugin. No thanks. That is not completely local.

  • @carktok · 4 months ago

    Are you saying you had to login to authenticate your license to use a local instance of their software for free? 🤯

  • @nicolaspace1182 · 4 months ago

    @carktok Yes, and that is a deal breaker for many people, believe it or not.

  • @cesarruiz1202 · 4 months ago

    Yeah, but that's mainly because they're paying for the OpenAI and Claude 2 completion APIs so you can use them at no cost. Also, if you want to, I think you can self-host Cody without logging in to Sourcegraph.

  • @vaisakhkm783 · 4 months ago

    Cody is open source; you can run it completely locally.

  • @SFSylvester · 4 months ago

    @@vaisakhkm783 It's not open-source if they force you to login. My machine, my rules!

  • @rohithgoud30 · 4 months ago

    I typically don't rely too heavily on AI when coding. I use TabbyML, which has a limited model, but it works for me. It's completely open-source and includes a VSCode extension too. It's free and doesn't require login. I use the DeepSeekCoder 6.7B model locally.
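
    For anyone curious what that looks like in practice, here is a rough sketch of the kind of Docker command TabbyML documents for self-hosting that model, assuming a CUDA GPU (image name, model id, and flags recalled from their quick-start, so verify against the current docs):

      # run a local Tabby server with the DeepSeek Coder 6.7B model on a CUDA GPU
      docker run -it --gpus all -p 8080:8080 -v $HOME/.tabby:/data \
        tabbyml/tabby serve --model TabbyML/DeepseekCoder-6.7B --device cuda

    The VS Code extension is then pointed at the local server endpoint (http://localhost:8080 by default).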

  • @hrgdavor · 4 months ago

    thanks for the hint, I was looking for that. I hate that cloud crap.

  • @haroldasraz · 3 months ago

    Cheers for the suggestion.

  • @YadraVoat · 3 months ago

    VSCode? Why not VSCodium?

  • @justingolden21 · 1 month ago

    Just tried tabby, thanks!

  • @mayorc · 4 months ago

    The problem with Cody is that with local models it only does autocomplete, which you can already get from many VS Code extensions like LLaMA Coder and more. All the nice features use the online version, which is extremely limited in the number of requests on the free plan (expanding the monthly numbers a bit would make it easier to test, or to grow a serious interest that later leads to a better plan). There is also a fair number of extensions that offer those same nice features (chat, document, smells, refactoring, explain and tests) all in one extension and for free, using local models (Ollama or OpenAI-compatible endpoints). Cody does these features a little better and has better interaction with the codebase, probably due to the bigger context window (at least from my tests) and a nicer implementation/integration in VS Code, but unless you pay you're not really going to benefit from them, because the low number of free requests isn't enough to seriously dive in.

  • @ruifigueiredo5695 · 4 months ago

    Matthew just confirmed in a post above that the limitations of the free tier do not apply if you run the model locally.

  • @alx8439 · 4 months ago

    Can you suggest any particular alternative among those "different number of extensions"?

  • @mayorc · 4 months ago

    @alx8439 There are many. I've tested a few so far, but I'm not using them at the moment so I don't remember the names. What I did was search for extensions with names like "chat, gpt, AI, code, llama" and plenty will show up; then you have to test them one by one (that's what I did). I suggest you go for the ones that already show customization options in the description and screenshots, like a base URL for Ollama or OpenAI-compatible local servers. I think one of them has "genie" in the name.

  • @woozie_tv · 4 months ago

    i'm curious of those too @@alx8439

  • @alx8439 · 4 months ago

    I'll answer myself then: Twinny, Privy, Continue, TabbyML

  • @jbo8540 · 4 months ago

    Matt Williams, a member of the ollama team, shows how to make this work 100% free and open source in his video "writing better code with ollama"

  • @mickelodiansurname9578 · 4 months ago

    Thanks for that heads up man.

  • @brian2590 · 4 months ago

    This is how i am setup. works great!

  • @LanceJordan · 4 months ago

    link please?

  • @mickelodiansurname9578 · 4 months ago

    @LanceJordan "writing better code with ollama" - btw, there's an issue on YT with putting links into a comment, even YT links; seemingly a lot of comments with links end up on the missing list!

  • @ArthurMartins-jw8fq · 3 months ago

    Does it have knowledge of the entire codebase?

  • @AlexanderBukh · 4 months ago

    How is it local if i have to authorize with 3rd party 😮

  • @HUEHUEUHEPony · 4 months ago

    it is not, it is clickbait

  • @hqcart1 · 4 months ago

    nothing is free dude.

  • @zachlevine1857 · 4 months ago

    Pay a little money and have fun my people!

  • @a5tr00 · 4 months ago

    Since you have to sign in, does it send any data upstream when you use local models?

  • @KodandocomFaria · 4 months ago

    I know it is a sponsored video, but is there any open-source alternative to the Cody extension? We need a completely local solution, because Cody may use telemetry and gather some information behind the scenes.

  • @Nik.leonard · 4 months ago

    Continue does chat and fix, but doesn’t do autocompletion, and is quite unstable. There is another one that does autocomplete with ollama (LlamaCode).

  • @UvekProblem · 4 months ago

    You have collama which is a fork of Cody and uses llama.cpp

  • @hqcart1 · 4 months ago

    @Nik.leonard Phind, best free one.

  • @alx8439 · 4 months ago

    Twinny, Privy, TabbyML

  • @kartiknarang3152 · 1 month ago

    One more issue with Cody is that it can only take 15 files for context at a time, while I need an assistant that can take the whole project folder.

  • @RichardGetzPhotography · 4 months ago

    Is it cody that understands? I think it is the LM that does. Also, why $9 if I am running everything locally?

  • @supercurioTube · 4 months ago

    Wait, you have GitHub Copilot enabled there too, and it shows up in your editor. Are you sure the completion itself is provided by Cody with the local model and not by the GitHub Copilot extension?

  • @kate-pt2ny · 4 months ago

    In the video there is a suggestion you can select, and it has the Cody icon, so you can see that the code is generated by Cody.

  • @Resursator · 4 months ago

    The only time I'm coding is while on a flight. I'm so glad I can use an LLM from now on!

  • @AlexanderBukh · 4 months ago

    About 40 minutes of battery life. Yep, I ran LLMs on my 15-watt 7520U laptop. My 5900HX would gobble the battery even faster, I think.

  • @mc9723 · 4 months ago

    Even if its not world changing breakthroughs, the speed at which all this tech is expanding can not be overstated. I remember one of the research labs was talking about how every morning they would wake up and another lab had solved something they had just started/were about to start. This is a crazy time to be alive, stay healthy everyone.

  • @evanmarshall9498 · 4 months ago

    Does this method also allow completion for large code bases like you went over in a previous tutorial using universal-ctags? Or do you still have to download and use universal-ctags? I think it was your aider-chat tutorial. I do not work with Python, so using this VS Code extension and Cody is much better for me (front-end developer using HTML, CSS and JS).

  • @Joe_Brig · 4 months ago

    I'm looking for a local code assistant. I don't mind supporting the project, with a license for example, but I don't want to log in for each use, or at all. How often does this phone home? Will it work if my IDE is offline? Pass.

  • @ryzikx · 4 months ago

    wait for llama code 70b tutorial

  • @InnocentiusLacrimosa · 4 months ago

    @ryzikx that should require > 40 GB VRAM.

  • @AlexanderBukh · 4 months ago

    @@ryzikx 70b would require 2x 4090 or 3090. 34b takes 1.

  • @kshitijnigam · 4 months ago

    Tabby and Code Llama can do that, let me find the link to the playlist.

  • @michai333 · 4 months ago

    Thanks so much! We need a video on how to train a local model via LM Studio / VS / Python

  • @ScottWinterringer · 4 months ago

    or just use oobabooga and stop using junk?

  • @vivekpadman5248 · 4 months ago

    thanks for the video, this is absolutely a blessing of an assistant

  • @WhiteDragon103 · 3 months ago

    "ollama : The term 'ollama' is not recognized as the name of a cmdlet, function, script file, or operable program." Is there a working tutorial for Windows 10?

  • @SageGoatKing · 4 months ago

    Am I misunderstanding something, or are you advertising this as an open-source solution while it's still dependent on a third-party service? What exactly is Cody? I would have assumed that if it's completely local, it's just a plugin that lets you use local models on your machine. Yet you describe it as having multiple versions with different features in each tier, including a paid tier. How exactly does that qualify as open source?

  • @zachlevine1857 · 4 months ago

    He shows you how fast it is.

  • @iseverynametakenwtf1 · 4 months ago

    can you select the OpenAI one and run it through LM Studio locally too?

  • @themaridv2000 · 3 months ago

    Apparently they only support the given models, and the Llama one actually only uses codellama 13b. Basically it can't run something like Mistral or other Llama models. Am I right?

  • @olimpialucio · 4 months ago

    Is it possible to use it on a Windows and WSL system? If so, how should we install LLaMA?

  • @thethiny · 4 months ago

    Same steps

  • @scitechtalktv9742 · 4 months ago

    What an amazing new development! Thanks for your video. A question: can I use this to completely translate a Python code repository to C++ with the goal of making it run faster? How exactly would we go about doing this?

  • @olimpialucio · 4 months ago

    Thank you very much for your reply. What type of hardware is required to run this model locally?

  • @jawadmansoor6064 · 4 months ago

    Can it only work with Ollama? What if I have a llama.cpp server running on the same port as Ollama, will it not work? What URL (complete, including port) does Ollama expose, so that I can make my server run on the same URL? It will of course be localhost, like localhost:8080 (where the llama.cpp server runs by default) or localhost:8081/v1/chat/completion (if api_like_OAI is used). So what does Ollama expose?
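
    For what it's worth, Ollama exposes its own HTTP API on http://localhost:11434 by default (not an OpenAI-style /v1 path), so a quick sanity check from the terminal looks roughly like this:

      # ask the local Ollama server for a completion; assumes the model was already pulled
      curl http://localhost:11434/api/generate \
        -d '{"model": "codellama:7b-code", "prompt": "def fib(n):", "stream": false}'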

  • @TubatsiM · 4 months ago

    I followed your instructions and got stuck at 2:38; because I'm using Linux I'm seeing a different output. And thanks for your assistance.

  • @paolovolante · 4 months ago

    Hi, thanks! I use chatgpt 3.5 for generating python code by just describing what I want. It kind of works... In your opinion is this solution you propose better than gpt 3.5?

  • @Daniel-xh9ot · 4 months ago

    Way better than gpt3.5, gpt3.5 is pretty outdated even for simple tasks.

  • @ruifigueiredo5695 · 4 months ago

    Does anyone know if the 500 autocompletions per month on the free tier also apply if we run codellama locally?

  • @matthew_berman · 4 months ago

    You get unlimited code completions with a local model.

  • @synaestesia-bg3ew · 4 months ago

    @matthew_berman It said "Windows version is coming soon", so I had to stop at the download step and cannot continue this tutorial. Not everyone has a Linux machine or a powerful Mac. Could you warn people about prerequisites before starting new videos? That would help, thanks.

  • @DanVoronov · 4 months ago

    Despite the extension being available in the marketplace of VSCodium, after registration, it attempts to open regular Visual Studio Code (VSC) and doesn't function properly. It's unfortunate to encounter developers creating coding helpers that turn out to be broken tools.

  • @technovangelist · 4 months ago

    It’s not actually fully offline. It still uses their services for embedding and caching even when using local models.

  • @vransomware7601 · 4 months ago

    Can it be run using text-generation-webui?

  • @ew3995 · 4 months ago

    can you use this for reviewing PRs?

  • @janalgos · 4 months ago

    how does Cody compare to the Cursor extension with GitHub copilot?

  • @warezit · 4 months ago

    🎯 Key Takeaways for quick navigation:
    00:00 💻 Introduction to Local Coding Assistants - the concept of a local coding assistant and its advantages; the coding assistant "Cody" set up with "Ollama" for local development.
    01:07 🔧 Setting Up the Coding Environment - installing Visual Studio Code and the Cody extension; signing in and authorizing the extension.
    02:00 🚀 Enabling Local Autocomplete with Ollama - switching from GPT-4 to local model support using Ollama; downloading and setting up the Ollama model for local inference.
    03:39 🛠️ Demonstrating Local Autocomplete in Action - a practical demonstration of the local autocomplete feature; examples include writing a Fibonacci method and generating code snippets.
    05:27 🌟 Exploring Additional Features of Cody - other useful features in Cody not powered by local models; examples include chatting with the assistant, adding documentation, and generating unit tests.
    07:04 📣 Conclusion and Sponsor Acknowledgment - final thoughts on Cody's capabilities and how it compares to GitHub Copilot; appreciation for Cody's sponsorship of the video.
    Made with HARPA AI
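
    Condensed into commands, the local-autocomplete part of the walkthrough amounts to roughly this (a minimal sketch; the provider name comes from the Cody settings shown in the video):

      # pull the code model the video uses (Ollama serves it on http://localhost:11434)
      ollama pull codellama:7b-code
      # then, in VS Code, set Cody's autocomplete provider to "experimental-ollama"
      # in the Cody extension settings and reload the window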

  • @kate-pt2ny · 4 months ago

    I chose the Ollama local model; can Cody only use codellama:7b-code? Can I switch to other models, and where can I modify that?

  • @Baleur · 4 months ago

    So the local one is the 7b version, not the 70b? Or is it a typo in the release?

  • @InnocentiusLacrimosa · 4 months ago

    70b was released and it can be run locally, but it is a massive model and should require around 40GB VRAM.
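
    That ballpark follows from simple arithmetic; a rough sketch, assuming ~4-bit quantization and a generous runtime overhead factor:

      params = 70e9              # 70B parameters
      bytes_per_param = 0.5      # ~4-bit quantization
      overhead = 1.2             # KV cache, activations, runtime (rough assumption)
      print(params * bytes_per_param * overhead / 1e9)   # ~42 GB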

  • @d-popov · 4 months ago

    That's great! But how is it magically linked to Ollama? How do I specify another Ollama-hosted model (13/34b)?

  • @froggy5967 · 4 months ago

    Might I ask how much memory that M2 Max has, and is it the 14-inch? Thinking about getting a Max 14" as well. Thanks.

  • @LanceJordan · 4 months ago

    I seem to have missed something even though I followed the steps exactly; I can't tell if I'm using the local model or not. When I unplugged my modem, it didn't respond until I plugged it back in, so I'm doing something wrong. I am running Windows with the WSL Linux subsystem. Typically I can install and run anything Linux/Ubuntu, and I do have the ollama server running. 🤷🏻‍♂

  • @rrrrazmatazzz-zq9zy · 3 months ago

    Can it reference variables in other files, same directory, while working in a separate file?

  • @nobound · 3 months ago

    I have a similar setup, but I'm encountering difficulty getting Cody to function offline. Despite specifying the local model (codellama) and disabling telemetry, the logs indicate that it's still attempting to connect to sourcegraph for each operation.

  • @cyanophage4351 · 3 months ago

    Tried it on Windows and couldn't get it to connect to my Ollama. The dropdown was set to "experimental-ollama" and "codellama", but when I asked in the chat "what can you do" it would reply with "I'm Claude from Anthropic", so not sure what is up with that.

  • @Krisdomain · 4 months ago

    How can you not enjoy creating unit tests?

  • @shaileshsundram · 3 months ago

    I am using 2017 MacBook Air. Will using it be instantaneous?

  • @Ray88G · 4 months ago

    Can you please also include steps for those who are using Windows

  • @stvn0378 · 4 months ago

    I'm pretty capped using 2080s (8 GB) / 16 GB RAM. Have you tried out HS spaces yet? Would love to figure out a way to test dolphin-mixtral etc.

  • @DiomedesDominguez · 3 months ago

    Do I need a GPU of 4 GB vRAM or more for the 7b? Also, Python is the easiest of the programming languages, can I use cody locally for C/C++ or C# and other more robust languages?

  • @toml6535 · 3 months ago

    How do I get the Cody settings when using WebStorm? Or can I only do this in VS Code?

  • @michaelvarney. · 3 months ago

    How do you deploy this on a completely airgapped network? No network connections during install.

  • @Ludecan · 4 months ago

    This is so cool, but doesn't the Cody login kind of invalidate the local benefits? A 3rd party still gets access to your code.

  • @mayorc · 4 months ago

    Yes, though I don't know how, or whether, the code is retained long term once you start chatting with your codebase. Plus the free version has a very limited number of requests per month: 500 autocomplete requests (which you would probably burn through in a day or two, considering that the moment you stop typing it fires a request after a few seconds' delay). That part is solvable with the local model, but then you only get 20 chat messages or built-in commands per month, which makes them useless unless you choose the paid plan.

  • @BrandosLounge · 4 months ago

    No matter what I do, I always get this when asking for instructions: "retrieved codebase context before initialization". Is there a Discord where we can get support for this?

  • @skybuck2000 · 1 month ago

    Ok it worked, kinda funny: I wrote the first two lines and the last line, and Cody did the rest after I told it to "generate fibonacci sequence code"... thanks, might be useful some day. A bit flimsy, but interesting. Next I'll try whether it can translate code too:

      function Fibannoci : integer;
      begin
        var a, b, c: integer;
        a := 0;
        b := 1;
        while b begin
          writeln(b);
          c := a + b;
          a := b;
          b := c;
        end;
      end;
      end;

  • @user-mz2ei2nx2p · 4 months ago

    Can anyone tell me if there is a difference in the code produced between Q4 and Q8? I mean, will Q8 produce fewer errors? Is it more "complete"? Thanks!

  • @mdazhardware · 4 months ago

    Thanks for this awesome tutorial; how do you do this on Windows?

  • @user-nm9sy6fr7h · 4 months ago

    Enterprise AI is the best alternative for OpenAI, always helpful with coding questions

  • @skybuck2000 · 1 month ago

    Must the pull be placed in some special folder? This is not explained, and I doubt this will work the way I did it. I don't want models on the SSD C drive but on the HD G drive, to experiment with it and save space on the SSDs, which really need it for things like Windows updates. I've got twice 4 TB on SSD, but still...

  • @jakeaquilina505 · 3 months ago

    Is there an extension for Visual Studio rather than VS Code?

  • @skybuck2000 · 1 month ago

    Seems to conflict with the Omni Pascal extension/code completion; not sure if both can be used? Any ideas?

  • @piratepartyftw · 3 months ago

    Will the Chat function be available with Ollama soon?

  • @rbrcurtis · 4 months ago

    the default model for cody/ollama to use is deepseek-coder:6.7b-base-q4_K_M. You have to change this in raw json settings if you want to use a different model.

  • @amitdingare5064 · 3 months ago

    how would you do that? appreciate the help.
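
    For reference, a minimal sketch of what that raw settings.json tweak can look like; the exact key names have changed across Cody releases, so treat them as assumptions and check the extension's settings UI:

      {
        "cody.autocomplete.advanced.provider": "experimental-ollama",
        "cody.autocomplete.experimental.ollamaOptions": {
          "url": "http://localhost:11434",
          "model": "codellama:7b-code"
        }
      }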

  • @henrychien9177 · 4 months ago

    What about Windows? Any way to run Llama?

  • @pierruno · 4 months ago

    Can you write in the title which OS this tutorial is for?

  • @skybuck2000 · 1 month ago

    Now the only thing I need to figure out is how to add a command to the cody pop up menu or something to add: "translate from go language to pascal language" so I don't have to re-type this constantly... testing big translation now...

  • @mrdl9199 · 1 month ago

    Thanks for this awesome tutorial

  • @bradstudio · 4 months ago

    Nova editor needs support for this.

  • @micknamens8659 · 4 months ago

    The code for the Fibonacci function is correct in the sense of a specification. But as an implementation it's totally inefficient, with exponential time O(2^n). (In functional languages, where all functions are referentially transparent, results can be cached transparently, which is called "memoization". But Python lacks this feature.)
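
    To illustrate the point: Python won't memoize transparently, but the standard library's functools cache makes the naive recursion linear instead of exponential; a minimal sketch:

      from functools import lru_cache

      @lru_cache(maxsize=None)        # cache results keyed by n
      def fib(n: int) -> int:
          # naive recursive definition; with the cache, each fib(k) is computed once
          if n < 2:
              return n
          return fib(n - 1) + fib(n - 2)

      print(fib(35))                  # 9227465, computed instantly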

  • @Sergatx · 4 months ago

    I just tried running this while offline and it doesn't work. How is this local?

  • @monaluthra4769 · 4 months ago

    Please make a tutorial on how to use AlphaGeometry

  • @planetchubby · 4 months ago

    Nice! Seems to work pretty well on my linux laptop. Would be great if I could save my 10 euros a month for copilot.

  • @skybuck2000 · 1 month ago

    Tried it with C because Python apparently isn't installed in VS Code by default. It didn't work for C code, but I see Cody is working somewhat; a yellow light bulb appears. I came here for code translation, though code generation is interesting too and similar. But can Cody translate code too, from Go to Delphi/Pascal? That is what I am interested in...

  • @skybuck2000 · 1 month ago

    I get some strange window that says "edit instruction code". I guess I have to tell it what to do... "generate fibonacci sequence code" perhaps?

  • @skybuck2000 · 1 month ago

    Also, it's already downloaded, so what is the pull for?

  • @ArturRoszczyk · 4 months ago

    It does not work for me. The extension seems to prefer connecting to Sourcegraph over the internet, even though it shows it selected codellama from unstable-ollama. Inference simply does not work if I unplug the wire.

  • @alx8439 · 4 months ago

    Try other, better extensions. There are a number of truly open-source ones which run locally, unlike this gimmick: Privy, Twinny, TabbyML, Continue and many more.

  • @quincy1048 · 4 months ago

    Any plan to roll this into a Visual Studio extension for C++/C# coding?

  • @jeffspaulding43 · 4 months ago

    Don't think this is an option unless you've got a pretty good graphics card. I set mine up and gave it something to autocomplete. I heard my Mac's CPU fan going crazy, and it took about 20 seconds to get a 5-token suggestion (it was correct tho :P).

  • @SahilP2648 · 4 months ago

    Get an M3 Max 64 or 96GB MacBook Pro. The inference speed is really good. For development it seems like you need a really good Mac nowadays.

  • @aijokker · 4 months ago

    Is it better than chatgpt4?

  • @Sigmatechnica · 1 month ago

    what's the point of a local model if you have to sign into some random service to use it???

  • @peterfallman1106 · 4 months ago

    Great, but what are the requirements for Microsoft servers and clients?

  • @nufh · 4 months ago

    Damn... That is super dope.

  • @georgeknerr · 4 months ago

    Love your channel Matthew! For me however, 100% Local is not having to have an account with an external vendor to run your coding assistant completely locally. I'm looking for just that.

  • @rogermarquez1314 · 4 months ago

    Is this just for Mac users?

  • @Yewbzee · 4 months ago

    Does anybody know if this can code Swift UI ?

  • @kninghtanirecaps1470 · 2 months ago

    Can I use that without internet?

  • @skybuck2000 · 1 month ago

    You lost me at the terminal step; how do you get into ollama, is that its folder?

  • @skybuck2000 · 1 month ago

    It also automatically opened a command prompt... can proceed from there... plus there is an item in the start menu... probably linked to this messy installation.

  • @maddoglv · 1 month ago

    If you get an error when running `ollama pull codellama:7b-code` in the terminal, just close and reopen VS Code.

  • @skybuck2000 · 1 month ago

    However, I did not install the Go extension yet; maybe if the Go extension is installed, Cody can then do code translation from Go? Hmm, not sure yet... probably not... but maybe.

  • @freaq.creation · 4 months ago

    It's not working... I get an error where it says it can't find the model :(

  • @haydnrayturner1383 · 4 months ago

    *sigh* Any idea when Ollama is coming to Windows?

  • @bhanunamikaze2508 · 4 months ago

    This is awesome

  • @JoeBrigAI · 3 months ago

    No local models when using JetBrains plugin?

  • @alx8439 · 4 months ago

    If you love open source and hate products with strings attached that spy on you, prefer VSCodium over VSCode, which has a lot of telemetry included by default.

  • @skybuck2000 · 1 month ago

    Cody settings: provider: now it says experimental-ollama. Curious how to connect it to the pull/download... watching the video and continuing.

  • @jayashankarmaddipoti6964 · 4 months ago

    Seems like Ollama is compatible with both Linux and Mac. How can Windows users use it?

  • @RonaldvanWeerd · 4 months ago

    Try running it in a Docker container. Works fine for me.

  • @user-yy4xh3ym2f · 3 months ago

    Who's here for the Chaos that comes when he starts evaluating the models on BabyAGI?

  • @yagoa · 4 months ago

    how do I do it if Ollama is on my LAN?

  • @first-thoughtgiver-of-will2456 · 4 months ago

    Awesome video! This video series is the best source for cutting edge practical AI applications bar none. Thanks for all the work you do.

  • @kritikusi-666 · 4 months ago

    correction. By default, it uses Claude 2.0

  • @skybuck2000 · 1 month ago

    It now looks like once code is selected from pull down list: CodyCompletionProvider:initialized: experimental-ollama/codellama:7b-code

  • @user-dy9mp1pf2t · 3 months ago

    Curious how they compare.

  • @neronenerone7366 · 3 months ago

    How about using the same idea but with gpt pilot
