Comments

  • @krishnakishoreveluru9879 · 6 hours ago

    It's a really good, informative video 👏♥

  • @SaiTeja-nu5pw · 10 days ago

    Hey man! Could you provide information on approximately how many prompts can be utilized with the $5 free credits? Additionally, is this offer available in all regions, or is it limited to specific areas?

  • @SaiTeja-nu5pw · 10 days ago

    I mean Multimodal queries

  • @enricd · 9 days ago

    As a reference, an image of 1000x1000 px would take about 1,400 tokens, so 1,000 images would cost around 4 dollars (rough sketch below). You can find more info here: docs.anthropic.com/en/docs/build-with-claude/vision
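
    A rough sketch of that arithmetic (assuming Anthropic's published rule of thumb of roughly width × height / 750 tokens per image, and an assumed input price of about $3 per million tokens; check the docs above for current numbers):

        # Rough cost estimate for sending images to the Claude vision API.
        def image_tokens(width_px: int, height_px: int) -> int:
            # Anthropic's documented approximation for Claude 3 vision inputs
            return int(width_px * height_px / 750)

        def cost_usd(n_images: int, width_px: int, height_px: int,
                     usd_per_million_tokens: float = 3.0) -> float:
            # usd_per_million_tokens is an assumed price, adjust to current pricing
            tokens = n_images * image_tokens(width_px, height_px)
            return tokens / 1_000_000 * usd_per_million_tokens

        print(image_tokens(1000, 1000))    # ~1,333 tokens for a 1000x1000 px image
        print(cost_usd(1000, 1000, 1000))  # ~4 USD for 1,000 such images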

  • @maiseja9987 · 11 days ago

    thank you so much for this video. I can tell claude will be important in the near future. Though I wanna ask, is there a way to make it send files and get back files? I'm currently working on a project using claude's api

  • @enricd · 10 days ago

    The model itself, through the API, isn't capable of storing or keeping files; you have to manage that kind of functionality in your own backend logic, in any cloud or on your own computer, for example with a vector database and the RAG logic (rough sketch below).
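
    A minimal sketch of that backend idea, assuming the anthropic Python SDK and a plain-text file (the model name and prompt format are assumptions, not from the video); for larger document sets you would add chunking, embeddings and a vector database for the RAG part:

        # Minimal sketch: the file lives in your own backend/storage; only its text
        # is sent to the Claude API, and only the text answer comes back.
        import anthropic  # pip install anthropic

        client = anthropic.Anthropic(api_key="YOUR_ANTHROPIC_API_KEY")

        def ask_about_file(path: str, question: str) -> str:
            with open(path, "r", encoding="utf-8") as f:
                document_text = f.read()
            response = client.messages.create(
                model="claude-3-5-sonnet-20240620",  # assumed model name, check the docs
                max_tokens=1024,
                messages=[{
                    "role": "user",
                    "content": f"Document:\n{document_text}\n\nQuestion: {question}",
                }],
            )
            return response.content[0].text

        print(ask_about_file("notes.txt", "Summarize this document."))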

  • @RichardPlay2109 · 12 days ago

    code ? for download !

  • @enricd · 10 days ago

    no code for download ! :) sorry

  • @henkhbit5748 · 14 days ago

    Thanks for your video!

  • @martad9672 · 15 days ago

    😂😂😂

  • @henkhbit5748 · 16 days ago

    Excellent video 👍 If you deploy on Streamlit, is the URL publicly available? And how do you deploy from your own server?

  • @enricd · 15 days ago

    Thanks! When you deploy your app to the Streamlit Community Cloud, the <your-sub-domain>.streamlit.app URL is available from anywhere on the internet; I have some other videos on my channel showing how to do it. If you want to deploy it somewhere else, you can do it from your own server (which could be risky, as you are opening it to the public internet), or you can deploy it on AWS, Azure, GCP, Heroku, or other public clouds.

  • @henkhbit5748 · 13 days ago

    @@enricd Thanks. For Anthropic I get an error about billing, "credit balance too low...". I did create an account and successfully claimed the $5. Is that not sufficient?

  • @nkwachiabel5092 · 16 days ago

    Great video! Very simple and straightforward. Question: have you thought about how you could show those responses with images from the model? Also, how about file uploads (CSV, PDF, etc.)? Will they all still be in the same format?

  • @enricd · 16 days ago

    Thanks! GPT-4o doesn't produce images; it could generate the prompt for DALL·E 3 or any txt2img model to generate them. If you want to send it a document, you could use the LangChain document loaders for that (rough sketch below) :)
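
    A hedged sketch of the document-loading idea with LangChain (PyPDFLoader is just one example loader; the exact package and class names depend on your LangChain version, and the file name is a placeholder):

        # Sketch: load a PDF with a LangChain document loader and pass its text
        # to the chat model as context.
        from langchain_community.document_loaders import PyPDFLoader  # pip install langchain-community pypdf
        from langchain_openai import ChatOpenAI                       # pip install langchain-openai

        pages = PyPDFLoader("report.pdf").load()           # one Document per page
        document_text = "\n".join(p.page_content for p in pages)

        llm = ChatOpenAI(model="gpt-4o", api_key="YOUR_OPENAI_API_KEY")
        answer = llm.invoke(f"Summarize this document:\n{document_text}")
        print(answer.content)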

  • @abdullahalasker536 · 16 days ago

    Thank you. Clear and straight to the point.

  • @henkhbit5748 · 16 days ago

    Great video! Maybe next time use LangChain as the interface, for better flexibility when using other multimodal LLMs. Now there are the Gemini Flash/Pro and Claude 3.5 Sonnet multimodal models besides OpenAI's GPT-4o.

  • @enricd · 16 days ago

    Totally! That's something I had in mind, but it would add some more complexity (although the interface and integration would be better). Also, when dealing with Gemini 1.5 two weeks ago, I found that LangChain was not yet able to handle uploaded files like videos, audios, docs and so on, so it still covered the Gemini 1.0 capabilities but not 1.5. But yes, it would have some advantages, and depending on the project I would use it and also structure the code better, with more abstractions and a better architecture (rough sketch below). Here I wanted to show a basic, plain integration.
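
    As an illustration of that flexibility, a small hedged sketch of swapping providers behind LangChain's common chat interface (class and model names are assumptions, check the current LangChain docs):

        # Sketch: the same chat interface backed by different providers, so the
        # rest of the app doesn't change when you switch models.
        from langchain_openai import ChatOpenAI          # pip install langchain-openai
        from langchain_anthropic import ChatAnthropic    # pip install langchain-anthropic

        def get_llm(provider: str):
            if provider == "openai":
                return ChatOpenAI(model="gpt-4o")
            if provider == "anthropic":
                return ChatAnthropic(model="claude-3-5-sonnet-20240620")
            raise ValueError(f"unknown provider: {provider}")

        llm = get_llm("anthropic")   # switch providers without touching the rest
        print(llm.invoke("Hello! Who are you?").content)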

  • @VincentdeRenty · 19 days ago

    Nice job, but unfortunately I have an issue when claiming my $5... the code I received didn't work and I can't receive a new one 😢

  • @enricd · 19 days ago

    Hi Vincent, I think whenever you create an account with an email, you already have the $5, but I'm not sure. Try with a new email, and check if this is also available in your country as maybe it's only available in certain regions. I hope you will be able to fix it :)

  • @maiseja9987 · 11 days ago

    I have the same problem. I verified my phone number and still couldn't claim it.

  • @enricd · 9 days ago

    @@maiseja9987 This is what they say in their FAQs: support.anthropic.com/en/articles/8994925-how-do-i-claim-my-free-credits

  • @michaelshostack2547 · 1 day ago

    @@enricd That didn't work for me and their support is unresponsive. My phone number is now registered, I cannot re-register it, and no credit.

  • @fallou_fall · 20 days ago

    Perfect, great work as always

  • @-vishalJ · 21 days ago

    Good Work!!

  • @fallou_fall · 24 days ago

    Amazing well done !!!

  • @user-mo2en6us8x · 26 days ago

    Thank you for your video. I have a question about whether image generation still doesn't work. Since GPT-4o generates text outputs, maybe it could generate the encoded text of the output image that we want.

  • @enricd · 23 days ago

    Nice question! The fact that a digital image can be encoded from its pixel values into the PNG or JPG compressed format, those bytes then encoded to base64 to be sent over the internet, and finally decompressed back again, doesn't mean that it's possible to generate meaningful images directly in base64. That's practically impossible: a 2D image of something needs to be generated as a 2D array, pixel by pixel, so that its visual patterns make sense, and only then transformed or compressed to PNG/JPG and base64 (rough sketch of that pipeline below) :) But really good question for thinking outside the box!
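
    For context, the direction that does work, sending an existing image to the model, looks roughly like this (a sketch using the OpenAI vision message format; the file name and model are placeholders):

        # Sketch: an existing image is compressed (PNG/JPG), base64-encoded and sent
        # to a vision model; the model only reads images this way, it never generates them.
        import base64
        from openai import OpenAI  # pip install openai

        client = OpenAI(api_key="YOUR_OPENAI_API_KEY")

        with open("photo.jpg", "rb") as f:
            b64_image = base64.b64encode(f.read()).decode("utf-8")

        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is in this image?"},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64_image}"}},
                ],
            }],
        )
        print(response.choices[0].message.content)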

  • @danielmartosarroyo5969 · 26 days ago

  • @brunoomg5489 · 1 month ago

    Explain it in Spanish bro, it would be incredible

  • @enricd · 1 month ago

    <I improve my English pronunciation> 🤝 <You learn English>

  • @tonywhite4476 · 1 month ago

    Nice app. Love the UI. I was wondering if the tts feature is waiting for the response to finish streaming before it converts it? In other words, is the streaming response feature increasing the audio response time?

  • @enricd · 1 month ago

    Thanks Tony! Yes, as I implemented it here, the TTS receives the response text once it's fully completed, then starts converting it to audio, and we receive the audio in the UI when that conversion is fully completed. It's possible to do a more advanced strategy where you stream the text response into the TTS (phrase by phrase, otherwise it could miss the proper language, tone, and so on) and then also stream the TTS audio response while it's being generated, so it can be played as it is produced instead of waiting until the end. With these two streaming workflows the final audio response would reach us much faster (rough sketch below). I'm pretty sure this is what OpenAI is doing in the GPT-4o demo and probably also in the current app.
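
    A rough sketch of the phrase-by-phrase idea, not the video's implementation (the sentence splitting, voice and model names are assumptions):

        # Sketch: convert the response to speech sentence by sentence instead of
        # waiting for the full text, so the first audio chunk is ready much earlier.
        import re
        from openai import OpenAI  # pip install openai

        client = OpenAI(api_key="YOUR_OPENAI_API_KEY")

        def tts_stream(full_text: str):
            # naive sentence split; a real app would also stream the LLM text as it arrives
            for sentence in re.split(r"(?<=[.!?])\s+", full_text):
                if not sentence.strip():
                    continue
                audio = client.audio.speech.create(
                    model="tts-1", voice="nova", input=sentence,
                )
                # bytes of this chunk; play or append as soon as it arrives
                yield audio.read()

        for chunk in tts_stream("Hello there. This is a streamed answer. Goodbye!"):
            print(f"got audio chunk of {len(chunk)} bytes")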

  • @tonywhite4476 · 1 month ago

    @@enricd I was wondering if there was a way to do it synchronously.

  • @sergiosobral5776 · 1 month ago

    Hey, how do I remove the openKey entry? And go straight.

  • @enricd · 1 month ago

    Hi! What do you mean, the OpenAI API key? You need to get one on platform.openai.com by creating an account and charging some dollars first (5 dollars would be enough), so that you can then consume tokens by making requests to the models.

  • @sergiosobral5776 · 1 month ago

    Sorry, I wrote it wrong. I have the API key, but I would like to remove this "add the key" section. How do I get it to go directly to the model when it runs, without the need to validate the key?

  • @enricd · 1 month ago

    @@sergiosobral5776 So if you are cloning the app's code and running it locally on your computer, you can directly assign the openai_api_key variable to your API key in plain text, and even remove the lines related to the verification of the key (rough sketch below).
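
    A minimal sketch of that local change (variable names are illustrative, not necessarily the ones used in the repo):

        # Sketch: skip the "enter your API key" step when running locally by
        # hardcoding the key, or better, reading it from an environment variable.
        import os
        from openai import OpenAI

        # instead of reading the key from a Streamlit text_input and validating it:
        openai_api_key = os.getenv("OPENAI_API_KEY", "sk-...your-key-here...")
        client = OpenAI(api_key=openai_api_key)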

  • @CarlosHidalgoLa · 1 month ago

  • @fallou_fall · 1 month ago

    Well done Sir Amazing work!

  • @maxidiazbattan · 1 month ago

    Crack!

  • @tonywhite4476 · 1 month ago

    Why so many requirements/dependencies?

  • @enricd · 1 month ago

    They are needed to build the entire website and its components :)

  • @DavidCarmonaMaroto · 1 month ago

    Well done, Enric

  • @danielmartosarroyo5969 · 1 month ago

    Amazing 😮

  • @reformulandoer · 1 month ago

    Can you help me with pricing? This bot requires the GPT-4 API, right? How much do I need to spend on that?

  • @enricd · 1 month ago

    Yes, you need to create an account on platform.openai.com and add some credits. The cost of a single image recognition depends on its resolution, but it's around a cent or less.

  • @reformulandoer · 1 month ago

    @@enricd But do I need to pay $20/month for GPT-4, or can I just put in something like $5 and use it with an image prompt?

  • @enricd · 1 month ago

    @@reformulandoer You can put in $5 and it will be spent as you consume tokens; there are no monthly payments/subscriptions with the API, it's only pay-per-use.

  • @reformulandoer · 1 month ago

    @@enricd thanks a lot. You are the best bro!

  • @w0lf503 · 1 month ago

    @@enricd From the docs I understood that the API cannot take image input, and from my calculations the cost would be like $12 per game for an average of 10 moves (each move is an API call). Also, I need to ask: if this actually works, why give it away for free? Why not just use it, put in some money, and beat the casino?

  • @gekstudio6926 · 1 month ago

    what was the price of one recognition?

  • @enricd · 1 month ago

    it depends on the resolution but it's usually around a cent or less

  • @gekstudio6926 · 1 month ago

    It would be very expensive on a regular basis.

  • @enricd · 1 month ago

    @@gekstudio6926 It depends on the case, but in general it's really cheap, and now with GPT-4o the tokens cost half the previous price.

  • @lypsyrobotti4326 · 2 months ago

    What strategy are the GPT plays based on?

  • @enricd · 2 months ago

    I'm not sure if this is what you asked, but basically there are 2 AI agents, each with a prompt instructing it how to play plus the context of the board at every turn, and they call GPT-4 to get a response on which direction to play next (rough sketch below).
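
    Very roughly, that loop could look like the hypothetical sketch below (the prompts, board representation and function names are made up for illustration, not taken from the actual app):

        # Hypothetical sketch of the two-agent loop: each agent has its own instruction
        # prompt, receives the current board as context, and asks GPT-4 for its next move.
        from openai import OpenAI

        client = OpenAI(api_key="YOUR_OPENAI_API_KEY")

        AGENT_PROMPTS = {
            "agent_1": "You play as X. Answer only with one move: up, down, left or right.",
            "agent_2": "You play as O. Answer only with one move: up, down, left or right.",
        }

        def next_move(agent: str, board_text: str) -> str:
            response = client.chat.completions.create(
                model="gpt-4",
                messages=[
                    {"role": "system", "content": AGENT_PROMPTS[agent]},
                    {"role": "user", "content": f"Current board:\n{board_text}\nYour move?"},
                ],
            )
            return response.choices[0].message.content.strip().lower()

        board = "....\n.X..\n..O.\n...."   # toy board state
        for turn in range(4):
            agent = "agent_1" if turn % 2 == 0 else "agent_2"
            move = next_move(agent, board)
            print(agent, "plays", move)
            # the real app would validate the move and update the board here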

  • @lypsyrobotti4326 · 2 months ago

    @@enricd link?

  • @enricd · 2 months ago

    @@lypsyrobotti4326 full video: kzread.info/dash/bejne/ZX1-mZaces2rfbQ.html app: enricd.streamlit.app/The_LLMs_Arena

  • @bababear1745 · 3 months ago

    Sending you an email with a proposal. Please check and respond.

  • @ronen6020 · 3 months ago

    can someone say if the bot wins?

  • @enricd · 3 months ago

    sometimes, depends on the expertise of the other players

  • @umtombs · 4 months ago

    I am stuck, getting the following error: Import "openai" could not be resolved. Any ideas, guys? I followed everything exactly.

  • @enricd · 4 months ago

    is it while doing "pip install openai" in the terminal?

  • @umtombs · 4 months ago

    No, it is when I am trying to run the script. @@enricd I think I'll have to take some intro courses; now I'm getting this: TypeError: OpenAI.__init__() takes 1 positional argument but 2 were given

  • @umtombs · 4 months ago

    @@enricd
    (.venv) PS C:\PokerBot> python poker_bot_dev.py
    Traceback (most recent call last):
      File "C:\PokerBot\poker_bot_dev.py", line 18, in <module>
        client = OpenAI(openai_api_key)
                 ^^^^^^^^^^^^^^^^^^^^^^
    TypeError: OpenAI.__init__() takes 1 positional argument but 2 were given
    (.venv) PS C:\PokerBot>

  • @umtombs · 4 months ago

    @@enricd I was able to get it to work... some minor changes to syntax. Thanks!
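
    For anyone hitting the same TypeError: the current openai Python SDK expects the key as a keyword argument, so the fix is most likely this small syntax change (a guess at the change mentioned above):

        from openai import OpenAI

        openai_api_key = "sk-...your-key-here..."

        # client = OpenAI(openai_api_key)          # positional argument -> TypeError
        client = OpenAI(api_key=openai_api_key)    # pass the key as a keyword argument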

  • @D3ADLOLO · 4 months ago

    That's funny, it's also one of the first use cases of GPT-4V I've done.

  • @enricd · 6 months ago

    UPDATE: when doing the pip install of the Python libraries, make sure that OpenCV is installed like this: pip install opencv-python-headless (instead of opencv-python). Also make sure that in your requirements.txt there is only opencv-python-headless, not opencv-python, otherwise your app will not work on Streamlit Cloud. The other libraries are fine as they are. Sorry for this!

  • @fallou_fall · 6 months ago

    WELL DONE Sir ....Keep going 💯

  • @ax_HvH · 6 months ago

    Do you know if there is another compatible API that is free?

  • @enricd · 6 months ago

    Not that I know of, and if there is one, it will come with just a few free credits per day/month or something like that to try it out, but completely free, definitely not. If you have a PC with a reasonably good graphics card you can run an open-source vision model yourself there without paying anything (only the electricity it uses, of course xd). Even so, I think the OpenAI cost for these things is relatively cheap in general.

  • @TheFefiOnfire · 1 month ago

    Do you think there is any open-source vision model that is on the same level? Really interesting video, honestly.

  • @ax_HvH · 6 months ago

    I don't know English but I understood some of it.

  • @ax_HvH · 6 months ago

    I didn't know it could do that, good video.

  • @Panpaper · 6 months ago

    I don't see a convert-pth-to-ggml.py file anywhere in the llama.cpp repository. Was it recently removed? I can't proceed at all, appreciate any help.

  • @enricd · 6 months ago

    Thanks for your question. Apparently they have recently changed this in the llama.cpp project, and now there is a more general script called convert.py that can handle different weight file formats as input and convert any of them to GGML. You can check the details in the llama.cpp GitHub readme, but it should work by running python convert.py <pth-file-name> (I haven't tried it, though).

  • @manishraj-rz4lh · 4 months ago

    @@enricd So, what should the code look like?

  • @user-ob7fd8hv4t · 7 months ago

    Glad to see your video, because I'm trying to use my social media chats to train my own 'digital twin'. I'm currently thinking about whether 96GB on an M2 Max is enough for my needs, because I want to run the model training and deployment locally. If this plan is feasible, I may also do some model training locally in the future with other, more sensitive data, instead of uploading my data to GPTs. The 16GB of memory on the M2 Pro I'm currently using doesn't seem to support this idea of mine very well.

  • @sanchogodinho · 2 months ago

    I would prefer buying a Mac that's enough for your normal coding and stuff and running the AI training on cloud servers instead of on your device. You'll save a lot of money, since you'll hardly ever need to train it! Just ignore my comment if you really do need to train large AI models frequently; otherwise, you can consider my suggestion...

  • @ordermusic3941 · 7 months ago

    thanks!

  • @syenza · 7 months ago

    Great video, thanks! Do you take on private clients to implement the Vision API and other projects?

  • @enricd · 7 months ago

    Thanks @syenza !! Send me an email to [email protected]

  • @randotkatsenko5157 · 7 months ago

    I have something similar, but more advanced. GPT4 can actually play poker on quite a high level.

  • @enricd · 7 months ago

    Nice! What is your approach?

  • @randotkatsenko5157 · 7 months ago

    @@enricd Multiple AI agents analyzing different aspects of the game; think of AI swarms for making the best move in poker.

  • @DrSTAXX · 7 months ago

    Rando, can I get in touch with you to discuss the poker bot?

  • @SokiiCZ · 5 months ago

    @@randotkatsenko5157 A tutorial video would be awesome 😇😃

  • @thelinkofperfectioncharity9469 · 7 months ago

    THAT COULD BE VERY DANGEROUS

  • @enricd · 7 months ago

    Yes, indeed, I think I made that clear in several parts of the video. For me it was a really interesting example, benchmark, and use case for applying a deep learning model in a closed loop, end to end, in a controlled environment. I'm sure it will inspire many other applications for the people watching it :)

  • @pankajverma29007 · 11 months ago

    Can you make a video on using 70B model on CPU?

  • @enricd · 11 months ago

    I'm afraid it would be too heavy; even after quantization it wouldn't fit into my MacBook Air's RAM, which is 24GB. It would probably work on a MacBook Pro or Mac Studio with 64GB.

  • @pankajverma29007 · 10 months ago

    @@Doctor_monk That will be awesome! Can you please share with me the link to that tutorial?

  • @pankajverma29007 · 10 months ago

    @@Doctor_monk subscribed ! Waiting for the video.

  • @leemark7739 · 5 months ago

    @@Doctor_monk where is it

  • @DocuFlow · 11 months ago

    Apologies if I missed it, but did the GPU get used, and if so was shared memory useful? I'm wondering if I should get a Mac Mini with max RAM to run in GPU mode.

  • @enricd · 11 months ago

    Hey, no worries. At the end of the video I showed the GPU monitor graph and the CPU one, and everything related to the LLM is running only on the CPU. The GPU is only used by other apps like the screen recording and so on.

  • @human-pl7kx · 11 months ago

    How much RAM does your MacBook have?

  • @enricd · 11 months ago

    24GB, but it was barely using 8GB while running it, with some Chrome tabs open and the screen recording software.

  • @human-pl7kx · 11 months ago

    @@enricd 13B model?

  • @enricd · 11 months ago

    @@human-pl7kx yes, you can check at the end of this video where I showed the Mac's Activity Monitor with the RAM around 8-9GB: kzread.info/dash/bejne/hmihrMWzZ8e4pqg.html

  • @human-pl7kx · 11 months ago

    @@enricd I cannot run llama 2 13B on a mac with 8GB. Looks like I ran out of memory.

  • @enricd · 11 months ago

    @@human-pl7kx Oh interesting... does it work with the 7B version? Do you also have any other apps open using RAM apart from llama.cpp?

  • @manavshah9062 · 11 months ago

    Will it work on Linux?

  • @enricd · 11 months ago

    It should work, yes. llama.cpp runs on Windows, Linux, macOS, and Docker.

  • @Nuwiz · 11 months ago

    Have you noticed any performance drop in the 70b version?

  • @enricd · 11 months ago

    I only tried the 7B and 13B versions. For the 70B version I'm not sure what machine specs would be required, but apparently Llama 2 70B is the best open base LLM on the HF leaderboard.