Local Low Latency Speech to Speech - Mistral 7B + OpenVoice / Whisper | Open Source AI

Science and Technology

👊 Become a member and get access to GitHub:
/ allaboutai
🤖 AI Engineer Course:
scrimba.com/?ref=allabtai
Get a FREE 45+ ChatGPT Prompts PDF here:
📧 Join the newsletter:
www.allabtai.com/newsletter/
🌐 My website:
www.allabtai.com
Openvoice:
github.com/myshell-ai/OpenVoice
LM Studio:
lmstudio.ai
I created a local low latency speech to speech system with LM Studio, Mistral 7B, OpenVoice and Whisper. This works 100% offline and uncensored, with no dependencies like APIs. Still working on optimizing the latency. Running on a 4080.
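
The description above maps to a simple loop: record a phrase, transcribe it with Whisper, send the text to Mistral 7B through LM Studio's local OpenAI-compatible server (port 1234 is LM Studio's documented default), synthesize the reply with OpenVoice, and play it. This is a minimal sketch, not the video's actual code; `record`, `transcribe`, and `speak` are placeholder hooks for the mic-capture, Whisper, and OpenVoice stages:

```python
import json
import urllib.request

# LM Studio's local OpenAI-compatible endpoint (its documented default port).
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def ask_mistral(user_text, history=None, url=LMSTUDIO_URL):
    """One chat completion against the local server; returns the reply text."""
    messages = (history or []) + [{"role": "user", "content": user_text}]
    req = urllib.request.Request(
        url,
        data=json.dumps({"messages": messages, "temperature": 0.7}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

def speech_to_speech_turn(record, transcribe, ask, speak):
    """One turn of the loop: mic -> Whisper -> Mistral -> OpenVoice -> speakers."""
    audio = record()          # capture microphone audio (e.g. with sounddevice)
    text = transcribe(audio)  # Whisper STT
    reply = ask(text)         # Mistral 7B via the local LM Studio server
    speak(reply)              # OpenVoice TTS + playback
    return text, reply
```

Everything stays on the machine, which is what makes the system offline-capable; latency is then dominated by the three model stages rather than network round trips.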
00:00 Intro
00:31 Local Low Latency Speech to Speech Flowchart
01:32 Setup / Python Code
05:13 Local Speech to Speech Test 1
07:06 Local Speech to Speech Test 2
10:06 Local Speech to Speech Simulation
12:37 Conclusion

Comments: 159

  • @JohnSmith762A11B · 4 months ago

    More suggestions: add a "thought completed" detection layer that decides when the user has finished speaking based on the STT input so far (context, natural pauses, and such), and auto-submits the text to the AI backend. Then have the app immediately begin listening to the microphone at the conclusion of playback of the AI's TTS-converted response. Yes, sometimes the AI will interrupt the speaker if they hadn't entirely finished what they wanted to say, but that is how real human conversations work when one person perceives the other has finished their thought and chooses to respond. Also, if the user says "What?" or "(could you) Repeat that?" or "Please repeat?" or "Say again?" or "Sorry, I missed that," the system should simply play the last WAV file again without another round trip to the AI inference server and another TTS conversion of the text. Reserve Ctrl-C for stopping and starting this continuous auto-voice recording and response process instead. This will shave many precious milliseconds of latency and make the conversation much more natural and less like using a walkie-talkie.
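
The replay idea in the comment above is easy to sketch: cache the last synthesized WAV and short-circuit the pipeline when the user asks for a repeat. This is an illustration of the commenter's suggestion, not code from the video; the phrase list and stage names are assumptions:

```python
# Phrases (after normalization) that should trigger a replay of the cached
# reply instead of a fresh LLM + TTS round trip.
REPEAT_PHRASES = {
    "what", "repeat that", "could you repeat that", "please repeat",
    "say again", "sorry i missed that",
}

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so 'Say again?' matches 'say again'."""
    kept = "".join(c for c in text.lower() if c.isalpha() or c.isspace())
    return " ".join(kept.split())

def is_repeat_request(text: str) -> bool:
    return normalize(text) in REPEAT_PHRASES

class Responder:
    """Replays the last synthesized WAV for repeat requests."""

    def __init__(self, ask, tts, play):
        self.ask, self.tts, self.play = ask, tts, play  # injectable stages
        self.last_wav = None

    def handle(self, user_text):
        if self.last_wav is not None and is_repeat_request(user_text):
            self.play(self.last_wav)  # no LLM / TTS round trip
            return "replayed"
        self.last_wav = self.tts(self.ask(user_text))
        self.play(self.last_wav)
        return "generated"
```

Since the repeat check is a dictionary lookup, it costs microseconds, versus seconds for a full inference-plus-synthesis round trip.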

  • @SaiyD · 3 months ago

    Nice, let me give one suggestion on top of yours: add a random choice with a 50% chance to either replay the audio or send your input to the backend.

  • @ChrizzeeB · 3 months ago

    So it'd be sending the STT input again and again with every new word detected, rather than just at the end of a sentence or message?

  • @deltaxcd · 3 months ago

    I have a better idea: feed it the partial prompt without waiting for the user to finish, and have it start generating a response at the slightest pause. If the user continues talking, more text is added to the prompt and the output is regenerated. If the user talks over the speaking AI, the AI terminates its response and continues listening. This improves things twofold, because the model gets a chance to process the partial prompt, which reduces the time needed to process the full prompt later. Combined with not waiting for the full reply, the conversation becomes completely natural. There is no need for any of that "say again" logic, because the AI will do that by itself if asked.
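
The eager-generation scheme described above can be sketched as a small state machine: accumulate transcript fragments, draft a reply once a pause exceeds a threshold, and throw the draft away if more speech arrives. A sketch of the commenter's idea, not the video's code; the 0.6 s threshold and the names are assumptions, and a real version would also cancel in-flight generation:

```python
class EagerResponder:
    """Drafts a reply as soon as the speaker pauses; restarts on more speech."""

    def __init__(self, generate, pause_threshold=0.6):
        self.generate = generate          # callable: partial prompt -> reply
        self.pause_threshold = pause_threshold
        self.chunks = []                  # transcript fragments so far
        self.last_speech = None           # timestamp of last STT fragment
        self.pending_reply = None         # draft reply, if any

    def on_transcript(self, chunk, now):
        """Called for each new STT fragment from the speech recognizer."""
        self.chunks.append(chunk)
        self.last_speech = now
        self.pending_reply = None         # user kept talking: discard the draft

    def on_tick(self, now):
        """Called periodically; drafts a reply once the pause is long enough."""
        if self.chunks and self.pending_reply is None \
                and now - self.last_speech >= self.pause_threshold:
            self.pending_reply = self.generate(" ".join(self.chunks))
        return self.pending_reply
```

The payoff the commenter describes is that prompt processing overlaps with the user still speaking, so only the incremental tail has to be processed when the pause finally sticks.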

  • @williamjustus2654 · 4 months ago

    Some of the best work and fun that I have seen so far. Can't wait to try on my own. Keep up the great work!!

  • @Canna_Science_and_Technology · 4 months ago

    Awesome! Time to replace my slow speech to speech code using OpenAI. I also added ElevenLabs for a bit of a comedic touch. Thanks for putting this together.

  • @ales240 · 4 months ago

    Just subscribed! Can't wait to get my hands on it, looks super cool!

  • @tommoves3385 · 4 months ago

    Hey Kris - that is awesome. I like it very much. Great that you do this open source stuff. Very cool 😎.

  • @BruceWayne15325 · 4 months ago

    Very impressive! I'd love to see them implement this in smartphones for real-time translation when visiting foreign countries / restaurants.

  • @optimyse · 3 months ago

    S24 Ultra?

  • @deltaxcd · 3 months ago

    There are models that do speech-to-speech translation.

  • @ryanjames3907 · 4 months ago

    Very cool low latency voice, thanks for sharing. I watch all your videos and look forward to the next one.

  • @nyny · 4 months ago

    That's supah cool, I actually built something almost exactly like this yesterday, and I get about the same performance. The hard part is figuring out threading/process pools/asyncio to get that latency down. I used small instead of base; I think I get about the same response or better.

  • @user-rz6pp5my4t · 3 months ago

    Hi! Very impressive!! Do you have a GitHub repo to share your code?

  • @CognitiveComputations · 2 months ago

    Can we see your code, please?

  • @limebulls · 2 months ago

    I'm interested in it as well.

  • @swannschilling474 · 4 months ago

    I am still using Tortoise but Open Voice seems to be promising! 😊 Thanks for this video!! 🎉🎉🎉

  • @deeplearningdummy · 3 months ago

    I've been trying to figure out how to do this. Great job. I want to support your work and get this up and running for myself, but is YouTube membership the only option?

  • @zyxwvutsrqponmlkh · 3 months ago

    I have tried OpenVoice and Bark, but VITS by far makes the most natural-sounding voices.

  • @avi7278 · 4 months ago

    In the US we have this concept: if you watch a football game, which is notorious for having a shizload of commercials (i.e. latency), and you start watching 30 minutes late but from the beginning, you can skip most of the commercials. If you just shift the latency to the beginning, 15 seconds of "loading" would probably be sufficient for a 5-10 minute conversation between the two chatbots. You could also avoid loops by having a third-party observer who reviews the last 5 messages, determines if the conversation has gone "stale", and interjects a new idea into one of the interlocutors.

  • @codygaudet8071 · a month ago

    Just earned yourself a sub, sir!

  • @arvsito · 4 months ago

    It would be very interesting to see this in a web application.

  • @PhillipThomas87 · 3 months ago

    I mean, this is dependent on your hardware... Are the specs anywhere for this "inference server"?

  • @aladinmovies · 3 months ago

    Good job. Interesting video.

  • @denisblack9897 · 4 months ago

    I've known about this for more than a year now and it still blows my mind. wtf

  • @darcwader · 4 days ago

    This was more comedy show than tech, lol. Such hilarious responses from Johnny.

  • @user-bd8jb7ln5g · 4 months ago

    This is great. But personally I think speech recognition with push-to-talk or push-to-toggle talk is most useful.

  • @researchforumonline · 3 months ago

    Wow, very cool! Thanks.

  • @cmcdonough2 · 6 days ago

    This was great 😃👍

  • @yoagcur · 4 months ago

    Fascinating. Any chance you could upgrade it so that specific voices could be used and a recording made automatically? Could make for some interesting Biden v Trump debates.

  • @JohnSmith762A11B · 4 months ago

    I wonder if you are caching (or could cache) the processed .mp3 voice model after the speech engine processes it and turns it into partials. That would cut out a lot of latency if it didn't need to process those 20 seconds of recorded voice audio every time. Right now it's pretty fast, but the latency still sounds more like walkie-talkies than a phone call.
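
The caching suggested above amounts to memoizing the expensive reference-audio step: OpenVoice extracts a tone-color embedding from the reference clip, and that result can be keyed by the file's content hash so it is computed once per voice. A hedged sketch; `extract` is a placeholder for the real extraction call, not OpenVoice's API:

```python
import hashlib
from pathlib import Path

_embedding_cache = {}  # content hash -> embedding

def _file_digest(path) -> str:
    """Hash the file contents, so renaming or re-copying the clip still hits."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def get_voice_embedding(ref_audio_path, extract):
    """Return the tone-color embedding for a reference clip, computing it at most once.

    `extract` stands in for the expensive step (in OpenVoice, extracting the
    tone-color embedding from ~20 s of reference audio).
    """
    key = _file_digest(ref_audio_path)
    if key not in _embedding_cache:
        _embedding_cache[key] = extract(ref_audio_path)
    return _embedding_cache[key]
```

Persisting the cache to disk (e.g. pickling the embedding next to the clip) would carry the saving across restarts as well.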

  • @levieux1137 · 4 months ago

    It could go way further by using the native libs and dropping all the Python-based wrappers that pass data between stages using files and that copy, copy, copy and recopy data all the time. For example, llama.cpp is clearly recognizable in the lower layers; all the tunable parameters match it. I don't know about OpenVoice, but the state the presenter arrived at shows that we're pretty close to a DIY conversational robot, which is pretty cool.

  • @JohnSmith762A11B · 4 months ago

    @@levieux1137 By native libs, do you mean the system TTS on, say, Windows and macOS?

  • @levieux1137 · 4 months ago

    @@JohnSmith762A11B Not necessarily that; I'm speaking about the underlying components used here. In fact, if you look, this is essentially Python code built as a wrapper on top of other parts that already run natively. The llama.cpp server, for example, is apparently used here. Once wrapped into layers and layers, it becomes heavy to transport contents from one layer to another (particularly when passing via files, but even memcpy is expensive). It's even possible that some elements are re-loaded from scratch and re-initialized after each sentence. The Python script here appears to be mostly a wrapper around all such components, working like a shell script: record input from the microphone to a file, send it to OpenVoice, send that output to a file, load another component with that file, and so on. This is just like a shell script working with files and heavy initialization at every step. Dropping that layer and directly using the native APIs of the various libs and components would be way more efficient. And it's very possible that past a point the author will discover that Python is not needed at all, which could suddenly offer more possibilities for lighter embedded processing.

  • @arkdirfe · 3 months ago

    Interesting, this is similar to a small project I made for myself. But instead of a chatbot conversation, the Whisper output is fed into SAM (yes, the funny robot voice) and sent to an audio output. Basically it makes SAM say whatever I say with a slight delay. I'm chopping the speech into small segments so it can start transcribing while I speak for longer, which introduces occasional weirdness, but I'm fine with that.

  • @SonGoku-pc7jl · 3 months ago

    Thanks, good project. Can Whisper translate my Spanish to English directly with a small change in the code? And do I need to change something in the TTS as well? Thanks!

  • @fatsacktony1 · 3 months ago

    Could you get it to read information and context from a video game, like X4: Foundations, so that you could ask it, like a personal assistant, to help you manage your space empire?

  • @MelindaGreen · 3 months ago

    I'm daunted by the idea of setting up these development systems just to use a model. Any chance people can bundle them into one big executable for Windows and iOS? I sure would love to just load-and-go.

  • @duffy666 · 12 days ago

    I really like it! Is this already on GitHub for members? (I could not find it.)

  • @SaveTheHuman5 · 3 months ago

    Hello, can you please tell us your CPU, GPU, RAM, etc.?

  • @mastershake2782 · 3 months ago

    I am trying to clone a voice from a reference audio file, but despite following the standard process, the output doesn't seem to change according to the reference. When I change the reference audio to a different file, there's no noticeable change in the voice characteristics of the output. The script successfully extracts the tone color embeddings, but the conversion process doesn't seem to reflect these in the final output. I'm using the demo reference audio provided by OpenVoice (male voice), but the output synthesized speech remains in a female voice, typical of the base speaker model. I've double-checked the script, model checkpoints, and audio file paths, but the issue persists. If anyone has encountered a similar problem or has suggestions on what might be going wrong, I would greatly appreciate your insights. Thank you in advance!

  • @Embassy_of_Jupiter · 3 months ago

    This gave me an interesting idea. One could build streaming LLMs that at least partially build thoughts one word at a time (I mean the input, not the output): basically precompute most of the final embedding from the unfinished sentence, so that when the full sentence arrives and it's time to answer, only a few very cheap, low-latency layers remain. A different but related idea: you could feed unfinished sentences into Mistral with a prompt that says "this is an unfinished sentence; say INTERRUPTION if you think it is an appropriate time to interrupt the speaker", to make the voice bot interrupt you like a normal person would. It would feel much more natural.
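
The second idea above — wait; the interruption classifier can be sketched as a tiny wrapper around any LLM call: format the partial transcript into a yes/no prompt and parse the one-word answer. This is an illustration of the commenter's proposal, not anything from the video; the prompt wording and function names are assumptions:

```python
INTERRUPT_PROMPT = (
    "This is an unfinished sentence from a live conversation:\n"
    '"{partial}"\n'
    "Reply with the single word INTERRUPTION if now is an appropriate moment "
    "to interrupt the speaker, otherwise reply WAIT."
)

def should_interrupt(partial_sentence, ask_llm):
    """Ask the model (e.g. Mistral 7B on the local server) whether to jump in.

    `ask_llm` is any callable taking a prompt string and returning the
    model's text reply.
    """
    reply = ask_llm(INTERRUPT_PROMPT.format(partial=partial_sentence))
    return reply.strip().upper().startswith("INTERRUPTION")
```

In practice this check would run on every STT fragment during a pause, so a small, fast model (or the same local Mistral with a short context) keeps the added latency negligible.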

  • @deltaxcd · 3 months ago

    Actually, AI can do that: you can feed it a partial prompt, let it process it, then add more and continue from where you left off. That's a huge speedup, but prompt processing is pretty fast anyway. To make it respond faster you need to let it speak before it finishes "thinking".

  • @jacoballessio5706 · 3 months ago

    I wonder if you could directly convert embeddings to speech and skip text inference.

  • @gabrielsandstedt · 4 months ago

    If you are fine venturing into C# or C++, I know how you can improve the latency and create a single .exe that includes all of your different parts here, including local models for the Whisper voice recognition. I have done this myself using LLamaSharp for running the GGUF file, and then embedding all external Python into a batch process which it calls.

  • @matthewfuller9760 · a month ago

    Code on GitHub?

  • @gabrielsandstedt · a month ago

    @@matthewfuller9760 I should put it there, actually. I have been jumping between projects lately without sharing much. Will send a link when it is up.

  • @matthewfuller9760 · a month ago

    @@gabrielsandstedt cool

  • @irraz1 · 21 days ago

    Wow! I would love to have such an assistant to practice languages. The "python hub" code: do you plan to share it at some point?

  • @googlenutzer3384 · 3 months ago

    Is it also possible to adjust it to different languages?

  • @musumo1908 · 4 months ago

    Hey, cool… any way to run this self-hosted for an online speech to speech setup? I want to drop this into a chatbot project. What membership level gives access to the code? Thanks.

  • @kleber1983 · a month ago

    Hi, I'd like to know the computer specs required to run your speech to speech system. I'm quite interested, but first I need to know if my computer can handle it. Thanks.

  • @LFPGaming · 4 months ago

    Do you know of any offline/local way to do translations? I've been searching but haven't found a way to do local translations of video or audio using large language models.

  • @deltaxcd · 3 months ago

    There is a program, Subtitle Edit, which can do that.

  • @JG27Korny · 4 months ago

    I run oobabooga with Silero plus Whisper, but those take forever to make voice from text, especially Silero.

  • @ProjCRys · 4 months ago

    Nice! I was about to create something like this for myself, but I still couldn't use OpenVoice because I keep failing to run it in my venv instead of conda.

  • @Zvezdan88 · 4 months ago

    How do you even install OpenVoice?

  • @josephtilly258 · a month ago

    Really interesting. A lot of it I can't understand because I don't know coding, but speech to speech could be a big thing within a few years.

  • @LadyTink · 3 months ago

    Kinda feels like something the "rabbit R1" does with the whole fast speech to speech thing

  • @ExploreTogetherYT · 3 months ago

    How much RAM do you have to run Mistral 7B locally? Using GPU or CPU?

  • @EpicFlow · 3 months ago

    Looks interesting, but where is this community link you mentioned? :)

  • @skullseason1 · 2 months ago

    How can I do this with the Apple M1? This is soooo awesome, I need to figure it out!

  • @enriquemontero74 · 3 months ago

    Can I configure it in Spanish, so that Mistral speaks Spanish and OpenVoice works in Spanish? I would like to confirm this before joining as a member to access the GitHub and try to make it work, since my native language is Spanish. Thank you for your work, it is incredible; you deserve many more followers, keep it up.

  • @JoeyRanieri12 · 3 months ago

    Does OpenVoice perform better than Whisper's TTS?

  • @Yossisinterests-hq2qq · 3 months ago

    Hi, I don't have talk.py. Is there another way of running it that I'm missing?

  • @JohnGallie · 3 months ago

    Is there any way to give the Python process 90% of system resources so it would be faster?

  • @MegaMijit · 3 months ago

    This is awesome, but the voice could use some fine-tuning to sound more realistic.

  • @mickelodiansurname9578 · 3 months ago

    Can the LLM handle being told in a system prompt that it will be taking in sentences in small chunks, say cut up into 2-second audio chunks per transcript? Can the Mistral model do that? If so, you might even be able to get it to 'butt in' to your prompt. Now that's low latency!

  • @deltaxcd · 3 months ago

    No, it can't be told that, but it's not necessary. Just feed it the chunk, and if the user speaks before it manages to reply, restart and feed it more.

  • @NirmalEleQtra · a day ago

    Where can I find the whole GitHub repo?

  • @fire17102 · 4 months ago

    Would love to see some real-time animations to go with the voice. It could be a face, but it could also be minimalistic (like the Rabbit R1).

  • @wurstelei1356 · 4 months ago

    You'd need a second GPU for this, say if you put Stable Diffusion on it. Displaying a robot face with emotions would be nice.

  • @leucome · 3 months ago

    Try Amica AI. It has a VRM 3D/VTuber character and multiple options for the voice and the LLM backend.

  • @fire17102 · 2 months ago

    @@leucome Does it work locally in real time?

  • @fire17102 · 2 months ago

    @@wurstelei1356 Again, I think a minimalistic animation would also do the trick, or prerendering the images once and using them in the appropriate sequence in real time.

  • @leucome · 2 months ago

    @@fire17102 Yes, it can work in real time locally as long as the GPU is fast and has enough VRAM to run the AI + voice. It can also connect to an online service if required. I uploaded a video where I play Minecraft and talk to the AI at the same time, with all the components running on a single GPU.

  • @matthewfuller9760 · a month ago

    I think even at 1/3 the speed, with my RTX Titan it would run just fine for learning a new language. Waiting 3 seconds is perfectly acceptable for a novice language learner.

  • @suminlee6576 · 3 months ago

    Do you have a video showing how to do this step by step? I was going to become a paid member, but I couldn't find the how-to video in your paid channel.

  • @squiddymute · 3 months ago

    No API = pure genius.

  • @mertgundogdu211 · 19 days ago

    How can I try this on my computer? I couldn't find talk.py in the GitHub code.

  • @weisland2807 · 3 months ago

    It would be funny if you had this in games, like the people on the streets of GTA having convos fueled by something like this. Maybe it's already happening though, I'm not in the know. Awesomesauce!

  • @binthem7997 · 3 months ago

    Great tutorial, but I wish you would share gists or your code.

  • @MrScoffins · 3 months ago

    So if you disconnect your computer from the Internet, will it still work?

  • @jephbennett · 2 months ago

    Yes, this code package is not calling APIs (which is why the latency is low), so it doesn't need an internet connection. The downside is that it cannot access info outside of its core dataset, so no current events or anything like that.

  • @OdikisOdikis · 3 months ago

    The predefined answer timing is what makes it not a real conversation. It should answer questions at random timings, like any human who thinks of something and only then answers. Randomizing timings would create more realistic conversations.

  • @tag_of_frank · 2 months ago

    Why LM Studio over oobabooga? What are the pros/cons of each? I have been using oobabooga, but I'm wondering why one might switch.

  • @aboudezoa · 3 months ago

    Running on a 4080 🤣 makes sense, the damn thing is very fast.

  • @64jcl · 3 months ago

    Surely the response time is a function of the rig you are doing this on. An RTX 4080, as you have, is no doubt a major contributor here, and I would guess you have a beast of a CPU and high-speed memory on a newer motherboard.

  • @microponics2695 · 3 months ago

    I have the same uncensored model, and when I ask it to list curse words it says it can't do that. ???

  • @jungen1093 · 3 months ago

    Lmao that’s annoying

  • @MetaphoricMinds · 3 months ago

    What GPU are you running?

  • @AllAboutAI · 3 months ago

    4080 RTX!

  • @deltaxcd · 3 months ago

    I think to decrease latency more you need to make it speak before the AI finishes its sentence. Unfortunately there is no obvious way to feed it a partial prompt, but waiting until it finishes generating the reply takes way too long.

  • @Nursultan_karazhigit · 3 months ago

    Thanks. Is the Whisper API free?

  • @m0nxt3r · 11 days ago

    It's open source.

  • @witext · 2 months ago

    I look forward to an actual speech-to-speech LLM: no speech-to-text translation layers, pure speech in and speech out. It would be revolutionary, imo.

  • @Ms.Robot. · 3 months ago

    ❤❤❤🎉 nice

  • @aestendrela · 3 months ago

    It would be interesting to make a real-time translator. I think it could be very useful; the language barrier would end.

  • @deltaxcd · 3 months ago

    Meta did it already; they created a speech-to-speech translation model.

  • @alexander191297 · 3 months ago

    I swear on my mother’s grave lol… this AI is hilarious! 😂😂😂

  • @Stockholm_Syndrome · 4 months ago

    BRUTAL! hahaha

  • @TheRottweiler_Gemii · 7 days ago

    Has anybody finished this and got the code or a link they can share, please?

  • @jeffsmith9384 · 3 months ago

    I would like to see how a chat room full of different models would problem-solve... ChatGPT + Claude + * 7B + Grok + Bard, all in a room, trying to decide what you should have for lunch.

  • @jerryqueen6755 · a month ago

    How can I install this on my PC? I am a member of the channel.

  • @AllAboutAI · a month ago

    Did you get the GitHub invite?

  • @jerryqueen6755 · a month ago

    @@AllAboutAI Yes, thanks.

  • @miaohf · a month ago

    @@AllAboutAI I am a member of the channel too; how do I get the GitHub invite?

  • @ArnaudMEURET · 3 months ago

    Just to paraphrase your models: “Dude ! Are you actually grabbing the gorram scrollbars to scroll down an effing window !? What is this? 1996 ? Ever heard of a mouse wheel? You know it’s even emulated by double drag on track pads, right?” 🤘

  • @ajayjasperj · 3 months ago

    We could make YouTube content with those conversations between bots 😂❤

  • @JohnGallie · 3 months ago

    You need to get out more, man, lol. That was toooo much!

  • @Edward_ZS · 3 months ago

    I don't see Dan.mp3.

  • @picricket712 · 16 days ago

    Can someone please give me the source code?

  • @tijendersingh5363 · 4 months ago

    Just wow.

  • @ayatawan123 · 3 months ago

    This made me laugh so hard!

  • @BrutalStrike2 · 4 months ago

    Jumanji Alan

  • @laalbujhakkar · a month ago

    How is a system that goes out to OpenAI "local"????????

  • @seRko123 · 13 days ago

    OpenAI Whisper runs locally.

  • @VitorioMiguel · 3 months ago

    Try fast-whisper. Open source and faster

  • @mickelodiansurname9578 · 3 months ago

    AI: "We got some rich investors on board, dude, and they're willing to back us up!" I think this script just announced the games commencing in the 2024 US election... [not in the US, so reaches for popcorn]

  • @kritikusi-666 · 4 months ago

    The voices are meh... cool project though. You always have some fire content. You could train an LLM just off your content and be set, haha.

  • @artisalva · 3 months ago

    Haha, AI conversations could have their own channels.

  • @MetaphoricMinds · 3 months ago

    Dude just made a JARVIS embryo.

  • @smthngsmthngsmthngdarkside · 3 months ago

    So where's the source code, mate? Or is this just a hook for your newsletter marketing and crap website?

  • @Skystunt123 · 2 days ago

    Just a hook, the code is not shared.

  • @robertgoldbornatyout · a month ago

    Could make for some interesting Biden v Trump debates

  • @wurstelei1356 · 4 months ago

    Sadly this video has fewer hits than it deserves. I am looking forward to a more automated version of this; hopefully the low view count won't hinder it.

  • @calvinwayne3017 · 3 months ago

    Now add a MetaHuman and Audio2Face :)

  • @lokiwhacker · 2 months ago

    Thought this was really cool, love open source. But this really isn't open source if you're hiding it behind a paywall... smh

  • @NoLimitYou · 3 months ago

    Too bad you take open source and make it closed.

  • @mblend27 · 3 months ago

    Explain?

  • @NoLimitYou · 3 months ago

    @@mblend27 You take code that is openly available and ask people to become members to receive the code of what you demo using that open-source code. The whole idea of open source is that everyone contributes without putting it behind walls.

  • @Ms.Robot. · 3 months ago

    You can in several ways.

  • @NoLimitYou · 3 months ago

    You take open source, make something with it, and put it behind a wall.

  • @TheGrobe · 3 months ago

    @@mblend27 You make someone pay to access something on GitHub that is composed of open-source components.

  • @asdasdaa7063 · 4 months ago

    It's great, but OpenVoice doesn't allow commercial use. It would be nice to do this with a model that can be used commercially.

  • @javedcolab · 3 months ago

    Don't make AI lie to your face, man. Thankfully this is local.

  • @Canna_Science_and_Technology · 4 months ago

    It is funny.

  • @thygrrr · 3 months ago

    WORST OPSEC for a hacker. :D
