Stable Warpfusion Tutorial: Turn Your Video to an AI Animation
Film & Animation
The first 1,000 people to use the link will get a 1 month free trial of Skillshare skl.sh/mdmz06231
Learn how to use Warpfusion to stylize your videos. Discover key settings and tips for excellent results so you can turn your own videos into AI animations.
Tech support: / discord
📁Warpfusion Settings:
bit.ly/42rJLPw
🔗Links:
Warpfusion v0.16(FREE & recommended): bit.ly/3pBh5X3
Warpfusion v0.14: bit.ly/42HozoG
DreamShaper: civitai.com/models/4384/dream...
Stable WarpFusion local install guide: • Stable WarpFusion loca...
Another local install guide: github.com/Sxela/WarpFusion/b...
Best Custom Stable Diffusion Models stablecog.com/blog/best-custo...
How to get good prompts: bit.ly/3IEAzjQ
How to use Luma AI: • Create FPV-Like Videos...
Disclaimer: Some links in the description are affiliate links. If you make a purchase through them, I may earn a small commission at no extra cost to you.
©️ Credits:
Stock video: www.pexels.com/video/energeti...
James Gerde: / gerdegotit
Marc Donahue: / permagrinfilms
Markus Paolo Pe Benito: / markuspaolo_
Alex Spirin: / defileroff
Noah Miller: / noahrobertmiller
Willis Hsieh: / willis.visual
Diesellord: / diesel_ai_art
Stefano Knoll: / steknoll
Josh Doctors: / fewjative
patchesflows: / patchesflows
Yüksel Aykilic: / designyukos
Oleh Ibrahimov: / drimota.ai
nointroproductions: / nointroproductions
Positive Prompts:
"0": [
"realistic female beautiful statue of liberty is a rocky statue dancing, manhattan city skyline in the background, the environment is new york city in day time, realism, hyper detailed, cinematic lighting, photography, High detail RAW color art, diffused soft lighting, sharp focus, hyperrealism, cinematic lighting, unreal engine, 4k, vibrant colours, dynamic lighting, digital art, winning award masterpiece, fantastically beautiful, illustration, aesthetically, trending on artstation, art by Zdzisław Beksiński x Jean Michel Basquiat, high quality, 8k, "
]
Negative prompts:
"0": [
"smoke, fog, lowres, (bad anatomy:1.2), EasyNegative, multiple views, six fingers, black & white, monochrome, (bad hands:1.2), (text:1.2), error, cropped, worst quality, low quality, normal quality, jpeg artifacts, (signature:1.2), (watermark:1.3), username, blurry, out of focus, amateur drawing, colored, shading, displaced feet, out of frame, massive breasts, large breasts, ((ugly)), nude nsfw"
]
⏲ Chapters:
0:00 Introducing Warpfusion
0:34 How to start with Warpfusion
1:08 Google colab: local vs online runtime
2:01 How to transform a video
2:34 What's an AI model?
3:06 Settings
8:35 How to run Warpfusion
9:23 Animation preview
9:30 How to change GUI settings
12:06 How to export the animation
12:36 Get featured
12:49 Warpfusion + Luma AI
Support me on Patreon:
bit.ly/2MW56A1
🎵 Where I get my Music:
bit.ly/3boTeyv
🎤 My Microphone:
amzn.to/3kuHeki
🔈 Join my Discord server:
bit.ly/3qixniz
Join me!
Instagram: / justmdmz
Tiktok: / justmdmz
Twitter: / justmdmz
Facebook: / medmehrez.bss
Website: medmehrez.com/
#warpfusion #ai #stablediffusion
Who am I?
-----------------------------------------
My name is Mohamed Mehrez and I create videos around visual effects and filmmaking techniques. I currently focus on making tutorials in the areas of digital art, visual effects, and incorporating AI in creative projects.
Comments: 415
Update: I recommend using Warpfusion v0.16: bit.ly/3pBh5X3
Update 03/04: Just re-tested the same exact steps in the tutorial using v0.14 and the Dreamshaper 8 model, and it works perfectly!
The first 1,000 people to use the link will get a 1 month free trial of Skillshare skl.sh/mdmz06231
For tech support and other questions: discord.gg/YrpJRgVcax
Don't forget #mdmz when you post your Warpfusion videos 😉🥳
@juanjuanchen6814
9 months ago
the problem is, if I pay you, can I use it on a free Colab or free Kaggle account? If not, it seems useless
@kelvinpatricio8842
9 months ago
I'm using v0_16_13 and the script is giving an error on Generate optical flow and consistency maps 🙁
@kelvinpatricio8842
9 months ago
Can someone help me?
@KREOGHOSTOFFICIAL
3 months ago
YOU ARE CONFUSING THE SHIT OUTTA ME BRO
📁Warpfusion Settings: bit.ly/42rJLPw If you keep getting errors, use Warpfusion v0.16: bit.ly/3pBh5X3
@qwax
1 year ago
What are the GPU requirements/VRAM requirements for Warpfusion?
@BoomBoomMac
11 months ago
Does it work with M1 MacBook OR any apple computers?
@MREDZ
10 months ago
Hey man, thanks for your in-depth tutorials on stable diffusion and warp fusion, they've helped me understand the software greatly. Unfortunately I am having an issue when trying to create a warp fusion, specifically at the 'define SD + K functions, load model' section. I keep getting this error no matter what I do:
NameError Traceback (most recent call last)
Cell In[8], line 6
4 import argparse
5 import math,os,time
----> 6 os.chdir( f'{root_dir}/src/taming-transformers')
7 import taming
8 os.chdir( f'{root_dir}')
NameError: name 'root_dir' is not defined
Any help would be much appreciated, as there is nothing online that comes up when searching for a solution. Thanks.
@Rishivlogs551
10 months ago
1:19
@MREDZ
10 months ago
@@Rishivlogs551 Ah okay thanks, I should've checked that out before I started the process. I am now getting a different type of error when trying to run through a hosted runtime, under 'Install and import dependencies':
ImportError: cannot import name 'isDirectory' from 'PIL._util' (/usr/local/lib/python3.10/dist-packages/PIL/_util.py)
Any idea what could be causing this? :\
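[Editor's note] Errors like the `root_dir` NameError quoted above are usually a symptom of Colab cells running out of order: a later cell references a name that an earlier setup cell was supposed to define, so that cell must be re-run first. A minimal sketch of the failure mode (the `chdir_to_src` helper is illustrative, not Warpfusion's actual code; the path comes from the traceback above):

```python
import os


def chdir_to_src(namespace):
    """Mimic the failing notebook cell: it needs 'root_dir' defined by an earlier cell."""
    if "root_dir" not in namespace:
        # This is exactly what the notebook raises when the setup cell was skipped.
        raise NameError("name 'root_dir' is not defined")
    return os.path.join(namespace["root_dir"], "src", "taming-transformers")


# Without the setup cell having run, the lookup fails:
try:
    chdir_to_src({})
except NameError as e:
    print("run the setup cell first:", e)

# After the setup cell defines root_dir, the same call succeeds:
print(chdir_to_src({"root_dir": "/content"}))
```

The practical fix is simply to run the notebook top to bottom (or use "Run all") so every definition cell executes before the cells that depend on it.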
I'm definitely going to give it a try and experiment with different settings.
amazing and it really does look good
Very good, thanks !!!
very nice and I always wondered how it was done, not easy but the output is impressive
@MDMZ
9 months ago
Thank you! Cheers!
great tutorial, I have followed another tutorial to train my own AI model using rendered images of a character and used it, my first try wasn't so successful ( not sure if the reason is the video or the model) , any chance you can perhaps create a tutorial on creating our own AI models and using it on warpfusion?
@MDMZ
1 year ago
I followed this once before and it worked great!: kzread.info/dash/bejne/nXeXutSmhs6XdpM.html
@saraeljamal5009
1 year ago
@MDMZ, Thank you for your assistance! I managed to train my AI model and achieved some progress. However, I'm still struggling with maintaining consistency in masking the female's head throughout each frame. Initially, the mask works for a few frames, but then it starts to take on the form of the original face in the video.
@saumyajeetbhowmick7803
11 months ago
which video tutorial did you use
thanks for the awesome tutorial! Looks amazing, only thing is mine keeps changing the subject's aesthetic looks and especially the face within a couple frames... is there a way to make it keep the same look as the first frame?
@MDMZ
1 year ago
you can try to fix that by scheduling
That's impressive!!
@MDMZ
1 year ago
🙏
In the "define SD + K functions, load model" section should I select CPU or GPU for the 'load_to' variable?
Please do a tutorial for the cola shorts clip it's so amazing
Wonderful 👍👍
Cool bro !! 🔥
@MDMZ
1 year ago
🙏
Amazing !!!!
ty vv much legend❣
This is an awesome tutorial ❤❤❤
@MDMZ
8 months ago
Thank you! Cheers!
Awesome. Great Tutorial, ❤
@MDMZ
1 year ago
Thank you! Cheers!
Which is better, Warpfusion v0.14 or Stable WarpFusion v0.5.12 ?
I tried to follow your instructions here with my own video clip, but I seem to get errors all the time. Maybe it's because there are new versions up and running now that behave differently. What I'm looking for is to use the video clip I have (it's me in front of a green screen). I would like to change myself into something fun, like some kind of animation, but not all different, just making me look animated, and still have the green screen in the background in the final output. Maybe it's not possible in WarpFusion, or what do you think? Should I look at something else, or is it possible to make this with the right prompt and right model? I just can't find any tutorials about it. And I thought your video was great.
@MDMZ
9 months ago
it is possible, I have instructions on how to keep the background untouched in this same tutorial, shooting on a green screen will definitely help with the separation. and YES, you should look into using a newer version
Hi MDMZ, my run stopped at 'Video Masking' with the issue of 'NameError: name 'os' is not defined'. Would be amazing if you can help, thank you.
@AnnaBednarek
10 months ago
Same here. Can somebody help us, please? :(
Best vid. Thanks
@MDMZ
7 months ago
Glad you liked it!
Would you recommend using this to a horizontal 1080p video? I have an NVIDIA 3070.
@MDMZ
1 year ago
both will work fine, depends how you plan to use the output; if for IG/TikTok just go with vertical
question, will this tutorial basically work if i run it locally? Im not familiar with colab pro but i have a 4080.
@MDMZ
1 year ago
yes same process right after you connect to local run
Awesome tutorial!! Quick question, I do have a windows pc, but was wondering will this work on a macbook as well?
@5XM-Film
1 year ago
Obviously not for mac. Also would prefer if he would mention this right at the beginning 🤷🏻♂️
@MDMZ
1 year ago
It actually works on the cloud! So your OS doesnt matter
@MDMZ
1 year ago
I think you are referring to the local method, this is the online one 😉
@DearVMON
1 year ago
@@MDMZ hey, that's what I wanted to understand, so I'd know which PC I can work on, whether you just need the Colab option and the local install doesn't matter. That's a relief hh, thank you for the info^^
@5XM-Film
1 year ago
Can anybody help how to get this done with a mac?
How does this compare to using stable diffusion image to image batching for creating a stylized look for videos?
@MDMZ
11 months ago
this is much more consistent
Took about 4 hours to render 4 seconds but man it looks buttery smooth. My 1080ti was really trying🤣
@MDMZ
1 year ago
glad it worked for you 😁
@AhvaBidu
1 year ago
970 here. I envy you! AhaHaHa
@Twigslap
1 year ago
About to try this today wish me luck lol
@Tamannasehgal19
1 year ago
I've got a GTX 1650, would it be okay?
@AhvaBidu
1 year ago
@@Tamannasehgal19 Yes. Better than a 970. But will take time. Oh, I think it's ok. I don't really know. Your card is better than mine, so... I will just shut up now.
Thank you so much! Great video! Does this also work for cartoon characters with different human proportions?
@CYBERNORM
1 year ago
Aah, sorry, I think we r out of cartoon characters.
If I have AMD GPU is it still safe to use the online version only/its the same as not having strong enough hardware?
Nice
Hey! I'm considering buying a new PC with 8GB VRAM. Since Warpfusion seems to require more than that (which means I'd have to pay for Colab Pro anyway), is there any benefit to buying a better 8GB VRAM PC, or should I just stick with my laptop? Thanks for the tutorial.
@MDMZ
1 year ago
depends on what you intend to use it for, 8GB is a bit low for SD
How do you increase the trails effect?
what's the song that people use for stable diffusion
Quick Question. If I want to try to keep the original background which options do I select?
@MDMZ
9 months ago
I actually explain that in the video
Loved your video! Super Super Helpfull. Is there a way or a prompt to achieve a better lipsync or mouth movement? I'm struggling with this.
@MDMZ
11 months ago
not yet!
Is this not part of the stable diffusion A1111 web UI, like an extension? It's its own thing? Also, I have 12 GB VRAM. Does anyone have any input on whether similar VRAM worked for them? Thx
@MDMZ
11 months ago
this is its own thing
Do you need the later versions of warpfusion or can you use the earlier ones?
@MDMZ
10 months ago
It's best to use the latest
Does anyone know how much time it takes to make a 30-second video with Warpfusion? I need to understand this in order to present it at a live activation! Many thanks in advance!
@MDMZ
1 year ago
no one will be able to give you the correct answer, it depends on so many factors and it's pretty much impossible to predict until you run it.
You're a handsome man!!! I've been really looking forward to this video. And there is also a question: how do we process VR180 3D video in this way? After all, we cannot get consistently the same result for both lenses (left and right). Please let me know if you have a guide for such a solution with style generation in VR180 3D video. Thank you. We will be following your news, with our whole small team.
@MDMZ
1 year ago
I'm not so familiar with VR, but you can try using the same seed for both videos, or render both videos side by side in a single file then run it through Warp, if that makes sense
@FirstLast-tx3yj
11 months ago
@@MDMZeverytime i run it locally i get the vram error And i could not find a way to install xformers to it (everything out there is about stable diffusion) How can i install xformers so that I lower the ram usage? Also it shows when running the code "no xformers module found" so it must work with xformers i just dont know what to change to activate it Please help
@johnnyc.31
11 months ago
Use A1111 and Deforum or Deforumation. You can control camera angles and more.
where can I find the stable_warpfusion_settings_sample document for the default_settings_path?
Can I use my own GPU or do I need to pay for Google Colab? Can you achieve the same results with Temporal Kit?
hey, how do I diffuse only the background but keep the object original? What's the setting for this masking? Thanksss
@MDMZ
9 months ago
I have covered that in the video
I'm using the free version of Google Colab, so it doesn't let it run. Do I need Colab Pro?
@MDMZ
1 year ago
Hi, as explained in the video, colab pro will give you access to more resources
I'm 2 minutes in and I'm like 🤯 ... so many steps and it feels so complicated
@MDMZ
10 months ago
it only takes a bit of patience, you can do it!
You are a monster, man! And I own a GTX970 😂 so, some others tutorials are more "for me"
@MDMZ
1 year ago
Enjoy!
Is there a way I could use warpfusion locally with Automatic 1111? Please make a tutorial on it 🙏
@MDMZ
1 year ago
you can use stable diffusion locally both with A1111 and warpfusion as well, I do have a stable diffusion tutorial on how to install it with A1111
@theartforeststudio8667
1 year ago
@@MDMZ thank you!!! You mean a tutorial on using Warpfusion with Automatic 1111, not Google Colab, right?
@MDMZ
1 year ago
@theartforeststudio8667 Pretty much the same thing, just different platforms. Warpfusion on Google Colab is used to run Stable Diffusion; A1111 is used to run Stable Diffusion in your browser. Both are set up and work differently, so it depends on which one you're more comfortable with.
I have a trouble about not having really good consistency, is there a tutorial about the settings to make it perfect?
@MDMZ
1 year ago
if you're seeking perfect consistency, we're not there yet! I suggest playing with the settings I covered, try enabling fixed_code, etc...
Hi, super video! However, I have been trying for 2 days and it disconnected at 20%. Is there any fix for that? Thank you in advance :)
@MDMZ, while processing Video Input settings, I got the following error: NameError: name 'generate_file_hash' is not defined. Please guide
Can we use it for photos??
Hi, thank you for the amazing videos, but it keeps disconnecting after a few hours and goes back to square one! How do I keep the connection alive?
@MDMZ
5 months ago
I usually play a 10 hour youtube video on another tab 😅 you gotta keep your computer active
bro, if you don't mind telling us, how many compute units did you use per video on average? especially that video you just showed?
@reubzdubz
1 year ago
I burnt like 20 units just for a 13s vid lol
@radstartrek
1 year ago
@@reubzdubz wow man! thats some expensive job :D
@reubzdubz
1 year ago
@@radstartrek that is if you follow the resolution in the video tho. I went down to 540x960 afterwards.
@radstartrek
1 year ago
@@reubzdubz ok, so it would cost even more compute units on something like 720p.
@MDMZ
1 year ago
honestly I never documented it, as I was experimenting regularly with different resolutions and settings, which affects the rendering time heavily. But yes, the lower the resolution, the faster it runs.
Thanks, it was really useful. When I save my video and run the last cell, it takes almost 1 hour to complete, even though the video that I diffused (the output video) is almost 1 second. I don't really know what is wrong.
When I hit "run all" it can't get past the "1.4 Install and import dependencies" section; it says it's missing some modules (timm, lpips). Been scouring discord and see others with this problem but no solutions. I'm using Colab Pro remotely on a Mac.
@MDMZ
1 year ago
did you try re-running? or using a different version ?
@MikeBishoptv
1 year ago
@@MDMZ yeah I fixed it by downloading the latest version and not the one in your tutorial
@MDMZ
1 year ago
@@MikeBishoptv cool !
Can you do a tutorial for Deforum Stable Diffusion for google colab Because my installed version is not working
@MDMZ
1 year ago
will look into it
this is probably the most complicated AI program I've used by far. So many errors you can't find a fix for online, and confusing settings you have to learn on your own because nobody has a full settings explanation for it. It took me almost 300 renders to understand what most settings do, but I feel like it's all going to be worth it once I get it all down.
@MDMZ
6 months ago
it's definitely challenging and can be frustrating at times, keep an eye on updates, newer notebooks are much more stable
@koa8299
6 months ago
@@MDMZ lol turns out all i needed to do was tweak was the controlnet settings to get the output i desire. i had no clue consistency and controlnet correlated with eachother
Why does my Colab keep reconnecting? When I reconnect, all my settings go back to defaults and I can't get back to the first run I made.
Do you have the local tutorial?
Is A1111 stable diffusion capable of this output?
@MDMZ
1 year ago
technically yes, but warpfusion is way way easier
Will it be on mobile?
Is there anyway to create videos like this on an iphone?
Which one do you prefer? This Warpfusion, or Stable Diffusion with its Auto1111 interface? I tried this with stable diffusion, got similar results and, what's most important, it's free.
@MDMZ
1 year ago
I find this more consistent, perhaps I need to play around with A1111 a bit more
@BeetjeVreemd
1 year ago
What do you need exactly to make these kind of videos for free in Stable Diffusion?
@SultanHz
1 year ago
@@BeetjeVreemd did you find out how
@BeetjeVreemd
1 year ago
@@SultanHz Unfortunately no i didn't :(
@kubagacek7352
1 year ago
@@BeetjeVreemd did you find out by now ?
Hi does this work on MAC M2 chip?
Can you model a specific image instead of copying known ones like the Statue of Liberty? I want to make an image of myself dance, for example?
@MDMZ
1 year ago
in the example of using your own image, you will probably need to train a model first using your images, there are plenty of tutorials on how to do that on youtube
1.4 import dependencies, define functions Runtime error
Anyone know of a free alternative to Warpfusion
Can you please discuss about some "free ai site" for video
@MDMZ
1 year ago
sure
On average how much does it cost to make a 30 second video? Supposing it's 1080 vertical and you use the online processing option
@MDMZ
11 months ago
very difficult to predict
How can I pause the process, turn off my laptop, and continue later from the last frame generated?
@MDMZ
1 year ago
try using the resume_run feature
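[Editor's note] The idea behind a resume feature like this can be sketched as follows. This is a guess at the mechanism, not Warpfusion's actual internals: the `next_frame_index` helper and the `frame_000123.png` naming scheme are assumptions for illustration. The run scans its output folder for frames already rendered and continues from the highest index instead of starting over:

```python
import os
import re


def next_frame_index(out_dir):
    """Return the frame index to resume from, assuming frames are saved
    with names like frame_000123.png in out_dir."""
    pattern = re.compile(r"frame_(\d+)\.png$")
    indices = []
    for name in os.listdir(out_dir):
        m = pattern.search(name)
        if m:
            indices.append(int(m.group(1)))
    # Resume one past the last completed frame; start at 0 for a fresh run.
    return max(indices) + 1 if indices else 0
```

With a layout like this, shutting the laptop down mid-run only costs the frame that was in flight; the next session picks up where the folder left off.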
I am having issues connecting Google Colab to local host... I have posted in discord about the issue
@goldalemanha6330
1 year ago
Is it possible to do this on your cell phone or do you need a computer?
After getting any error or server disconnection, is there a way to continue from the latest frame without running all the process again?
@MDMZ
10 months ago
You can use the resume_run feature
Can the generated video be used commercially
hello, I followed your video step by step up to launching all the scripts, but an error is displayed at optical map settings: NameError: name 'os' is not defined. Can you help? (I have already tried 3 times, still the same, and I used Warpfusion 0.16.)
@MDMZ
11 months ago
hi, check the pinned comment
@Deviiiiiilllll
11 months ago
I still have to pay another subscription to make warpfusion work?
Does anyone know, can this be done using another image as reference instead of a text prompt?
@MDMZ
7 months ago
I believe it's possible now with IPadapter
First time, please help. I got an error at 1.2 PyTorch: 'No such file or directory: nvidia-smi'. Followed the entire tutorial with no luck. None of them mention switching the notebook's Hardware accelerator setting from None to GPU; I have no idea if I'm supposed to do that, but that's the only way I can get the error to go away and keep the runtime going past 1.2. However, with the GPU setting it finishes down to the GUI cell, then disconnects my runtime and won't reconnect. I then switched the notebook setting back to None and it connected to the runtime, but now I am back at square one with the 1.2 PyTorch nvidia-smi error. Please help!
@MDMZ
1 year ago
hi, check the pinned comment
Does the AI have the capability of animating a drawing that I created (do I need to create the same subject in several angles?), and applying that drawing to a video, dance, walk or jumping video clip?
@MDMZ
4 months ago
you can try image to video, I have a video on that
Do you need CUDA and Visual Studio installed to run this locally on Win 10
@MDMZ
1 year ago
you can follow the installation guide, the pre-required tools are listed there
Hello dear sir, can I do it with Mac studio?
@MDMZ
1 year ago
Yes, you can! this works on the cloud so your computer's brand/model is irrelevant 😊😉
@bigdaddysho962
1 year ago
@@MDMZ Thank you very much, stay healthy🙌
So, after trying a few times and getting all types of different errors, I realized that the problem was not within my settings but with the unstable free GPU provided. Once I signed up for Colab Pro, I ran the same notebook and it worked.
@MDMZ
1 year ago
glad it worked
which runtime should i use on colab? T4 or V100
@MDMZ
8 months ago
I recommend you try both; one will cost you more than the other, but you get more speed
Getting an error msg failing at the Load a Stable tab saying; ModuleNotFoundError: No module named 'jsonmerge'. Even after getting a fresh install file and manually installing jsonmerge using pip install jsonmerge. Anyone else had this issue and managed to solve it?
@MDMZ
9 months ago
hey, please visit Alex's discord for technical support, link in the description
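[Editor's note] A ModuleNotFoundError after a seemingly successful `pip install` often means pip installed the package for a different Python than the one the notebook kernel runs on. A hedged, generic check from inside a notebook cell (the `ensure_module` helper is illustrative; `jsonmerge` is the module named in the error above):

```python
import importlib.util
import subprocess
import sys


def ensure_module(name):
    """Install a module with the *same* interpreter the kernel runs on, if missing."""
    if importlib.util.find_spec(name) is None:
        # Using sys.executable avoids the classic mismatch where a bare
        # `pip install jsonmerge` targets one Python while the notebook
        # imports from another.
        subprocess.check_call([sys.executable, "-m", "pip", "install", name])
    return importlib.util.find_spec(name) is not None
```

For example, `ensure_module("jsonmerge")` would install it into the kernel's environment only when the import would otherwise fail.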
Are subscription members allowed unlimited generation?
is it not possible to do the same with stable diffusion?
@MDMZ
1 year ago
warpfusion results are much more consistent
So, do I have to pay on Patreon to have access to Warpfusion online? I didn't understand how to access it. Can I buy it? I can't run it on my PC; I have a poor 3070.
@MDMZ
1 year ago
you dont need your local GPU for this method
I can't do it because Google Colab disconnects all the time at the 5th or 6th step, so I have to start again. Is there any way to solve that?
@MDMZ
10 months ago
try using the latest version of warpfusion
will this work on a Mac m1?
@MDMZ
1 year ago
this is the online method, it should work, I suggest you try it out u have nothing to lose
Hi, I used this tutorial and I have a question: why is my video only 4 seconds at the end if I uploaded a 16-second video? Did I do something wrong? I'm new to AI :(
@MDMZ
10 months ago
probably, check the step at 7:36 and make sure you set the right frame range, [0,0] to process all frames
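[Editor's note] The [start, end] frame-range convention mentioned in the reply above, where an end of 0 means "through the last frame", can be sketched like this. The `resolve_frame_range` helper is a guess at the semantics for illustration, not Warpfusion's exact code:

```python
def resolve_frame_range(frame_range, total_frames):
    """Turn a [start, end] setting into concrete frame indices.

    An end of 0 is treated as 'all the way to the last frame'."""
    start, end = frame_range
    if end == 0:
        end = total_frames
    return list(range(start, min(end, total_frames)))


# A 16-second clip at 30 fps has 480 frames. [0, 0] covers all of them,
# while an accidental [0, 120] would render only the first 4 seconds.
print(len(resolve_frame_range([0, 0], 480)))    # all frames
print(len(resolve_frame_range([0, 120], 480)))  # first 4 seconds only
```

This matches the symptom in the question above: a range capped at a low end value yields a short output even from a long input clip.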
🎉🎉
I have an error that says 'os' is not defined, how do I fix it? TIA
Can this also work with still images or is it only video to video?
@MDMZ
10 months ago
for images i suggest you use stable diffusion on A1111, it's free and easier to use
Please bring a mobile option. I don't have a PC and I wanted to do this on my phone 😢
is there any free alternative?
what about how to install it on a PC (Auto1111), not Google Colab?
@MDMZ
1 year ago
I have another tutorial on A1111, but this method works better in many scenarios
@valideliyev8243
1 year ago
@@MDMZ no, I need to know how to make this effect on my PC, not in Google Colab.
I followed the video step by step, but I generated a video of 4 seconds. Any tips on how to get a longer video??
@MDMZ
1 year ago
did you change your end frame from 0 to another number ?
Will it also work when using a Macbook?
@MDMZ
11 months ago
i suggest you try, cause this is the cloud method
I tried to link my video after I uploaded the file but I get "FileNotFoundError: [WinError 2] The system cannot find the file specified: '/FILENMAME'". I linked it just like you did in the video. Any help is appreciated!
@MDMZ
10 months ago
can you try the process from scratch? it might be referring to another setup file
@stevopatiz
10 months ago
@@MDMZ I've uninstalled and reinstalled everything the local guide said to install. It seems it has trouble finding the video? I put everything in the same folder.
Hey! my run crashed at line 4:
controlnet_multimodel = get_value('controlnet_multimodel',guis)
NameError: name 'get_value' is not defined
Could you help?
@MDMZ
1 year ago
hi, check the description
there is an error, "NameError: name 'get_value' is not defined". How do I fix this? Please help!
@MDMZ
1 year ago
hi, check the pinned comment for technical support
@mdmz I guess I know the answer because of the GPU, but can I somehow use this with my Surface Pro? And does anybody maybe have an alternative app or program?
@MDMZ
1 year ago
this runs online, your hardware doesn't matter here
@blurise
1 year ago
@MDMZ so why is everybody in the comments talking about the hardware and how long rendering takes?
@MDMZ
1 year ago
@@blurise cause people nowadays don't even bother watching, the info is literally in the video
@blurise
1 year ago
@@MDMZ okay you got me 🥲
@MDMZ
1 year ago
@@blurise 🤣
hi there, will a 4070 Ti with 12GB VRAM work for local runtime?
@MDMZ
1 year ago
yep should work fine
@jaknowsss
1 year ago
@@MDMZ do you think 4070ti 12gb is faster than the one with the colab plan?
@MDMZ
1 year ago
@@jaknowsss I'm not sure 😅, anything stopping you from trying it out ?
@MDMZ
1 year ago
I suggest you try it locally first since u have 12gb, before paying for colab pro
Hi! Does this work with the stable_warpfusion_v0_14_14.ipynb version?
@MDMZ
1 year ago
it should; you can always move on to the newest version, settings shouldn't be much different