TEMPORALKIT - BEST EXTENSION THAT COMBINES THE POWER OF SD AND EBSYNTH!
Entertainment
This is a tutorial on how to install and use TemporalKit for Stable Diffusion Automatic 1111. This extension uses Stable Diffusion and Ebsynth.
HOW TO SUPPORT MY CHANNEL
-Support me by joining my Patreon: / enigmatic_e
_________________________________________________________________________
SOCIAL MEDIA
-Join my discord: / discord
-Instagram: / enigmatic_e
-Tik Tok: / enigmatic_e
-Twitter: / 8bit_e
- Business Contact: esolomedia@gmail.com
_________________________________________________________________________
TemporalKit: github.com/CiaraStrawberry/Te...
7-zip: 7-zip.org/download.html
Ciara: / ciararowles1
Ciara Tutorial: • TemporalKit + Ebsynth ...
Tokyojab: / tokyojab
TroubleChute: • How To: Download+Insta...
Install SD
• Installing Stable Diff...
Install ControlNet
• New Stable Diffusion E...
Chapters
0:00 Intro
0:47 What is TemporalKit?
1:59 Installing TemporalKit
05:17 Settings
10:51 IMG2IMG
14:17 Exporting
15:05 Ebsynth
16:58 Experimenting
18:48 Longer Videos
Comments: 261
Thanks for all your hard work making these tutorials - always excited to see your vids when they come out!
I've been waiting so long for a solution that makes this process a bit easier and more reliable. Thanks for sharing, man! 🙏😊
Since you posted the news on Twitter I have been checking here frequently for this tutorial video. Finally we got the consistency we were looking for 🙂 Thanks, mate 🙏
As always, an amazing tutorial! Thanks for your good work
For anyone wondering why the Temporal-Kit tab isn't showing up in the web UI: you have to install moviepy, too. I just had the issue; after installing moviepy, everything worked fine.
@pastuh
A year ago
I think it's auto-installed after you relaunch the cmd window.
@cyril1111
A year ago
Thanks dude, I had the issue too!
@daffertube
A year ago
I just had to close and restart the CMD window.
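If the tab still doesn't appear after restarting, a quick way to check whether moviepy is actually visible to the webui's Python is a one-liner like this (a minimal sketch; run it with the same venv that launches A1111):

```python
import importlib.util

def has_moviepy() -> bool:
    """Return True if moviepy can be imported in this environment."""
    return importlib.util.find_spec("moviepy") is not None

if __name__ == "__main__":
    if has_moviepy():
        print("moviepy found -- the Temporal-Kit tab should be able to load")
    else:
        print("moviepy missing -- run `pip install moviepy` inside the webui venv, then restart")
```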
The video I was looking for, thank you!!
I appreciate that you include the keyboard shortcut tidbits (and the tutorial in general)
Great video! If you select a part of your prompt, hold Ctrl, and press the up or down arrow, you can directly change the weights of the keywords in your prompt :)
@enigmatic_e
A year ago
🤯 I gotta try that!
Did not expect to see myself. Thanks for the shout-out! Busy messing around with vid2vid; everything I've used so far brings way too much of the original video through (SD-CN-Animation) or creates something incredibly flickery. Busy following this guide :)
@enigmatic_e
10 months ago
Hey!! Thanks for the tut, it helped me out so much. Hopefully this tut is helpful for you. I may need to update it since a lot of people say they’ve recently had a lot of issues.
always fun watch your tuts! ❤❤😍😍
Love you man! Funny, interesting, very helpful!
great video, very thorough and helpful, thanks!
So cool! Thanks for the great tutorial.
I wonder if it's possible to increase the consistency between frame groups by including previously generated frames, masked, in the img2img step. For example, let's say we extract 2x2 groups of frames from the video and include the 2 previous frames (numbers are frame indices, "*" means a stylized frame):
1) We stylize the first group:
1 2 => *1 *2
3 4 => *3 *4
2) Append the two last frames to the next group:
*3 *4 => *3 *4
5 6 => *5 *6
7 8 => *7 *8
3) Repeat for each group:
*7 *8 => *7 *8
9 10 => *9 *10
11 12 => *11 *12
In the end we end up with one 2x2 group and several 3x2 groups, which are (hopefully) more temporally coherent than regular 2x2 groups. I would like to try this myself, but my PC is a potato that can barely handle 768x768 generation, and you obviously need a lot more power to do this trick with several ControlNets :(
@aitz3vil
A year ago
I think you're just talking about the "Border Key Frame" option, right?
@heyitsjoshd
A year ago
Yes, this works. This is how the multi-frame rendering A1111 script works. One thing to add: you should regenerate the initial frames with this process too.
@mik3lang3lo
A year ago
Great reasoning. I would add that if you prepare a LoRA of your character, you will get great consistency there too.
@strangelaw6384
8 months ago
If this works, I wouldn't know why. Running through img2img removes style information from the original image. The style before and after stylizing should not be dependent unless you're using a low denoise... which goes against your original intention of stylizing the image. On the other hand, I think you can use the same extra noise for every img2img pass to further improve consistency.
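The overlap idea in this thread comes down to index bookkeeping. A sketch (a hypothetical helper, not part of TemporalKit; `group=4` models a 2x2 grid and `carry=2` the two re-included stylized frames):

```python
def overlapped_groups(n_frames: int, group: int = 4, carry: int = 2):
    """Split frame indices 0..n_frames-1 into img2img batches where each
    batch after the first re-includes the last `carry` already-stylized
    frames, so consecutive grids share anchor frames."""
    groups, prev_tail, i = [], [], 0
    while i < n_frames:
        fresh = list(range(i, min(i + group, n_frames)))
        groups.append(prev_tail + fresh)
        prev_tail = fresh[-carry:]
        i += group
    return groups

# 12 frames -> one 2x2 group, then 3x2 groups sharing two frames each:
# [[0, 1, 2, 3], [2, 3, 4, 5, 6, 7], [6, 7, 8, 9, 10, 11]]
```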
Nice tutorial! Thanx!!
@enigmatic_e
A year ago
Thanks for checking it out!
Thank you for this! I literally started an img2img batch last night for a video and woke up to this new TemporalKit. I was having the same issue last night with ControlNet producing the canny and OpenPose images, throwing my frame numbering off. Anyway, that batch is now obsolete thanks to TemporalKit. The power of AI, evolving so fast! If you find out how to stop the depth output from ControlNet, please let me know, and thanks again for this tutorial!
awesome video 🙂
Dude ! Awesome explaining.. :P
@enigmatic_e
A year ago
Thank you!
great video
I will confirm this, but if you use a good denoiser afterwards, the software will interpret these variations as noise and it will improve a lot. The DaVinci Resolve deflicker will polish it a lot as well :)
@enigmatic_e
A year ago
Yes! Please let me know, I would love to find a solution to this.
@matbeedotcom
A year ago
Interesting; it could introduce blurriness, but with some of these new AI implementations of sharpening and upscaling that could be a moot point.
Yess love
Hey chef, thanks for the very detailed, awesome video. Can we use inpaint also? On the batch tab there is a field under the in/out directories named "Inpaint batch mask directory (required for inpaint batch processing only)".
Try putting more frames in the spritesheet, but then use the ControlNet tile upscaler to make it huge and stylize in one step. Is there a limit to the size Ebsynth can work with? Even if the final video is still smallish, you could upscale it at the end with some other software. GLHF 😜
Suggestion for higher quality: there's an extension called Tiled VAE or something similar that lets you generate high-res images by splitting the picture into tiles. Haven't tried it with this method, but it could help.
@Chrono..
A year ago
It is actually a new model within the ControlNet 1.1 extension. It was released about a week ago and, with the help of the Ultimate SD Upscaler, can make an image not only have a much higher resolution but also much more detail.
@The_Art_of_AI_888
A year ago
@@Chrono.. can it keep up with the consistency, or do the details change with every picture?
@merodiro
A year ago
@@Chrono.. These are different things. Tiled VAE can work similarly to the Ultimate SD Upscaler with ControlNet, but it can also work independently and lets you generate images at a higher resolution than your card usually allows without getting an OOM error.
@Chrono..
A year ago
@@merodiro So you're saying the sole purpose of tiles is to enable upscaling on cards with less VRAM? That is, if I have a good video card, tiles aren't necessary?
@ViensVite
A year ago
@@Chrono.. Tiles don't mean your graphics card is bad; it's just splitting, say, a mountain into pieces. You still go faster with a better card; it actually improves render times by a lot, at least in 3D renders :)
Thanks for the tutorial. I tried it but got really weird frames in the "frames" folder (all weird, gray, and pixelated). What am I doing wrong?
A perfect tutorial!!
There's a lot that could be improved here:
1. In ffmpeg you can extract a keyframe per scene instead of by frame count (fewer keyframes = faster process).
2. You can use a photo enhancer to enhance low-quality grids (so you can fit more tiles = more consistency).
3. Lastly, enhance the video with another AI for quality and fps.
4. Yeah, it's tedious, but the result is nice.
@david_ce
A year ago
Could you please name software that can be used for each of these steps? I'm new to this and it would help greatly.
@RiiahTV
11 months ago
LETS SEE YOUR RESULTS BRO WHERE CAN I FIND?
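For step 1 above, ffmpeg's `select` filter with a scene-change score can do the per-scene extraction. A small sketch that just builds the command (filenames and the 0.3 threshold are example values):

```python
def scene_keyframe_cmd(src: str, out_dir: str, threshold: float = 0.3) -> list[str]:
    """Build an ffmpeg command that writes one PNG per detected scene change.
    A lower threshold keeps more keyframes; a higher one keeps fewer."""
    return [
        "ffmpeg", "-i", src,
        "-vf", f"select='gt(scene,{threshold})'",
        "-vsync", "vfr",                 # only emit the selected frames
        f"{out_dir}/%04d.png",
    ]

# Usage: subprocess.run(scene_keyframe_cmd("input.mp4", "keys"), check=True)
```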
Great tool. It's like Ebsynth Utility, but without masks, and it's a bit faster.
Interesting. It looks like I was running into issues because of the dimensions of what I was working with, that and using too many images in a grid. I was pretty sure I had my dimensions matched up correctly, but I'm wondering if I need to start with a 512x768 or square grid for this method. I know Stable Diffusion does some weirdness if you don't follow those ratios.
@strangelaw6384
8 months ago
Typically, you want 512x512 for SD 2.0 or below. Bigger should work fine. A different aspect ratio should work fine until it exceeds around 1.3. HOWEVER, if you're running tiled images, none of what I said is guaranteed to apply anymore.
Great tutorial! To solve the quality problem, wouldn't it be possible to take the grid of images, upscale them individually with low denoise, and then run them through Temporal Kit? Just an idea. Great video anyway!
@enigmatic_e
A year ago
Yea, there has to be a method to make the quality better. I'll be messing around with it more.
I finally had the eureka moment at 0:50: so THAT'S how it works. I totally didn't understand why it compiles a grid, but then I realized the seed and diffusion work in the same pass, so the output within each grid will be extremely consistent.
@matbeedotcom
A year ago
The initial noise size, from what I understand, is 64x64, and then the area (512x512, etc.) is filled with the noise/tensor shape.
I have the same problem on my output with the ControlNet preview images. How do you solve this?
Very cool
I did everything like you, but ControlNet does not work with a batch; that is, it does 1 frame as it should and then does not pick the next one in the list. This only applies to ControlNet. How do I fix it?
When I recombine, I get a blank crossfade video file. Everything works up to that point, and Ebsynth made all the frames and folders. The input video looks good. I made sure Automatic1111, ffmpeg, and Ebsynth were all up to date. Any ideas?
Does anyone know if it's possible to install FFmpeg on RunPod? I've downloaded TemporalKit, and everything works except the output looks like a TV satellite losing signal. I'm assuming it's because I didn't install FFmpeg correctly.
When I dragged a 16:9 video into TemporalKit, the video covered the text in the UI entirely, making it impossible to change any settings. Is this a bug with 16:9 videos in TemporalKit?
Could you use the many frames in a plate but then upscale?
Good video. However, when I run an img2img batch it makes only one image, and I get this error: IndexError: list index out of range. Any idea how to fix it? I tried limiting input frames to 20 or less, enabling split video, and setting the sides or keyframes lower, and none of it worked.
Edit: when disabling ControlNet it works, but that kind of defeats what I wanted to do. Now I don't get the desired results; it also doesn't come out in a 2x2 grid anymore, so I get inconsistent frames, as a grid input is seen as one image.
I wanted to test it anyway and tried putting the frames into Ebsynth, but the window goes off screen at the bottom and there's no way to scroll, so I can't run it.
Hi! Does anyone know if it's possible to use this with a Google Colab notebook? I have to use Google Colab Pro since my GPU doesn't meet the required 16 GB VRAM for Stable Warpfusion.
My Ebsynth is not loading the keyframes when I drag in the keys folder. Any idea?
Thank you for the amazing tutorial! Everything is working, but for some reason after recombine it shows a very low quality mp4 file (500 KB file size), while the separate shots in the output folders have decent quality. How do I fix this?
This is very interesting; however, I really hope Ebsynth evolves to support more keyframes. This can become tedious very fast for anything beyond 3-5 seconds.
Hmm, what if we used Topaz on the grid beforehand, and then again on the output afterwards?
@enigmatic_e
A year ago
I haven't tried that, but it would be interesting to test.
Hi! Love the videos. Ebsynth is not working for me; I get an error, something about a missing file 0001 or something like that.
Hello, I tried to install TemporalKit and now I get an error and my A1111 no longer opens. I've tried and tried and can't find the solution to this error: ModuleNotFoundError: No module named 'tqdm.auto'
An additional step that could help with consistent img2img: turn ON "Apply color correction to img2img results to match original colors."
@pastuh
A year ago
I would skip this. If you want to transform a forest into a hell forest, it will be impossible; everything will stay green.
Why did my "Run All" button in Ebsynth disappear when I dragged in the keys?
When I send the input file (2x2) to img2img, it generates it as a "single person". I did not use "1boy", "1girl", or "solo", but it still doesn't generate a 2x2 for me.
I'm getting issues when batch processing img2img from the sequence created by the preprocessing stage of Temporal Kit. The sequence defaults to 0and0.png, then 1and0.png, and it looks like batch expects a normal file sequence like name_001.png. So when I run the img2img batch it skips certain frames. Has anyone figured out how to fix this, or a temporary workaround?
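One workaround (a hypothetical script, not part of TemporalKit) is to renumber the `0and0.png`-style files into the zero-padded sequence img2img batch expects, sorting numerically so frame 10 doesn't land before frame 2. It assumes every filename follows the `<n>and<m>.png` pattern; try it on a copy of the folder first:

```python
from pathlib import Path

def renumber_frames(folder: str) -> None:
    """Rename files like '0and0.png', '1and0.png', ... to 0001.png, 0002.png, ...
    preserving numeric frame order (10 sorts after 9, not after 1)."""
    files = sorted(
        Path(folder).glob("*.png"),
        # '10and0' -> [10, 0], so sorting is numeric, not alphabetical
        key=lambda p: [int(t) for t in p.stem.split("and")],
    )
    for i, f in enumerate(files, start=1):
        f.rename(f.with_name(f"{i:04d}{f.suffix}"))
```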
Why is my Ebsynth box too big? I can't make it smaller, and I can't see the Run All button.
I have a problem that I can't find a way to fix on the internet: my TemporalKit extension tab doesn't show up. Please help me, I'm losing my mind.
It's amazing! I've been stuck for the past 40 days. It looks like a lot has happened, but not enough on video consistency using Stable Diffusion. What GPU are you using? I've got an RTX 3060 12 GB but struggle with the limited VRAM. I want to add another RTX 3060 12 GB but don't know if it will work; any advice? Also, my video clips are between 4-8 seconds, with the longest around 20 seconds. Have you been successful over a longer duration? My idea is to do a full style transfer with a very high CFG scale, between 0.6 and 1. I was able to get reasonable coherence, not as good as these short clips of yours, but I was able to maintain consistency over 20 seconds. I did it about 2 months ago, so a lot has changed.
@enigmatic_e
A year ago
I currently use an RTX 3080 10 GB and it's pretty good. I do run into issues when I start adding ControlNets and raising the resolution.
@ekke7995
A year ago
@@enigmatic_e I'm excited to test Temporal Kit, but even before starting I can already tell the frustration ahead. I'd say the problems I run into are less about A1111 and more about the AMD system. I use a Ryzen 7 CPU and motherboard, so there are always problems with CUDA, drivers, and FFmpeg.
Hey man, thanks for the video!! Do you know why Ebsynth is saying my GPU is unavailable under the advanced tab? I'm running an Ubuntu cloud machine with an A100, so the GPU shouldn't be an issue.
@enigmatic_e
A year ago
Oh man, I don't know. I have no experience with cloud machines. Sorry.
Wonderful videos!! Thank you for the great stuff. I'm having the same issue as another commenter: my frames are all the same when I hit Run. Any idea?
@jbiziou
A year ago
Got it to work. I had to make sure my video was mp4 and at 24 fps; it did not like 23.976.
@enigmatic_e
A year ago
Good to know.
You can set the sides dimension to 1 instead of 2, 3, etc.
That was a really great tutorial! The only thing I would say is that for a lot of the things I see people do with this, like a toon style, it's just easier to fire up After Effects and apply some filter. I wish you had succeeded in making it a robot; now that is not something you can filter in After Effects...
Great video! Will there be a part 2 for longer videos explaining the split video setting?
@enigmatic_e
A year ago
I explain it at the end of the video, around 18:48.
@envoy9b9
A year ago
@@enigmatic_e ty ty
Success with your tutorial 🥳. Maybe if the quality is bad we can enhance the video. Thank you.
@My123Tutorials
A year ago
DaVinci Resolve Studio 18.5 Beta now has an AI video upscaling feature. You have to own the Studio version, but it's worth it anyway if you create video content regularly.
@ganemonster
A year ago
@@My123Tutorials thanks 🙏
Hi, do you think Ebsynth will be able to do more than 20 keyframes? For now it's a mess to do a long video manually 😅
@enigmatic_e
A year ago
I hope so. But there is the workaround that I mention at the end of the video.
I'm running into the issue where it says "Missing frame 0001". Anyone know of a fix? I tried to rename 2 to 1; that did not work and only created one out folder with 1 image. I also copied 2 and renamed it, but still no luck.
@KratomSyndicate
A year ago
Same issue. I got all the way through this video and then it was like, "oh yeah, download Ebsynth". I did it exactly as shown and none of the images populate in Ebsynth; clicking any button just gives an error that 0001.png etc. was not found, even though the folder and file location are good and the filename is correct.
Can you please help fix the problem here? Ebsynth is not working for me: it's not showing the keyframes. The directory shows in the frames tab and the keys tab, but not in the project directory.
@chrisbraeuer9476
A year ago
same here..
When I put my input image of 4 frames into img2img, it generates only one image, not 4. How can I make it generate 4 results?
@erende44
A year ago
controlnet canny
[HELP] I use TemporalKit, and when I reach the "Ebsynth_process" step after pressing "Prepare Ebsynth", I don't see any files in the keys folder; it's completely empty. What could be the issue?
@FirdausHayate
10 months ago
I tried renaming the images in the output folder (the "0and0" ones); it worked for me.
Hi all! I have a problem. When I put my video (25 fps) in INPUT and hit Run, I keep getting an error. Please tell me how to solve this.
@Dynamicgreasemonkey
A year ago
me too. :( someone please help
I have been trying to install Temporal Kit in Stable Diffusion, but when I install and update in the browser I get the tqdm error and can no longer run Stable Diffusion unless I delete Temporal Kit from my extensions folder and delete the venv folder completely. Does anyone have a solution, or know why this is happening? I can see online that I am not the only one who has had this issue.
@VeiledVerities
8 months ago
I have the same issue
For some reason, no keys get generated for my Temporal Kit. Did I miss a step somewhere? I have the frames and I have the output.
@chrisbraeuer9476
A year ago
me too...
I don't know why my first keyframe is called keys003, and because of it I get a "keys0001 missing" error in Ebsynth.
It would be great if I could prepare a prompt for each frame that will be generated using the style. Right now it looks like you need to go one by one.
When I drag the keys folder into Ebsynth, it won't automatically set batches for me, and the number of keyframes is less than 20. What's wrong?
@enigmatic_e
A year ago
I would just check whether there's an update that might fix the issue. Other than that, I'm just not sure why it's not working for you, sorry.
@Han-ds9yy
A year ago
@@enigmatic_e Well, thank you for your reply anyway
@erende44
A year ago
Select "split video" in the Ebsynth settings tab (Temporal Kit).
You can always upscale the low quality image, can you not?
@redot9914
A year ago
U can
@Jarod45
11 months ago
Yeah, it doesn't always work that well though
You gotta increase the strength of your prompt. So instead of just cyberpunk robot, put (cyberpunk robot) to give it more strength. Or even stronger like (((cyberpunk robot))). The strongest you can go I think is (((cyberpunk robot:1.5))). Anyway, nice video. I'm gonna go have fun with Temporal Kit now. 😃
Maybe you can make it as consistent as possible and afterwards upscale the result with an AI tool like Topaz Labs?
@enigmatic_e
A year ago
Mmm not sure, haven’t tried it
I'm getting an error: video_stream = next((stream for stream in video_info['streams'] if stream['codec_type'] == 'video'), None) raises TypeError: 'NoneType' object is not subscriptable. Can anyone help?
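That traceback means `video_info` (or its `'streams'` entry) came back as None, which usually happens when ffprobe isn't on PATH or couldn't read the file at all. A defensive sketch of the same lookup (hypothetical helper name, not TemporalKit's code):

```python
def first_video_stream(video_info):
    """Return the first video stream from ffprobe's parsed JSON, or None,
    instead of crashing with "'NoneType' object is not subscriptable"."""
    if not video_info or not video_info.get("streams"):
        return None  # ffprobe missing, unreadable file, or no streams at all
    return next(
        (s for s in video_info["streams"] if s.get("codec_type") == "video"),
        None,
    )
```

If this returns None for a file you know is a video, check that `ffprobe` runs from the same terminal that launches the webui.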
15:50, you lost me... are you hitting a button here? It doesn't create outputs for me. Edit: for some reason the first keyframe was named 0002; it needed to be renamed to 0001 before the Synth button would start processing. Edit 2: the list of keyframes isn't showing up. You jump-cut and say "it creates all these outputs", but that doesn't happen on my end and I can't see how you did it.
@DorothyJeanThompson
A year ago
did you figure it out? I'm having the same issue :(
@likeyouknowwhatever2811
A year ago
having same issue as well. no output folders
Oh well, this sucks... Stable Diffusion was working fine; I installed TemporalKit, restarted, and I get the error ImportError: cannot import name 'auto' from 'tqdm'. I don't have a clue how to fix it, so I guess I'll now have to delete and reinstall everything.
What happened between 15:57 and 15:58? My Ebsynth says the naming is off. How do I fix that?
@enigmatic_e
A year ago
Did you make sure to click on ebsynth mode and batch?
@MondoMurderface
A year ago
@@enigmatic_e Yes, what did you skip?
what about for mac?
Can this be used on mac?
Need your negative prompts as my default 😅
tyvm for the video :D luv it, but what about this error when trying to run? I have webUI version v1.2.1, python 3.10.6, torch 2.0.1+cu118, xformers N/A, gradio 3.29.0, checkpoint cf489251a5. Temporal Kit error: line 86, in __init__, '-r', '%.02f' % fps, TypeError: must be real number, not NoneType
Nice! Does it work in Google Colab?
After installing and reloading the UI, I had to close the command prompt and relaunch the bat file to get the Temporal-Kit tab to show. It's there now.
@enigmatic_e
A year ago
👍🏽
@kianma8381
11 months ago
I did the same, and even restarted my computer, but it still doesn't show up...
Hey, does anyone know why Ebsynth won't list the outputs automatically? Or is that supposed to be a manual process? I have a lot of keyframes; it would take loads of time to set the ranges.
@DanielSimon-em2pe
A year ago
at 16 mins
@enigmatic_e
A year ago
@@DanielSimon-em2pe did you make sure to click split video?
@kartikashri
11 months ago
Same happened to me. Did you figure out how to automate it?
@DanielSimon-em2pe
11 months ago
Thank you for the reply @@enigmatic_e! I combed through the Ebsynth buttons, but I cannot find 'split'. I have the sequences in the right place and everything is in order. There is a cut before you say 'then it's gonna create all the outputs' and I lose track, because manually filling in all that info is not optimal. I left this workflow, but I'm getting good results with the TemporalNet ControlNet model combined with one or two other ControlNet units.
The Shao Khan laugh lol
@enigmatic_e
A year ago
so happy someone caught that! 😂
There are so many extensions now for Auto1111; how do you decide which ones you need? I tried loading them all and it bogged everything down.
@enigmatic_e
A year ago
yea i think you just have to choose the ones that make sense for the kind of stuff you want to create.
@matbeedotcom
A year ago
You can use vladmandic's fork of A1111 and disable checking for updates, etc.; it should speed up your launch time.
Anyone figure out what was causing the ControlNet images to be saved as well?
@kewk
A year ago
Never mind, I figured it out: in the A1111 settings, go to ControlNet and check "Do not append detectmap to output".
@ppn7
A year ago
@@kewk Thank you, you saved me from trouble!
For some reason Ebsynth isn't creating the keyframes (15:56). What am I doing wrong?
@m3dia_offline
A year ago
Facing the same issue here as well, and once I run all, it says the keys do not start from 0001 :/
@user-yj3mf1dk7b
11 months ago
Probably too many files; try deleting some. I've been using Ebsynth Utility; it splits everything into groups of at most 18 frames.
can anybody make a long video and share the result?
Anyone know what the other tabs in TemporalKit do (the warp tabs)? Also, does TemporalKit use TemporalNet at some point? I was wondering if using 1 side has any effect other than preparing for Ebsynth. I'm having issues when the faces are further away and I can't raise the resolution of the grids enough (VRAM).
I don't know if anyone else is getting this error, but every time I click "Run", after it's done, the frames in the "input" folder are exactly the same; it ignores the rest for some reason. And inside the target folder there's an input_video.mp4 which is a video of the same frame, as if it were frozen.
@jbiziou
A year ago
Got it to work. I had to make sure my video was mp4 and at 24 fps; it did not like 23.976.
@camprey
A year ago
@@jbiziou darn it, that's probably it. Mine was 23.976 as well. Thank you!
@cynbot7814
A year ago
Thanks! It also has the same problem with 29.97 fps.
@simonbronson
A year ago
cheers - 24fps works!
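Re-encoding to an exact 24 fps before feeding TemporalKit is a one-line ffmpeg job. A sketch that just builds the command (filenames are examples, and libx264/yuv420p are merely safe defaults, not something this thread prescribes):

```python
def to_24fps_cmd(src: str, dst: str) -> list[str]:
    """Build an ffmpeg command that re-encodes a 23.976/29.97 fps clip
    to an exact 24 fps H.264 mp4."""
    return [
        "ffmpeg", "-i", src,
        "-r", "24",                        # force constant 24 fps output
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        dst,
    ]

# Usage: subprocess.run(to_24fps_cmd("input_23976.mov", "input_24.mp4"), check=True)
```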
Can I ask you what gpu you use?
@enigmatic_e
A year ago
RTX 3080 10 GB
My brother, why does my TemporalKit never stop processing? I can't find the error. Maybe you can help me :) Thanks in advance, brother. More power and God bless!!
Respect for suggesting other channels. That's the way: people pulling each other up. Other channels will never mention another channel and will delete any reference to other channels in the comments. Insecure, and a long-term fail.
@iamYork_
A year ago
Yeah, enigmatic is one of the good ones... One of the main reasons I stopped making tutorials was that so many other channels were repackaging my techniques and taking credit for them... Digital copycats... Not much can be done, but I'm always happy to see people give credit and help out other creators...
@enigmatic_e
A year ago
Thanks. Yea I believe in giving credit where credit is due.
@matbeedotcom
A year ago
Yep, it's in fact smart to do, as it seeds connections to other creators in the search history, which means you're more likely to be surfaced in suggested videos since they have a similar audience. Keep it up, refer other creators, and it absolutely pays off.
I also wonder what happens when you use a Normal Map as the second ControlNet... 🤔 Has someone here tried it already?
@enigmatic_e
A year ago
Definitely try it! I just used those for demonstration, but I would definitely try other ones to see what kind of results I could get.
Well done. How can I disable the ControlNet output files?
@enigmatic_e
A year ago
Settings > ControlNet > check "Do not append detectmap to output".
@m_sha3er
A year ago
Thanks bro🙏🏻
Awesome tutorial! But I'm having this error after installing TemporalKit and restarting SD: "ImportError: cannot import name 'auto' from 'tqdm' (D:\Work\Projects\AI\stable-diffusion-webui\venv\lib\site-packages\tqdm\__init__.py)". Can someone help me, please? I'm not familiar with Python or any coding. I'm using Python 3.10.11, installed in the default location. I could run SD before installing TemporalKit.
@PaulSprangersCityLimits
11 months ago
Same issue
@kartikashri
11 months ago
Did you find a solution? I got the same and now SD is not opening. Edit: found the solution. It's an issue with dependencies; you can fix it manually: 1. open cmd 2. cd YOURPATH\venv\Scripts 3. activate.bat 4. python -m pip install --upgrade tqdm (or whichever packages are corrupted).
@PaulSprangersCityLimits
11 months ago
Whoa, you figured it out! Thank you! I'm gonna try this! Just to be clear, are you running Python 3.10.6 or the newest version? You're running Auto1111 as well, right? @@kartikashri
@PaulSprangersCityLimits
11 months ago
Can you explain why this is happening to us and not others? @@kartikashri
@PaulSprangersCityLimits
11 months ago
Unfortunately that doesn't work. I did the tqdm upgrade and just got a message saying "Requirement already satisfied: tqdm in c:\MYPATH\venv\lib\site-packages" @@kartikashri
I get the error "list index out of range" every time.
Does anyone have an idea why "recombine ebsynth" does not work for me? Everything else worked.
@enigmatic_e
A year ago
Have you tried seeing if there's an update? Sometimes the developer makes changes.
@salemation1099
A year ago
I'm stuck in the same phase; recombine gives me an error and doesn't say why. I could recombine manually, but that's going to take a bit longer... I hope there's a solution.