How to fix Automatic1111 DirectML on AMD 12/2023! Fix broken stable diffusion setup for ONNX/Olive

Science & Technology

Update March 2024 -- better way to do this
• March 2024 - Stable Di...
Currently, if you try to install Automatic1111 using the DirectML fork for AMD GPUs, you will get several errors. This shows how to work around the broken pieces so you can use Automatic1111 again.
Install Git for Windows:
gitforwindows.org/
Install Python 3.10.6 for Windows:
www.python.org/downloads/rele...
Be sure to check "Add Python to PATH" during the install!
Clone the Automatic1111 DirectML fork:
copy the URL for the .git repo
github.com/lshqqytiger/stable...
Run Automatic1111 once to create the virtual environment:
run the webui-user.bat file -- it will exit with an error
Fix the errors (from a command prompt in the automatic1111 folder):
venv\Scripts\activate
pip install -r requirements.txt
pip install httpx==0.24.1
Edit the webui-user.bat file inside the automatic1111 folder, add these command-line arguments, and save:
--use-directml --onnx
Inside the automatic1111 folder, find the modules\sd_models.py file and edit it:
comment out lines 632 - 635 by putting a # in front of each line and save the file
Close out Automatic1111.
Now you can run Automatic1111 by double-clicking the webui-user.bat file in Windows, or make a shortcut to it if you prefer.
Automatic1111 should now work the way it used to and should allow optimizing ONNX models.
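Condensed, the steps above look roughly like this in a Windows command prompt. This is a sketch: the full clone URL is assumed from the truncated link above (lshqqytiger's DirectML fork), and the sd_models.py line numbers may have shifted in newer checkouts.

```shell
:: Sketch of the install/fix steps above (cmd.exe).
:: Assumes Git and Python 3.10.6 are installed and on PATH.
git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml.git
cd stable-diffusion-webui-directml

:: First launch creates the virtual environment, then exits with an error -- expected.
webui-user.bat

:: Activate the venv and install the missing / pinned dependencies.
venv\Scripts\activate
pip install -r requirements.txt
pip install httpx==0.24.1

:: Then edit webui-user.bat so the arguments line reads:
::   set COMMANDLINE_ARGS=--use-directml --onnx
:: and, if optimization still fails, comment out lines 632-635 of modules\sd_models.py.
```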

Comments: 689

  • @scronk3627 · 5 months ago

    Thanks for this! I ended up not having to comment out the lines in the last step, the optimization worked without it

  • @FE-Engineer · 5 months ago

    You are very welcome! And that is awesome. I’m seeing mixed comments about it. Some people still run into it. Others seem to not run into it. Probably differences of what code people have pulled. But I’m glad it worked for you and you didn’t have to put in that hacky fix. Thank you for watching!

  • @ewokfrenzy4406 · 6 months ago

    Thanks for the tutorial, it's the best one I've seen so far and everything works great

  • @FE-Engineer · 6 months ago

You are welcome. The code changed a few days ago and most people's setups broke. And depending on what you had, it could be fixed several ways. But this seemed the most bulletproof way to make a video saying "do this and it should work."

  • @EscaExcel · 5 months ago

    Thanks, this was really helpful it was hard to find a tutorial that actually gets rid of the torch problem.

  • @FE-Engineer · 5 months ago

    Glad this helped and worked! I agree. It’s difficult to find good information and things that actually work.

  • @chris99171 · 6 months ago

    Thank you @FE-Engineer for taking the time to make this tutorial. It helped!

  • @FE-Engineer · 6 months ago

    Glad that it helped! Thank you for watching and supporting my work. It means the world to me!

  • @dangerousdavid8535 · 6 months ago

You're a life saver, I couldn't get the ONNX optimization to work but now it's all good, thanks!

  • @FE-Engineer · 6 months ago

Yea. I suddenly started getting a lot of comments about things being broken. So as soon as I could really dig in and figure out how to get people up and running, I tried to put out something that gives people at least a shot at a working setup for now.

  • @PhilsHarmony · 5 months ago

Thanks so much for this video, much appreciated! Finally a tutorial that actually got me past the "Torch is not able to use GPU" error. For programmers that might all be easy and self-explanatory, but for everyone else it's a real hassle to stand in front of these errors that tell us nothing if we don't speak code. What I cannot wrap my mind around is why a multi-billion dollar company like AMD doesn't attach a fix like this at the bottom of their Stable Diffusion tutorial. They must be aware there are issues for many users during install. Anyways, we luckily have helpers like FE-Engineer.

  • @FE-Engineer · 5 months ago

    You are very welcome! Thank you for the kind words and support on KZread! I am hoping to be able to one day have a working relationship with AMD to be able to help folks even better with AI things as software and changes occur in the fast moving world of AI. Maybe one day? :)

  • @MrRyusuzaku · 5 months ago

Tbh, even programmers might not get it in one go, especially if Python is not their thing. I'm one, and though I had a tiny clue, this video helps a lot

  • @kampkrieger · 4 months ago

@@MrRyusuzaku even if Python is their thing, you don't just know how this is supposed to work. I get the error that it cannot find venv/lib/site-packages/pip-22.2.1-dist-info/metadata; I have no site-packages folder and I don't know what it is or where it comes from

  • @patdrige · 5 months ago

you Sir are the MVP. You not only showed how to install but also how to troubleshoot errors step by step. Thanks

  • @FE-Engineer · 5 months ago

    You are welcome! I’m glad it helped. Thank you for watching!

  • @patdrige · 5 months ago

    @@FE-Engineer do you have a guide or plan to have a guide for text2text AI for AMD ?

  • @user-ni7gv2ty2o · 6 months ago

    Thank you! After 2 days of struggling the problem is gone!

  • @FE-Engineer · 6 months ago

    I’m glad it helped! Thank you for watching!

  • @lenoirx · 5 months ago

    Thanks! After 3 days of trying workarounds, this guide finally worked out!

  • @FE-Engineer · 5 months ago

Yeah, the changes they made were really kind of irritating, and while they are documented, a lot of people didn't really see how to fix it easily.

  • @joncrepeau3510 · 6 months ago

This is the only way with Windows and an AMD GPU. Other tutorials get Stable Diffusion running, but only on the CPU. I was seriously about to give up hope until I watched this. Thank you

  • @FE-Engineer · 6 months ago

    Glad it worked for you and you were able to get up and running! Thanks for watching!

  • @xCROWNxB00GEY · 6 months ago

you are honestly my hero. I am still getting a lot of weird errors but everything is working.

  • @FE-Engineer · 6 months ago

Yea. I mean. Fair warning. This literally disables some logic for the lowvram flag. Like for real. Stuff could break. But maybe some things potentially breaking seems better than "well it straight up won't work" 😂

  • @xCROWNxB00GEY · 6 months ago

I do prefer it running with constant warnings instead of errors that prevent me from running it at all. Do you still use it this way, or are you using an alternative? I just started with AI image generation and could use any input. But because I have a 7900 XTX I feel like there are no options. @@FE-Engineer

  • @le_crispy · 5 months ago

I never comment on videos, but you fixed my issue of Stable Diffusion not using my GPU. I love you.

  • @FE-Engineer · 5 months ago

    I’m glad it helped and fixed your problems! Thank you so much for watching!

  • @ml-qq5ek · 4 months ago

Just found out about Olive/ONNX. Thanks for the easy-to-follow guide; unfortunately it doesn't work anymore. Looking forward to the updated guide.

  • @Thomas_Leo · 5 months ago

    Thank you so much! This was the only video that helped me. Liked and subscribed. 👌

  • @FE-Engineer · 5 months ago

    I’m glad this helped! Thank you so much for your support!

  • @orestogams · 6 months ago

    Thank you so much, could not get this maze to work otherwise!

  • @FE-Engineer · 6 months ago

    You are welcome! Glad it helped! Thanks for watching and supporting my work!

  • @yannbarral7242 · 6 months ago

    Super helpful, thanks a lot!! The --use-directml in COMMAND ARGS was what I was missing for so long. You helped a lot here. If it can help others with random errors during installation and 'Exit with code 1' , what worked for me was turning off the antivirus for an hour.

  • @FE-Engineer · 5 months ago

    Interesting about the antivirus. Which antivirus do you use? Glad this helped. Most folks could probably just swap their command line arguments to -use-directML and it would probably work. Unfortunately when I make a video in order to avoid a mountain of “doesn’t work” comments I try to balance between what will fix it for most folks and I try hard to include information that should fix it entirely for 99.99% of folks. And of course. People have different code from different points in time, different systems, different python versions etc. so I try hard to make sure that if nothing else. If you blow away and start over. This should work and fix your problems. Hence why even when a video could be like 1 minute with 1 small change. It can easily become 10+ minutes with the handful of “and if you happen to see this…” pieces. :-/ it is a difficult balancing act.

  • @FE-Engineer · 5 months ago

    Thank you for the kind words, I am glad this helped you. Thank you for watching!

  • @MasterCog999 · 6 months ago

    This guide worked great, thank you!

  • @FE-Engineer · 6 months ago

    You are welcome! Thank you for watching!

  • @rikaa7056 · 5 months ago

thank you man, all the other tutorials on youtube were useless. CPU was at 99%; now you fixed it and my RX 6600 XT GPU is doing the heavy lifting

  • @FE-Engineer · 5 months ago

    Nice! Glad it helped! Thank you for watching!

  • @tomaslindholm9780 · 6 months ago

You were quick in some parts, but for the "entire" server restart (terminate batch job Y/N) just hit Ctrl+C. Thank you so much for this fix-the-guide guide. Hero!

  • @FE-Engineer · 6 months ago

    😂😂 I was not going to make a video. But I decided to start from scratch and figure out all the trouble spots and I was like…mmmm…I’ll get too many comments about people having weird troubles and it’s hard to explain some of it over text. And yea. I try not to go too fast but I also try to avoid pointlessly lingering. I tend to record and get a bit too in depth and off topic and in editing I usually cut most of that out. Just the way I naturally talk versus the cleanest way to really do a how to. It’s a process. Plus I really am trying to get it down to more of a reflex and more natural for me to be able to do these without going too far off and also not going too fast. :-/

  • @tomaslindholm9780 · 6 months ago

Well, as a former system engineer I understand you must have a great deal of confidence to do what you did, considering the promising title of your video. Brave and good! Thank you for sharing your skill with the rest of us kamikaze engineers. (BTW, it's inside a VM, so "make it or break it" seems like a good approach) @@FE-Engineer

  • @user-kj5ux9ms6q · 5 months ago

    Thank you so much. You have helped so many people with this video!

  • @FE-Engineer · 5 months ago

    I’m glad it helped you!! Thanks so much for watching!

  • @zengrath · 5 months ago

Dude, you have no idea how long I've been trying to get Automatic1111 on Windows with my 7900 XTX, and the conclusion everywhere I go has always been "use Linux". But I saw AMD's post about how it works on Windows with Olive, yet it wouldn't work for me and I tried for hours. Your video finally got it working for me. The key part for me was not using the skip-CUDA command; nothing anywhere I've seen had shown me how to properly fix this until your video. Funnily enough I didn't have some of the errors you did after that, but maybe they updated some things since this video or I had already installed some of those things, not sure. Thank you so much. I've been using Shark and it's such a pain to use: every model change, every resolution change, requires recompiling, every LoRA and so on. It's a nightmare and it doesn't appear to have as many options as Automatic1111. I hear that we still can't do LoRA training and all, but hopefully that comes later.

  • @FE-Engineer · 5 months ago

Yea. Honestly. I love that Shark kinda just works. But I cannot stand using it. It takes forever. If you want to just load a model, keep one image size, and generate image after image, it's OK. But if you wanna jump around, change models, change image sizes, then Shark is crazy slow. You are very welcome! I'm glad you got it working, thank you so much for watching!

  • @zengrath · 5 months ago

    @@FE-Engineer I actually switched to comfyUI also thanks to your other video and while it may be a little slower, it's still good enough for 7900xtx and inpainting, img to img, lora's, and all that works which didn't on the automatic one. So much better for me then automatic on windows so far. but hoping it improves even more, i noticed some plugins not working when following a tutorial but at least basics work.

  • @DarkwaveAudio · 5 months ago

    Thanks man you helped a lot. much appreciated for your time and effort.

  • @FE-Engineer · 5 months ago

    You are welcome! Thanks so much for watching!

  • @jordan.ellis.hunter · 5 months ago

    This helped a lot to get it running. Thanks!

  • @FE-Engineer · 5 months ago

    You are very welcome! Thank you so much for watching. Glad it helped!

  • @pack9694 · 5 months ago

    thank you for helping me fix the olive issue you are amazing

  • @FE-Engineer · 5 months ago

    I’m glad this helped! Thank you so much for watching!

  • @metaphysgaming7406 · 5 months ago

    Thanks so much for this video, much appreciated!

  • @FE-Engineer · 5 months ago

    You are welcome I hope it helped! Thanks for watching!

  • @miosznowak8738 · 5 months ago

That's the only solution I found which actually works, thanks :))

  • @FE-Engineer · 5 months ago

    I’m glad it helped and got it running :). Thanks so much for watching!

  • @NA-oe5jj · 5 months ago

    you solved the exact problems i had. thanks for the true best tutorial.

  • @FE-Engineer · 5 months ago

    You are welcome, I am glad it helped! Thanks for watching

  • @NA-oe5jj · 5 months ago

@@FE-Engineer woke up today to it no longer working. Why are computers like this? :D When I attempt to use webui-user.bat it says "installing requirements", then "*** could not load settings", then it tries to launch anyway and starts to complain about xformers and CUDA. I think this settings load is the issue. I'ma fiddle at lunch, and then after work tonight I will do a complete reinstall again using your handy guide.

  • @nourel-deenel-gebaly3722 · 3 months ago

    Thanks a lot for the tutorial, it worked but without the onnx stuff unfortunately, patiently waiting for your new video on this matter.

  • @FE-Engineer · 3 months ago

    It’s so much better too!

  • @FE-Engineer · 3 months ago

    Sorry about the wait though. Sick daughter. Sick son. Surgery for son. Hospitalization for son. It’s…busy. Plus work and life and all that. Still I do apologize whole heartedly for the wait.

  • @nourel-deenel-gebaly3722 · 3 months ago

    @@FE-Engineer no need to apologize you're literally amazing, hope all goes well for you, although i'll still be using this old and slow method since the new video is for higher cards and I have more of a potato than a gpu 😅, but hopefully I upgrade soon and benefit from this ❤️

  • @Daxter250 · 6 months ago

that was... the best AND ONLY tutorial I found that worked. My 5700 XT had no problems with Stable Diffusion half a year ago, and then suddenly, poof, some BS about tensor cores which I don't even have. All those wannabes on the internet simply said to delete venv and it will sort itself out. NO IT DOESN'T. This tutorial here does! Thanks for the work you put in! BTW, with those ONNX and Olive models I even went from seconds per iteration to 2 iterations per second O.o, while also increasing the image size!

  • @DGCEO_ · 5 months ago

    I also have a 5700xt, just curious what it/s you are getting?

  • @Daxter250 · 5 months ago

    @@DGCEO_ 2 it/s as written in the last sentence. image is 512x512.

  • @FE-Engineer · 5 months ago

    I’m glad this helped! Thank you so much for the kind words! :) and thank you for watching!

  • @evilivy4044 · 6 months ago

    Great tutorial, thank you. How do you go about using "regular" models with the --onnx argument? Do I need to convert them, or should I look for and use only ONNX models?

  • @FE-Engineer · 6 months ago

    Have to convert them basically. Occasionally you can find some models in ONNX format but it is not really super common…

  • @amGerard0 · 5 months ago

    This is great! Thanks for the excellent video, I went from ~4s/it to ~2it/s on a 5700XT! so *much* faster!

  • @FE-Engineer · 5 months ago

    Yay! I’m glad it helped! Thanks so much for watching!

  • @sanchitwadehra · 5 months ago

My 6600 XT went from 1.75 it/s to 2 it/s. Did you do something else? Could you please give me some recommendations on how you increased it so much?

  • @amGerard0 · 5 months ago

@@sanchitwadehra Make sure you have no other versions of Python, only 3.10.6. When I had other versions it just didn't work; maybe if you have another version it's slowing it down? Other than that I'm not sure. I only use: set COMMANDLINE_ARGS=--use-directml --onnx
If you're using medvram or something, remove it and try again. Depending on the model it can be slower: if you're using a really big model that can affect it, and certain sampling methods are faster than others too. Likewise, if you are trying to generate images bigger than 512x512 (i.e. 768x512) then it will struggle. Try another model and see if it's just that, then try every sampling method available (about 5 worked for me, the others were a total artifact-ridden mess).

  • @sanchitwadehra · 5 months ago

@@amGerard0 Maybe it's the Python version problem, as my PC has the latest Python version and I installed A1111 in a conda environment with Python 3.10.6. I also have ComfyUI on my PC in a different conda environment with Python 3.10.12. Maybe I will try doing the whole process again after deleting everything from my PC. Thx for sharing

  • @Ranfiel04 · 4 months ago

If you're having problems with the ONNX tab missing, use this command in the stable diffusion folder: git checkout d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25 -- that reverts the new update that has the problem with ONNX

  • @tmsenioropomidoro7243 · 4 months ago

This actually helped. You have to activate your created virtual environment (mine is automatic1111_olive), then cd to the folder path (mine is F:\stable... etc), then use git checkout d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25 F:\stable...(the rest of the folder's name). Then you have to do everything shown in the video again (it will be much faster because most of the stuff is downloaded already, but requirements and webui-user.bat need to be edited again)
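Pieced together, the rollback these two comments describe would look roughly like this from a command prompt. This is a sketch: the folder name is an example, and the commit hash is the one quoted in the comment above.

```shell
:: Sketch of the rollback described above (cmd.exe).
:: Folder name is an example -- cd to wherever you cloned the repo.
cd stable-diffusion-webui-directml

:: Activate the virtual environment created on first run.
venv\Scripts\activate

:: Check out the pre-update commit quoted above.
git checkout d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25

:: Then redo the fixes from the video (requirements, then webui-user.bat edits).
pip install -r requirements.txt
webui-user.bat
```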

  • @nielsjanssen2422 · 4 months ago

You two fine gentlemen have gained my respect. THANK YOU bro, I struggled for hours

  • @user-uz5cg9bu4r · 4 months ago

@@tmsenioropomidoro7243 well, I thought it worked -- the onnx and olive tabs are back, but now I'm getting the error "onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running MatMul node. Name:'MatMul_460' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(2476)\onnxruntime_pybind11_state.pyd!00007FFE8EC9B33F: (caller: 00007FFE8EC9CAA1) Exception(6) tid(1a7c) 80070057 The parameter is incorrect." when I try to generate txt2img.

  • @tmsenioropomidoro7243 · 4 months ago

    Well I got similar issue, it's not generating yet - shows some errors. Trying to figure out what is wrong @@user-uz5cg9bu4r

  • @Azure1Zero4 · 5 months ago

    Thanks a lot. Something to note is if you don't want onnx mode enabled just exclude it from the arguments.

  • @FE-Engineer · 5 months ago

This is true. Removing ONNX allows the other samplers to be used. But for AMD users, the performance hit is a big one.

  • @Azure1Zero4 · 5 months ago

That's true. When I try running ONNX-converted models it won't let me adjust the size of the image for some reason, and they don't seem to produce results nearly as good as non-converted. @@FE-Engineer

  • @Azure1Zero4 · 5 months ago

I think I might have figured out my issue. I think I'm maxing out my RAM and it's crashing the CMD prompt mid-optimizing. Do you think you could do me a favor and tell me about how much system RAM you use when going through the optimization process? Going to upgrade and need to know how much. @@FE-Engineer

  • @Azure1Zero4 · 5 months ago

In case anyone needs to know, I required 32GB of RAM to optimize models. So if you don't have that much, you're going to need to upgrade or download an already-optimized model. Something I had to learn the hard way. Hope this helps someone.

  • @dr.bernhardlohn9104 · 4 months ago

    So cool, many, many thanks!

  • @FE-Engineer · 4 months ago

    Glad it helped! Thank you for watching!

  • @LeitordoRedditOficial · 4 months ago

If you get the error "RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check", then add "--use-directml --reinstall-torch" to COMMANDLINE_ARGS in the webui-user.bat file via Notepad. This way SD will run off your GPU instead of your CPU. After one run, remove --reinstall-torch. Remember, it goes in without the quote marks. Please share in more videos to help more people.
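As a sketch, the webui-user.bat change this comment describes might look like the following; the surrounding set lines match the stock webui-user.bat template, and --reinstall-torch is only needed for one launch.

```shell
:: webui-user.bat -- sketch of the one-time fix described in the comment above.
set PYTHON=
set GIT=
set VENV_DIR=

:: One-time: run on DirectML and force torch to be reinstalled.
set COMMANDLINE_ARGS=--use-directml --reinstall-torch
:: After one successful launch, change the line above back to:
::   set COMMANDLINE_ARGS=--use-directml

call webui.bat
```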

  • @TPkarov · 2 months ago

Thank you, friend -- you are a friend!

  • @LeitordoRedditOficial · 2 months ago

@@TPkarov You're welcome, friend. To be honest with you, it's really best to generate 512x512 images. I have an RX 6800 XT and many times when I try something bigger, it errors out at 99% and I've waited all that time for nothing hahaha. But with AMD's 7000 series it might work with larger images.

  • @lucianoanaquin4527 · 6 months ago

Thanks for the amazing tutorial, bro! I only have one question: watching other videos I noticed that they have more sampler options. What do I have to do to have them too?

  • @FE-Engineer · 6 months ago

The other samplers don't work in this version with ONNX and DirectML. So the options are: run ROCm on Linux, or wait for ROCm on Windows, when we can just use the normal Automatic1111 without needing DirectML and ONNX.

  • @Meatbix75 · 5 months ago

thanks for the tutorial. It certainly got SD working for me, which is excellent. However, the Olive optimisation doesn't seem to have any effect. I could run the optimisation even without modifying sd_models, but it made no difference to performance; I'm getting around 3.3 it/s with either the standard or optimised checkpoint. I've gone ahead and modified sd_models, but to no effect. GPU is an RX 6700 10GB, CPU is an i5 12400F, 32GB RAM.

  • @FE-Engineer · 5 months ago

    Hard to say. I’ve found a lot of issues with the optimization. It’s tricky to even get it to work a lot of the time. But if you aren’t seeing any performance increase with it running then my guess is that the model is optimized. If you grab other models you might end up seeing the performance boost. It just probably is that the one you have is already optimized. You are welcome, thank you so much for watching. Sorry I don’t have a better answer to this.

  • @RobertJene · 4 months ago

10:20 use Ctrl+G to jump to a specific line in Notepad

  • @ktoyaaaaaa · 3 months ago

    Thank you! it worked

  • @FE-Engineer · 3 months ago

    :):) glad you got it working! Thank you for watching!

  • @franknmt4435 · 4 months ago

hi, I did just as the video shows and I got this problem: "launch.py: error: unrecognized arguments: --onnx". Anyone else get this and fix it?

  • @CANDLEFIELDS · 4 months ago

Been reading all the comments for the past half hour... somewhere above, FE-Engineer says that it is not needed and you should delete it. I quote: "Remove --onnx. They changed code. It is no longer necessary."

  • @nangelov · 4 months ago

    @@CANDLEFIELDS if I remove --onnx, I no longer have the onnx and olive tabs and can't optimize the models

  • @ca4999 · 4 months ago

@@nangelov Same problem, sadly.

  • @nangelov · 3 months ago

@@ca4999 I surrendered and decided to buy a used 3090. There are plenty available in Europe for about 600 euros and it is like 30 times faster, if not more.

  • @ca4999 · 3 months ago

@@nangelov The sad thing is, I somehow got it to work after 5 hours of effort, just to realize that the hires fix doesn't currently work with ONNX. Should've gone the Linux route from the beginning. That's a very solid price for a 3090, congrats ^^ Just out of curiosity, since I'm also located in Europe, where exactly did you buy it?

  • @nienienie7567 · 2 months ago

Hey man! Great tutorial! Got any ideas for VRAM usage optimization on AMD? I'm using a modified BAT like below:
set PYTHON=
set GIT=
set VENV_DIR=
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128
set COMMANDLINE_ARGS=--use-directml --medvram --always-batch-cond-uncond --precision full --no-half --opt-split-attention --opt-sub-quad-attention --sub-quad-q-chunk-size 512 --sub-quad-kv-chunk-size 512 --sub-quad-chunk-threshold 80 --disable-nan-check --use-cpu interrogate gfpgan codeformer --upcast-sampling --autolaunch --api
set SAFETENSORS_FAST_GPU=1
It helps a lot but I still wanna squeeze out more. I'm using an RX 7600 with 8GB VRAM, 32GB RAM

  • @mjtech1937 · 5 months ago

This is a great tutorial. The it/s speeds I'm getting with my AMD 7900 XTX are sick, faster than Midjourney. The only question I have is: has anyone got inpainting working? Otherwise this is an amazing solution for AMD users.

  • @FE-Engineer · 5 months ago

    It works without issues if you use ROCm on linux, speed overall for myself takes maybe 10% hit or so. Unfortunately this is using DirectML and ONNX with a lot of optimizations in place. Those same technologies though are somewhat less developed as far as extensions and things just working. So basically, until ROCm is on windows, you kind of have to pick your poison. Dual boot system and running linux, or the variations of different ways to do it on windows of which all have some serious drawbacks.

  • @davados1 · 4 months ago

Thank you for the tutorial. So I got the webui to load up, but I don't have the ONNX and Olive tabs at the top; they're just not there, oddly. Would you know why? Has the webui changed and removed them?

  • @nickraeyzej578 · 4 months ago

This worked great in 12/2023. The latest automatic conversion changes simply do not work and end up corrupted at random. Even when it does work, it makes automatic conversions for every single switch you make to the image resolution. Is there a way to git clone the project version from when this method was perfectly fine, back when we had the ONNX/Olive conversion tab and one conversion per safetensor covered all resolutions on its own?

  • @Doomedjustice · 5 months ago

    Hello! Thank you very much for the tutorial, it really helped. I wanted to ask is there any way to use generic sampling methods that are usual for Automatic1111?

  • @FE-Engineer · 5 months ago

    You have to drop ONNX. But you will take a big performance hit. Or use ROCm on Linux.

  • @mgwach · 6 months ago

    Thanks!! Got everything up and running. Question though.... do you know if LoRAs are supposed to work with Olive yet?

  • @FE-Engineer · 6 months ago

    No idea. My guess would be no. And to be clear. I am 99% sure ONNX does not care but automatic1111 with directML is probably not setup to support it most likely.

  • @mgwach · 6 months ago

    @@FE-Engineer Gotcha. Okay, thanks for the response. :) Yeah it seems that whenever I select a LoRA it's not recognizing it at all and none of the prompts make any difference for it.

  • @aadilpatel6591 · 5 months ago

    Great guide. Thanks. What are the chances that we will be able to use reactor (face swap) or animatediff with this repo?

  • @FE-Engineer · 5 months ago

    You are welcome! Thank you for watching! My guess is not very good…most of the extensions don’t play well with ONNX and directml. Plus my guess is that no one is really working on trying to get them to work with ONNX and directml really. :-/ You can always try. I just have had very little luck with very many extensions that like “do things”.

  • @aadilpatel6591 · 5 months ago

    @@FE-Engineer will they be usable once ROCm is ready for windows?

  • @Grendel430 · 5 months ago

    Thank you!

  • @FE-Engineer · 5 months ago

    No problem! Thanks for watching!

  • @michaelbuzbee5123 · 5 months ago

I was having trouble with my A1111 being slow, so searching around I found your fix video and decided to do a clean install. I already downloaded a bunch of models though; how does one run them through ONNX? And I am assuming I can no longer just add the models to the stable diffusion folders anymore? I think my PC specs are the same as yours.

  • @FE-Engineer · 5 months ago

    So you need to optimize them for Olive and ONNX. I have a pretty short video about this. You should be able to just optimize them from your normal models folder. Once optimized they will be in onnx or olive-cache I think are the folder names. But yes you can use them. Just not SDXL models. I have yet to get SDXL to work correctly with directML and ONNX. :-/

  • @arcadiandecay1654 · 5 months ago

This has been a lifesaver, thanks! One thing I did notice after I got this working (perfectly, actually) is that some sampling methods are missing, like DPM++ SDE Karras. Do you know if that's something that could be manually installed? I tried doing a git clone of the k-diffusion repo and doing a git pull, but that didn't get them to show up.

  • @FE-Engineer · 5 months ago

    Yea. They don’t work with ONNX. :-/

  • @arcadiandecay1654 · 5 months ago

    Oof lol. Thanks! Well, I'm going to count my blessings, since I was floundering before finding this tutorial. I have Linux on a couple other disks and one of them is Ubuntu, so I'm going to install it on that, too.

  • @adognamedcat13
    @adognamedcat134 ай бұрын

    I was wondering if you could help me with an interesting issue. After following the steps, it kept telling me that --onnx was an unknown argument. I heard somewhere that with the newest update onnx didn't need to be included as an argument, so I deleted it from the webui-user.bat args line. To my surprise the webui booted as normal, though there was no sign of Olive or, predictably, ONNX. Now I'm getting around 1.5 it/s and I have the same exact card as you. On the plus side I have DPM++ 2M Karras now, and it does *technically* work, but the speeds are ridiculously slow. Thanks for any/all help and thanks a million for making this series, you're the man! Update: to clarify, the error I get if I try to launch it the way you described is ' launch.py: error: unrecognized arguments: - '

  • @Vasolix

    @Vasolix

    4 ай бұрын

    I have same error how to fix that ?

  • @FE-Engineer

    @FE-Engineer

    4 ай бұрын

    Remove --onnx. They changed the code; it is no longer necessary.

  • @williammendes119

    @williammendes119

    4 ай бұрын

    @@FE-Engineer but when sd start we dont have Olive tab

  • @whothefislate

    @whothefislate

    4 ай бұрын

    @@FE-Engineer but how do you get the onnx and olive tabs then?

  • @tomlinson4134

    @tomlinson4134

    4 ай бұрын

    @@FE-Engineer I have the exact same issue. Do you know a fix?

  • @GabiVegas-dj
    @GabiVegas-dj3 ай бұрын

    Thanks man

  • @sanchitwadehra
    @sanchitwadehra5 ай бұрын

    wow thanks dhanyavad

  • @FE-Engineer

    @FE-Engineer

    5 ай бұрын

    You are very welcome! Thanks so much for watching!

  • @user-cw8pm3ox1q
    @user-cw8pm3ox1q5 ай бұрын

    Thanks so much for the Video! I wonder why do I need Internet connection when converting "normal" models(with safetensors file extension name). Due to my poor Network, python always raises "ReadTimeout" error whenever I click the "Convert & Optimize checkpoint using Olive" button. Do I need to download something else to convert a model? I think I only need my own GPU to compute.

  • @FE-Engineer

    @FE-Engineer

    5 ай бұрын

    That is interesting. I did not know it needed to get anything from the internet. I am not sure to be honest. Are you running it on like an old spinner hard drive? Is it possible that the read timeout is from your disk drive?
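    For what it's worth, the Olive conversion does pull the original weights from Hugging Face on first run, so a flaky connection can surface as a ReadTimeout. A hedged sketch, assuming the download layer is huggingface_hub (which honors the HF_HUB_DOWNLOAD_TIMEOUT environment variable in recent versions; the default is fairly short):

```python
import os

# Assumption: the downloader is huggingface_hub, which reads
# HF_HUB_DOWNLOAD_TIMEOUT (in seconds) from the environment.
# Set it before launching webui-user.bat from the same shell.
os.environ["HF_HUB_DOWNLOAD_TIMEOUT"] = "120"

print("download timeout set to", os.environ["HF_HUB_DOWNLOAD_TIMEOUT"], "seconds")
```

    On Windows you could equally run `set HF_HUB_DOWNLOAD_TIMEOUT=120` in the same cmd window before starting webui-user.bat; this only helps if the timeout really is network-side rather than disk-side.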

  • @chrisc4299
    @chrisc42996 ай бұрын

    Hello thank you very much for the video I have a question how I could use vae with the optimized models you have to transform them I appreciate your help since placing the vae in the regular folder does not apply to the generation

  • @FE-Engineer

    @FE-Engineer

    6 ай бұрын

    You will need to run ROCm in Linux to get full functionality like that.

  • @amrkhaled5806
    @amrkhaled58065 ай бұрын

    Great Video. Finally, it works after three days of watching tutorials and searching the internet. I have a small issue though. When generating images it uses my iGPU instead of my AMD GPU, I've tried adding this argument --device-id 1 to the webui-user file, now it uses my AMD GPU however I've noticed in the task manager that it spikes to 100% for a second then it returns back to 0% then back to 100% and so on after that the AMD software pops up with a report an issue button and the image comes out grey. What causes this problem and how do I fix it? P.S. I have an AMD Radeon 530 GPU

  • @FE-Engineer

    @FE-Engineer

    5 ай бұрын

    Might try some of the settings like medvram. Is it just the GPU that is spiking hard? It sounds like it is actually overloading the GPU and then the GPU is basically crashing. I have not encountered this personally. So it is hard for me to say for sure. But try some of the other vram settings and also potentially ram setting to see if that helps.

  • @tmiss17
    @tmiss176 ай бұрын

    Thanks!!

  • @FE-Engineer

    @FE-Engineer

    6 ай бұрын

    You are very welcome! Thanks for watching!

  • @magnusandersen8898
    @magnusandersen88984 ай бұрын

    I've followed all your steps up untill the 8:00 minute mark, where I after running the webui-user.bat file, get an error saying "launch.py: error: unrecognized arguments: --onnx". Any ideas how to fix this?

  • @FE-Engineer

    @FE-Engineer

    4 ай бұрын

    Remove --onnx

  • @Krautrocker
    @Krautrocker6 ай бұрын

    Soooo, i initially installed automatic1111 using your first video on the matter, which was troubleshooting the official guide. Before i tear that down and reinstall the whole jazz, what exactly is different? Does this fix lift the limitations (like high res stuff not working) or is it 'just' about running it more stable?

  • @FE-Engineer

    @FE-Engineer

    6 ай бұрын

    No, there was an update recently -- for many folks it broke. In the video description I tried to be clear saying if your setup works fine, don't bother with any of this. This is just to get things working for folks who got a new github update to the code and everything entirely broke and they were not able to use it at all.

  • @NXMT07
    @NXMT074 ай бұрын

    Thanks for the tutorial, it really did work with my RX 580, albeit very slowly. Can you please make a tutorial on how to use Hugging Face diffusers with automatic1111? I've tried to find the safetensors file and even converted the diffusers into one, but to no avail.

  • @FE-Engineer

    @FE-Engineer

    4 ай бұрын

    Last I knew. Most of the additional pieces of automatic 1111 will not work with ONNX. They might work with only directml. But it has a big performance penalty. Overall for AMD. Your best bet right now is ROCm on Linux. Slightly slower than onnx and olive but all the functionality works correctly. Also nice that you don’t have to fiddle with converting to onnx and the headache that comes with all of that and what does and does not work etc. :-/

  • @NXMT07

    @NXMT07

    4 ай бұрын

    @@FE-Engineer well I heard that ZLUDA is enabling CUDA on AMD GPUs, so ONNX shouldn't be a problem after a period of development on Windows. I have managed to play around with it and can confirm it does indeed work with CUDA-related programs; I haven't got it to work with Automatic1111 though. Still, my trouble with the Hugging Face diffusers remains unsolved. I think it is an entirely new problem

  • @JustinLamb141
    @JustinLamb1415 ай бұрын

    This is the only tutorial that has worked.

  • @JustinLamb141

    @JustinLamb141

    5 ай бұрын

    New issues. I managed to generate an image of a car though.

  • @FE-Engineer

    @FE-Engineer

    5 ай бұрын

    You are seeing new issues?

  • @JustinLamb141

    @JustinLamb141

    5 ай бұрын

    Regarding the only valid links being hugging face.@@FE-Engineer

  • @JustinLamb141

    @JustinLamb141

    5 ай бұрын

    After optimizing I receive this error: InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from C:\AI\stable-diffusion-webui-directml\models\ONNX-Olive\stable-diffusion-v1-5\unet\model.onnx failed:Protobuf parsing failed.@@FE-Engineer

  • @user-db9pl9oh4b
    @user-db9pl9oh4b5 ай бұрын

    Thanks for the video. May I ask what's your GPU and how's the performance? Cheers!

  • @FE-Engineer

    @FE-Engineer

    5 ай бұрын

    7900 XTX :)
    ONNX/Olive: 22 it/s
    ROCm: 18 it/s
    DirectML (non-ONNX): 6 it/s
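    Those iterations-per-second figures translate directly into wall time; a quick back-of-the-envelope check for a default 20-step 512x512 generation:

```python
# Rates quoted above, in iterations (sampler steps) per second.
rates = {"ONNX/Olive": 22.0, "ROCm (Linux)": 18.0, "DirectML without ONNX": 6.0}
steps = 20  # A1111's default sampling step count

for backend, it_per_s in rates.items():
    print(f"{backend}: ~{steps / it_per_s:.1f} s per {steps}-step image")
```

    So the ONNX/Olive path is roughly 3-4x faster than plain DirectML at these rates, which matches the generation-time differences people report in this thread.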

  • @user-db9pl9oh4b

    @user-db9pl9oh4b

    5 ай бұрын

    @@FE-Engineer I'm interested to know if you really need an Nvidia GPU or AMD. Perhaps a good video to make in the future where you compare the two GPU makers? Thanks!

  • @alexisvri1758
    @alexisvri17584 ай бұрын

    For everyone who has the problem where it says "launch.py: error: unrecognized arguments: --onnx press any key": remove the --onnx from the args. If after that it says something like "stderr: ERROR: Could not install packages due to an OSError: [WinError 5] Access denied:", add "--reinstall-torch" to the args, launch webui-user.bat, and after it starts, remove "--reinstall-torch"!

  • @waltherchemnitz
    @waltherchemnitz25 күн бұрын

    What do you do if, when you run venv, you get the message "cannot be loaded because running scripts is disabled on this system"? I'm running the terminal as Administrator, but it won't let me run venv.

  • @DigitalID234
    @DigitalID2346 ай бұрын

    work and thanks

  • @FE-Engineer

    @FE-Engineer

    6 ай бұрын

    You are welcome! Thanks for watching! :)

  • @mojlo4ko998
    @mojlo4ko9986 ай бұрын

    legend

  • @FE-Engineer

    @FE-Engineer

    6 ай бұрын

    😂 thank you! I hope this helped!

  • @mrhobo7103
    @mrhobo71036 ай бұрын

    great tutorial, mine stopped working a few days ago and I couldn't find a fix anywhere. Although for some reason generating an image makes my PC slow to a crawl, and it didn't do that before it broke. The image generation itself is still fast though. 6600 XT

  • @FE-Engineer

    @FE-Engineer

    6 ай бұрын

    Image generation makes your pc slow down? Interesting. Did you previously use any unusual flags?

  • @FE-Engineer

    @FE-Engineer

    6 ай бұрын

    I would not be surprised about this during like model optimization. But image generation it does surprise me a bit…

  • @macnamararj

    @macnamararj

    6 ай бұрын

    @@FE-Engineer same here, it slows down too; this didn't happen with the non-ONNX/Olive version.

  • @gyrich
    @gyrich4 ай бұрын

    Thanks for this. I can actually run SD on my AMD pc but it doesn't seem that it's using the GPU (RX 6600 8gb) at all. I can render individual images in ~30-60 secs. None of the solutions I've found online make it use the GPU. Do you know how I can get it SD to use the GPU so I can generate more/faster?

  • @FE-Engineer

    @FE-Engineer

    4 ай бұрын

    Yes. Stay tuned. I have a new video coming out because the code has changed a decent amount and there is a better way now!

  • @Guillermo-th4dh
    @Guillermo-th4dh6 ай бұрын

    Hello sensei, I tell you that the entire tutorial is 10 out of 10... I just wanted to ask you, I have a problem when I want to optimize SDXL models, even other custom ones from "civitai" and they give me an error. What could it be? thanks

  • @FE-Engineer

    @FE-Engineer

    6 ай бұрын

    I have not been able to get SDXL working on this setup with automatic1111 directml. I tried a few months ago and could not get it working. I have not honestly tried with it recently. I also run ROCm on Linux and that setup just basically works for everything. So I did that to get SDXL and bypass all the complexity of what you can and can not do with directML and ONNX.

  • @nextgodlevel4056
    @nextgodlevel40566 ай бұрын

    great tutorial, but I have a question: when I try to optimize some other stable diffusion model, it optimizes correctly, but the output images are not very clear; it always generates somewhat foggy images. Also I can't generate images with a size greater than 512x512. The workaround I use is to upscale the 512x512 images within stable diffusion, and that gives very good output as well. My GPU: 6750 XT

  • @FE-Engineer

    @FE-Engineer

    6 ай бұрын

    If the image looks foggy like that. It likely means you need to run a vae with the model. I don’t remember offhand if I was ever able to get a vae to work properly with auto1111 on windows though. Sorry.

  • @nangelov
    @nangelov4 ай бұрын

    Sorry to bother you. I've done everything so far, except that when I start the webui, the interface loads but there are no ONNX or Olive tabs. Everything is slow on the RX 6800 XT (1.3 s/it). If I enable onnx in the settings, I get a "missing positional arguments" error and I can't generate anything. Someone mentioned rolling back to an older UI version, but I don't see how to do that; there are no different versions for this fork.

  • @matthieu3967
    @matthieu39675 ай бұрын

    Thanks for the video but do you know how to add sampling methods ?

  • @FE-Engineer

    @FE-Engineer

    5 ай бұрын

    Don't use ONNX, or switch to ROCm on Linux. You can't use the other samplers with ONNX.

  • @wilcoengelsman8159
    @wilcoengelsman81594 ай бұрын

    Thank you for the guide, it is however already slightly outdated. I did manage to get everything working though using this tutorial. When I use Olive/ONNX instead of just directml my image has a lot more noise, even on the same sampler. Is there something i can do about that? Also, generation larger than 512x512 crashes the onnx implementation.

  • @FE-Engineer

    @FE-Engineer

    4 ай бұрын

    So you don't need to use --onnx anymore in the command argument when launching. When using ONNX it has a lot of peculiarities, and most things other than generating an image do not work properly with ONNX, sadly.

  • @macnamararj
    @macnamararj6 ай бұрын

    Thanks for the tutorial! I saw a decrease in the generation time, but it is still showing around 3.0 it/s on both optimized and non-optimized models. Anything I can do to improve the generation? And how can I add the new sampling methods like DPM++ 2M SDE Karras?

  • @FE-Engineer

    @FE-Engineer

    6 ай бұрын

    Those other samplers don’t work in this with ONNX. I forced them on. It broke. :-/ Hmm that is strange that you don’t see a change in speed on optimized vs unoptimized. Makes me think something is fishy.

  • @macnamararj

    @macnamararj

    6 ай бұрын

    @@FE-Engineer I was breaking my head to make the sampling work, but no success. I've tried a fresh install, and the it/s still around 3its/s, there is a huge difference in speed using --onnx, it used to take 1min to generate a 512x512 image, now it takes around 6s. So I think its a big win! Again thanks for the video.

  • @gkoogz9877
    @gkoogz98775 ай бұрын

    Great video, Any tips to use more than 77 tokens on this method? It's a critical limitation.

  • @FE-Engineer

    @FE-Engineer

    5 ай бұрын

    You can use it without ONNX but performance takes a big hit. Or run it with ROCm on Linux. Or wait for ROCm on windows whenever that will be.

  • @BOIWHATmusic
    @BOIWHATmusic5 ай бұрын

    I'm stuck on installing the requirements line; it's taking a really long time. Is this normal?

  • @FE-Engineer

    @FE-Engineer

    5 ай бұрын

    Depends on internet connection and some other things. But yes. It is not exactly fast.

  • @user-mg7fv9cx8b
    @user-mg7fv9cx8b5 ай бұрын

    Thanks for your video. You are my hero :-) I thought I would never get SD running on my AMD, until I saw your video... I also tried to use another checkpoint, stable-diffusion-inpainting. I was able to download the model; the log says:
    Model saved: C:\..\sd-test\stable-diffusion-webui-directml\models\ONNX-Olive\stable-diffusion-inpainting
    When I try to use that model I get: RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Conv node.
    When I try to optimize the model I get: "...\sd_olive_ui.py", line 358, in optimize assert conversion_footprint and optimizer_footprint AssertionError
    Is it somehow possible to use an inpainting model on AMD? Or what am I doing wrong?

  • @FE-Engineer

    @FE-Engineer

    5 ай бұрын

    So you can definitely do it with ROCm on Linux. In windows I haven’t been able to get inpainting working properly.

  • 2 months ago

    Any method works for me, I have this error: AttributeError: module 'onnxruntime' has no attribute 'SessionOptions'
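    A common cause of that AttributeError is two onnxruntime variants (for example onnxruntime and onnxruntime-directml) clobbering each other's files in the same venv, leaving a half-broken package behind. A small diagnostic sketch that just lists what is installed:

```python
# List every installed distribution whose name contains "onnxruntime".
from importlib import metadata

installed = sorted(
    dist.metadata["Name"]
    for dist in metadata.distributions()
    if "onnxruntime" in (dist.metadata["Name"] or "").lower()
)
print(installed or "no onnxruntime packages found")
```

    If more than one variant shows up, a hedged fix (run inside the activated venv) would be to uninstall all of them with pip and then reinstall only onnxruntime-directml.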

  • @lake3708
    @lake37084 ай бұрын

    An excellent guide, but I have a question: there's a .safetensors checkpoint that has a config attached in the .yaml format. After optimization, the program stops seeing the config and generates noise. Do you have any idea how to fix this problem?

  • @FE-Engineer

    @FE-Engineer

    4 ай бұрын

    Ohh. Not sure on that one. But I have a new video coming out with a much better way of doing this!

  • @livb4139
    @livb41394 ай бұрын

    can you make vid on how to make ollama run on rx 7900xtx

  • @enriques2009
    @enriques20095 ай бұрын

    All I can tell you is... THANK YOU.

  • @FE-Engineer

    @FE-Engineer

    5 ай бұрын

    You are very welcome! Thank you for watching, I am glad it helped! :)

  • @bitkarek
    @bitkarek3 ай бұрын

    I have no ONNX or Olive tab... is it gone (not supported) after some update? I read it's optimizing automatically; do I understand that right? There are some Olive/ONNX things in settings which I am not sure work on AMD and this kind of build.

  • @FE-Engineer

    @FE-Engineer

    3 ай бұрын

    I’m not sure what exactly they are doing now. I doubt it is optimizing automatically. I would recommend looking into using zluda. I have a video on it. Overall I like it much more.

  • @carterstechnology8105
    @carterstechnology81055 ай бұрын

    Also curious how to optimize my iterations per second. Currently running 3.08 it/s (AMD Ryzen 9 7940HS w/ Radeon 780M graphics, 4001 MHz, 8 cores, 16 logical processors, 64 GB RAM)

  • @FE-Engineer

    @FE-Engineer

    5 ай бұрын

    Using your GPU is the first step to optimizing. Arguments like --no-half are required for some people or some models but will usually hurt performance. Remember that even the top-of-the-line AMD 7900 XTX gets about 20 it/s currently, so 3 is not necessarily bad, and depending on the resolution of the images it might be very good

  • @terraqueojj
    @terraqueojj5 ай бұрын

    Good evening, thanks for the Video, but the problems with ControlNet and Image Dimensions continue. Do you know if there is any update for this in the pipeline?

  • @FE-Engineer

    @FE-Engineer

    5 ай бұрын

    I do not. Although I am somewhat unsure how much more support overall this fork of automatic1111 will get ultimately. I think it’s just a bit of a waiting game for rocm on windows

  • @n3mesis633
    @n3mesis6333 ай бұрын

    Question: When my cmd opens after I put the torch direct ml script, it says press any key to continue. However, whenever I do that, it closes itself. Any thoughts?

  • @FE-Engineer

    @FE-Engineer

    3 ай бұрын

    Read the video description. The code has been updated. You might want to use zluda if you are on AMD.

  • @catnapwat
    @catnapwat6 ай бұрын

    Thank you for this! Is there any way to get DPM++ 2M SDE Karras working? It doesn't seem to be available on the directml version of A1111

  • @FE-Engineer

    @FE-Engineer

    6 ай бұрын

    No. I literally forced them on. It breaks. So there are other software things that need to get fixed before those will work in onnx I believe.

  • @catnapwat

    @catnapwat

    6 ай бұрын

    @@FE-Engineer thanks, understood. Do you also know if there's a way to get ComfyUI to run at the same pace as A1111? 6700XT here and I'm seeing 4.5it/sec with A1111 but only 1.1s/it with Comfy

  • @markdenooyer
    @markdenooyer6 ай бұрын

    Has anyone gotten past the 77 token limit ONNX DirectML on the prompt? I really miss my super-long prompts. :(

  • @FE-Engineer

    @FE-Engineer

    6 ай бұрын

    Not with this version on windows yet. :-/

  • @pyrageis9928
    @pyrageis99284 ай бұрын

    I get an error stating "AttributeError: module diffusers.schedulers has no attribute scheduling_lcm. Did you mean: 'scheduling_ddim'?" Edit: I just had to delete the venv folder

  • @obiforcemaster
    @obiforcemaster2 ай бұрын

    This no longer works unfortunately. The --onnx command line argument was removed.

  • @user-uz5cg9bu4r
    @user-uz5cg9bu4r4 ай бұрын

    hey, I don't get the ONNX and Olive tabs shown in my automatic1111. Do I need to manually install them? I see that onnx is running and olive just isn't at all. I followed every step but idk man

  • @FE-Engineer

    @FE-Engineer

    4 ай бұрын

    The ONNX argument is no longer necessary. They changed code. Things are kind of wacky. I need to go and figure out what all has changed.

  • @Maizito
    @Maizito4 ай бұрын

    I finally managed to run SD with your tutorial. I have an RX 7000 series card. It didn't let me run with --onnx; I saw in the comments that they mention that command is no longer necessary, so I removed it from webui-user.bat, and it opens SD, but it goes very slow, between 1.5 and 2.5 it/s. Any solution to make it go fast?

  • @W00PIE

    @W00PIE

    4 ай бұрын

    That's exactly my problem at the moment with a 7900XTX. Really disappointing. Did you find a solution?

  • @Maizito

    @Maizito

    4 ай бұрын

    @@W00PIE No, I haven't found a solution yet :(

  • @CESAR_CWB
    @CESAR_CWB6 ай бұрын

    thx bro

  • @FE-Engineer

    @FE-Engineer

    6 ай бұрын

    Glad this helped! I was getting a lot of comments from people about it being broken so figured I would find an interim step to get it working for people.

  • @Wujek_Foliarz
    @Wujek_Foliarz4 ай бұрын

    stderr: ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'C:\\Users\\igorp\\Desktop\\crap\\stable-diffusion-webui-directml\\venv\\Lib\\site-packages\\onnxruntime\\capi\\onnxruntime_providers_shared.dll' Check the permissions.

  • @alexisvri1758

    @alexisvri1758

    4 ай бұрын

    add "--reinstall-torch" to the args and launch webui-user.bat; after the UI launches, delete the "--reinstall-torch" arg. Hope it helps!

  • @Hozokauh
    @Hozokauh5 ай бұрын

    at 7:00, you got it to skip the torch/cuda test error finally. for me however, it did not resolve the issue. went back and followed the steps twice over and same result. still getting the torch cuda test failure. any ideas?

  • @FE-Engineer

    @FE-Engineer

    5 ай бұрын

    --use-directml in your startup script

  • @FE-Engineer

    @FE-Engineer

    5 ай бұрын

    I did not skip the torch and cuda test. From my experience if you are having problems and skip it. It will never work because that test is designed to simply check if it thinks it can run on the GPU.

  • @Hozokauh

    @Hozokauh

    5 ай бұрын

    @@FE-Engineer thank you for the timely feedback! You are the best. Will try this out!

  • @nanangsoloist6398
    @nanangsoloist63985 ай бұрын

    "You are running torch 1.13.1+cpu. The program is tested to work with torch 2.0.0. To reinstall the desired version, run with commandline flag --reinstall-torch." Can we upgrade to torch 2.0.0, or is it not compatible with AMD GPUs yet?

  • @FE-Engineer

    @FE-Engineer

    5 ай бұрын

    I do remember trying to force torch 2.0. But I don’t believe I was able to get it to work properly. I don’t remember specifically what happened. But I remember trying and I vaguely remember getting stuck or errors and it simply not working.

  • @matrixace_8903
    @matrixace_89036 ай бұрын

    Hello, the webui don't have DPM++ 2M Karras sampler, is there anyway to add that?

  • @FE-Engineer

    @FE-Engineer

    6 ай бұрын

    Nope.

  • @frqncklin
    @frqncklin6 ай бұрын

    Hello thank you for the tutorial ! I did everything like you but i can't load Stable Diffusion. It says "RuntimeError : Couldn't clone Stable Diffusion", and when i try another time, it says "RuntimeError : Couldn't fetch Stable Diffusion" - Error code : 1. Do you have any idea why ? :(

  • @FE-Engineer

    @FE-Engineer

    6 ай бұрын

    ?? Nope this is definitely the first I have heard of that error. Sounds either like permissions issue which would be a bit odd. Or like networking and internet issues.

  • @sturk6528
    @sturk65285 ай бұрын

    Huge thanks from Russia! Excellent instructions, it's all so simple)))

  • @FE-Engineer

    @FE-Engineer

    5 ай бұрын

    Thank you for watching!

  • @SigSoWavy
    @SigSoWavy6 ай бұрын

    So I was trying to install checkpoints on here. Is there a specific way to do it on this build?

  • @FE-Engineer

    @FE-Engineer

    6 ай бұрын

    Convert to onnx exactly like I do in the video…

  • @TheBrainAir
    @TheBrainAirАй бұрын

    I did all the steps and get: AttributeError: module 'torch' has no attribute 'dml'
