Photoshop x Stable Diffusion x Segment Anything: Edit in real time, keep the subject

Science & Technology

What if we could generate composite images in Stable Diffusion while working in Photoshop? And what if we could keep the subject intact? And in real time as well?
In this tutorial, we'll see how Segment Anything can help us blend the original subject of the active Photoshop image into a Stable Diffusion generation, all in real time, while we're editing and comping in Photoshop.
If you like my contributions, you can buy me a coffee here: ko-fi.com/risunobushi
Ever wondered how, as a creative, you can use Stable Diffusion within your traditional workflows in professional environments?
Stable Diffusion for Professional Creatives is a series of videos about how we can apply generative AI in production environments, with real-life examples and guides.
Useful links (by category)
IMPORTANT: some models may have different filenames than the ones shown in the video. That's because I rename them after downloading so I can remember which model does what. Refer to the instructions in this description to set everything up properly.
Workflow:
- json file, import it in ComfyUI and install any missing nodes: pastebin.com/tzPWd8s2
Nodes (if you haven't already installed them by importing the provided workflow and using ComfyUI Manager):
- Photoshop to ComfyUI: github.com/NimaNzrii/comfyui-...
- ComfyUI Manager GitHub: github.com/ltdrdata/ComfyUI-M...
- IPAdapter Plus v2 GitHub: github.com/cubiq/ComfyUI_IPAd...
- Segment Anything / Grounding Dino GitHub: github.com/storyicon/comfyui_...
Models:
- RealVis XL (SDXL Lightning used in this video): civitai.com/models/139562/rea...
- SDXL LoRA (I'm using the 4-step LoRA): huggingface.co/ByteDance/SDXL...
- ControlNet SDXL: huggingface.co/lllyasviel/sd_...
- Segment Anything / Grounding Dino models (same GitHub repo, scroll down the page to find them):
github.com/storyicon/comfyui_...
IPAdapter (in case your v2 download doesn't come with the IPAdapter models preinstalled, or you'd like to install your own):
- IPAdapter Plus XL model. Place it into your "\models\ipadapter" folder and use it in your Load IPAdapter Model node: huggingface.co/h94/IP-Adapter...
- IPAdapter Plus 1.5 model. Place it into your "\models\ipadapter" folder and use it in your Load IPAdapter Model node: huggingface.co/h94/IP-Adapter...
- CLIPVision ViT-H model (works only with IPAdapter *PLUS*, both 1.5 and XL. For IPAdapter *STANDARD*, which isn't used in this video and is somewhat deprecated, you need a ViT-G model instead). Place it into "\models\clip_vision" and use it in your Load CLIPVision Model node: huggingface.co/h94/IP-Adapter...
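For orientation, here is roughly where everything ends up in a standard ComfyUI install (folder names follow ComfyUI's defaults; your filenames will differ, see the renaming note above, and the Segment Anything / Grounding Dino files follow the instructions on their own GitHub page):
- "\models\checkpoints": RealVis XL (SDXL Lightning) checkpoint
- "\models\loras": SDXL LoRA (the 4-step one)
- "\models\controlnet": ControlNet SDXL model
- "\models\ipadapter": IPAdapter Plus XL / 1.5 models
- "\models\clip_vision": CLIPVision ViT-H model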
Troubleshooting:
If your PopUp node is acting up, try the following:
- close comfyui
- go inside your ComfyUI folder, then into custom_nodes\comfyui-popup_preview\window
- open a terminal in that folder
- type "pip install -r requirements.txt" without the ""
- press Enter
- wait for the requirements to be installed
- try launching comfyui again
This worked for me the one time I was able to replicate one of the issues reported in the comments (the same steps are condensed into terminal commands below).
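If you'd rather do it in one go from a terminal, the same steps boil down to (adjust the path to wherever your ComfyUI folder actually lives):
- "cd <your ComfyUI folder>\custom_nodes\comfyui-popup_preview\window"
- "pip install -r requirements.txt"
- relaunch ComfyUI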
Timestamps:
00:00 - Intro
00:54 - Workflow Overview
01:55 - Downloading the SDXL Lightning models
03:06 - Building the workflow (Core)
04:55 - Building the workflow (IPAdapter)
07:21 - Building the workflow (ControlNet)
10:02 - Building the workflow (Segment Anything)
13:09 - Building the workflow (Blending the images)
15:04 - Setting up Photoshop
15:54 - Explaining what's happening in the background
17:37 - Editing in Photoshop x Stable Diffusion
18:56 - Complex Backgrounds
22:19 - Troubleshooting
23:16 - Outro
#stablediffusion #stablediffusiontutorial #ai #generativeai #generativeart #comfyui #comfyuitutorial #risunobushi_ai #moodboards #reference #sdxl #sd #risunobushi #andreabaioni

Comments: 51

  • @paultsoro3104 · 2 months ago

    Great Video! Thank you for developing this workflow. I followed the steps and it works great! Thanks for sharing!

  • @ChloeLollyPops · 2 months ago

    This is amazing teaching thank you!

  • @ppbroAI · 2 months ago

    Great video, ty for the effort you put into this. 👍

  • @Sergiopoo · 2 months ago

    So glad I found this channel, really good info

  • @risunobushi_ai · 2 months ago

    Thank you for the kind words!

  • @AriVerzosa · 2 months ago

    Sub! Enjoyed the detailed explanation starting from scratch. Keep up the good work!

  • @risunobushi_ai · 2 months ago

    Thank you! I try to not leave anyone behind, so explaining everything takes time but it pays off in the end I think.

  • @Kafkanistan1973 · 2 months ago

    Well done video!

  • @houseofcontent3020 · 29 days ago

    Such good video!

  • @Onur.Koeroglu · 2 months ago

    Thank you for this tutorial. Your video title matches the information in it. I like that 😅💪🏻 I have to try that. Photoshop meets ComfyUI sounds great. 🙂👍🏻

  • @henryturner4281 · 2 months ago

    THANK YOU!!!!

  • @JavierCamacho · 2 months ago

    Thanks!!!! I appreciate the effort you put into this video after I asked about this. God bless you!!! I'll try it and place the watch on some AI female models.

  • @risunobushi_ai · 2 months ago

    I don't touch on this in the video, but if you want to keep two subjects you can duplicate the SAM nodes and then blend the two images and masks together, so that you keep both a person and a watch, for example.

  • @fabiotgarcia2 · 2 months ago

    I can't wait for NimaNzrii to update his node so I can see if it works on Mac.

  • @risunobushi_ai · 2 months ago

    They did commit something to a private repo a couple of days ago, and apparently they're working on a new release, but they're not one of the most communication-oriented devs out there. There aren't even proper docs, to be fair. Still, I feel like its simplicity is unparalleled, and it's exactly what's needed in order to work alongside Photoshop in a simple and intuitive way, so here's to hoping they can push some more updates in the future.

  • @fabiotgarcia2 · 2 months ago

    @risunobushi_ai thanks for replying to me

  • @baceto-jp4fz · 2 months ago

    Do you think this workflow and the pop-up node will work with Photopea (the free Photoshop alternative)? Also, is it possible to run this workflow without Photoshop at all? Great video!

  • @risunobushi_ai · 2 months ago

    I'm not well versed in Photopea, but if you want a free alternative (for which you would need to develop a different workflow, or wait for one, since I'd like to make one) you can look at Krita, which has an SD integration.

  • @baceto-jp4fz · 2 months ago

    @risunobushi_ai thanks! A video would be great!

  • @jkomno5809 · 2 months ago

    Hi! What node should replace the Photoshop input if I want the input to be just an image selected from my local drive?

  • @risunobushi_ai · 2 months ago

    A load image node would be what you need

  • @andree839 · 2 months ago

    Hi, thanks for a very helpful video again. I have one problem, though, that appears in the workflow. I'm using the SD1.5 checkpoint model since I don't have that much VRAM. When running Segment Anything, I get an out-of-memory error. Reading the error message, it seems the memory capacity is large enough, but the "PyTorch limit (set by user-supplied memory fraction)" is way too high. Any suggestions on how to solve this? I tried the very small "mobile_sam" model and it actually worked, but the mask was not precise at all.

  • @risunobushi_ai · 2 months ago

    Yeah, Mobile SAM is not great for the kind of result we want here. Since yours is a hardware limitation issue, if you haven't tried these yet, I would, in order:
    - turn off IPAdapter completely;
    - look for lightweight ControlNet depth models;
    - check if other ControlNets are more compact (e.g. if lineart has a lighter model than depth - you miss out on depth, but you still get the same spatial coordinates as the Photoshop picture);
    - reduce the latent image size.

  • @andree839 · 2 months ago

    @risunobushi_ai Thanks for the suggestions! I already tried most of them, and even if I reduce the latent image to extremely low values, I still get the error. It seems very hard to figure out. The entire message I get is:
    "Allocation on device 0 would exceed allowed memory. (out of memory)
    Currently allocated: 2.85 GiB
    Requested: 768.00 MiB
    Device limit: 4.00 GiB
    Free (according to CUDA): 0 bytes
    PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB"
    So the strange part is that the sum of the requested memory is less than the device limit.

  • @purerikki1 · 11 days ago

    Where can you find the image blend by mask node? I've cloned the WAS suite repository but it failed. Is there anywhere else to get it? Many thanks

  • @risunobushi_ai · 11 days ago

    Have you tried a "try fix" in the Manager for the WAS suite? I'm not at home right now and can't check if there are other blend by mask nodes (I'm sure there are, though).

  • @purerikki1 · 9 days ago

    @risunobushi_ai Many, many thanks, that solved it! However, I'm now trying to figure out how to connect my Photoshop to the ComfyUI node; it seems to have been upgraded. There is no password field in the node any longer, so I'm not sure how they talk to each other?

  • @risunobushi_ai · 9 days ago

    @purerikki1 the dev told me both nodes (old and new) should be available, but I can't find the old one myself in the updated repo. Anyway, you can downgrade by using "git checkout" with the version of the repo from before it got upgraded to the new nodes.
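    In practice that means opening a terminal inside the node's folder under custom_nodes, running "git log --oneline" to find the last commit before the new nodes landed, then running "git checkout" followed by that commit's hash, and restarting ComfyUI (the exact folder name and hash depend on your install, so treat this as a rough sketch).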

  • @thewebstylist · 2 months ago

    Just showing the UI at 1:30 is why I still haven’t chosen to use Stable D

  • @risunobushi_ai · 2 months ago

    Well, I do try my best to explain why and how to use each and every node, to help anyone understand what they do and how they can use them easily

  • @jkomno5809 · 2 months ago

    I followed the tutorial and built your workflow from scratch, but without the Photoshop node since I'm on macOS. I replaced it with a normal "load image" node that goes into the resizer just like the Photoshop node would. I get the error "SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)" ... can you help me out with it? ComfyUI Manager doesn't say that I have missing nodes.

  • @risunobushi_ai · 2 months ago

    What are you using instead of the Photoshop node? A load image node? At which node does the workflow throw an error (usually the one that remains highlighted when the queue stops)?

  • @jkomno5809 · 2 months ago

    @risunobushi_ai Error occurred when executing SAMModelLoader (segment anything): Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

  • @jkomno5809 · 2 months ago

    @risunobushi_ai I'm running this on an M1 Max with a 32-core GPU and 64 GB of RAM: Error occurred when executing SAMModelLoader (segment anything): Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

  • @risunobushi_ai · 2 months ago

    Do you mind uploading your json workflow file to pastebin or any other sharing tools? I’m going to see if I can replicate the issue on my MacBook

  • @jkomno5809 · 2 months ago

    @risunobushi_ai yes, of course! Can I have your Discord or something?

  • @xColdwarr · 1 month ago

    This doesn't work in Google Colab, but if it does for you, please help me.

  • @risunobushi_ai · 1 month ago

    I'm not versed in Google Colab, so I'm not sure whether Photoshop, which acts as a local server, would be able to connect to Colab. You'd need to find a way to forward Photoshop's remote connection to the Colab instance, I guess.

  • @zizhdizzabagus456 · 2 months ago

    The only problem is that it doesn't actually blend the lighting onto the subject.

  • @risunobushi_ai · 2 months ago

    Sometimes it does, sometimes it doesn't - the solution would be applying a normal map ControlNet as well, but that slows things down a bit, and normal maps extracted from 2D pictures are not great. We can only wait for better depth maps, so that the light can be interpreted better, or we can generate more pictures so that we get coherent lighting eventually. For example, sometimes it generates close-to-perfect shadows, and sometimes it doesn't. At its core, it's a non-deterministic approach to post-processing, so it will always have some limitations, but going forward I expect those to become less and less impactful.

  • @zizhdizzabagus456 · 2 months ago

    @risunobushi_ai Does it have to be a normal map? I thought depth and normal give pretty much the same results?

  • @risunobushi_ai · 2 months ago

    Long story short, the latest depth maps can do what normal maps would do, but since it’s all just an approximation of a 3D concept, we’re still not quite there for coherent *and* consistent lighting.

  • @zizhdizzabagus456 · 2 months ago

    @risunobushi_ai Oh, you mean that if I use a real one from a 3D editor it would make a difference?

  • @risunobushi_ai · 2 months ago

    @zizhdizzabagus456 It would and it wouldn't. Normal maps derived from 2D pictures are an approximation, so they're at best a bit scuffed. Also, apparently generative models weren't supposed to be able to "understand" normals. For a more in-depth analysis, take a look here: arxiv.org/abs/2311.17137

  • @brunosimon3368 · 5 days ago

    Thanks for this wonderful tutorial. I've downloaded your json file, but it doesn't work for me. After installing all the different files, ComfyUI blocks on the IPAdapter. I get the following message:
    IPAdapter model not found.
    File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 515, in load_models
    raise Exception("IPAdapter model not found.")
    If you have any idea, you're welcome 🙂

  • @risunobushi_ai · 3 days ago

    Have you installed all the models needed for IPAdapter to work? They're on the IPAdapter Plus GitHub: github.com/cubiq/ComfyUI_IPAdapter_plus
