Magnific AI Relight is Worse than Open Source

Science & Technology

Try RunComfy and run this workflow on the Cloud without any installation needed, with lightning fast GPUs!
Visit www.runcomfy.com/?ref=AndreaBaioni and get 10% off GPU time or subscriptions with the coupon below.
REDEMPTION INSTRUCTIONS: Sign in to RunComfy → Click your profile at the top right → Select Redeem a coupon.
COUPON CODE: RCABP10 (Expires July 31)
Workflow (RunComfy): www.runcomfy.com/comfyui-workflows/comfyui-product-relighting-workflow?ref=AndreaBaioni
Workflow (Local): openart.ai/workflows/nSqO2P2ZmDQGwohEbgl3
Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi
Relight better than Magnific AI and for free, locally or on the cloud via RunComfy!
(Install the missing nodes via the ComfyUI Manager, or use the links below:)
IC-Light comfyUI github: github.com/kijai/ComfyUI-IC-L...
IC-Light model (fc only, no need to use the fbc model): huggingface.co/lllyasviel/ic-...
GroundingDinoSAMSegment: github.com/storyicon/comfyui_...
SAM models: found in the same GroundingDinoSAMSegment GitHub repo above.
Model: most SD 1.5 checkpoints work; I'm using epiCRealism: civitai.com/models/25694/epic...
ControlNet auxiliary nodes: github.com/Fannovel16/comfyui...
IPAdapter Plus: github.com/cubiq/ComfyUI_IPAd...
Timestamps:
00:00 - Intro
01:10 - Workflow (Local)
03:50 - Magnific vs Mine (First Test, global illumination)
04:31 - Magnific vs Mine (Second Test, custom light mask)
06:45 - Workflow (Cloud, RunComfy)
16:22 - Workflow Deep Dive (How it works)
19:15 - Outro
#magnificai #stablediffusion #comfyui #comfyuitutorial #relight #iclight

Comments: 98

  • @risunobushi_ai
    22 days ago

    Try RunComfy and run this workflow on the Cloud without any installation needed, with lightning fast GPUs! Visit www.runcomfy.com/?ref=AndreaBaioni , and get 10% off for GPU time or subscriptions with the Coupon below. REDEMPTION INSTRUCTIONS: Sign in to RunComfy → Click your profile at the top right → Select Redeem a coupon. COUPON CODE: RCABP10 (Expires July 31) Workflow (RunComfy): www.runcomfy.com/comfyui-workflows/comfyui-product-relighting-workflow?ref=AndreaBaioni Workflow (Local): openart.ai/workflows/nSqO2P2ZmDQGwohEbgl3

  • @I.Am.Nobody

    19 days ago

    So, show us how to install it locally, for the folks who don't care about your bias toward your sponsor?

  • @risunobushi

    19 days ago

    @I.Am.Nobody I do that starting at minute 1:10 (running locally). You just need to download the workflow from the link in the description or in the comment you replied to, and import it into your ComfyUI instance.

  • @MaghrabyANO

    18 days ago

    @I.Am.Nobody Bro, Andrea isn't biased toward his sponsor at all. I email him and message him on social media for any inquiries, and he helps more than other developers/creators. He is not obligated to explain how to install ComfyUI locally, because you can Google/YouTube search it and you'll find tons of help. He is NOT trying to hide secret information to drive the masses into using RunComfy. It's just not worth wasting time in his 100th video about AI to explain how to install ComfyUI locally.

  • @johntnguyen1976

    15 days ago

    Would you be able to run something as bespoke and customized as LivePortrait on RunComfy?

  • 22 days ago

    Dropping the mic. Love to see a simplified UI for these workflows. That is the biggest selling point of the paid platforms - the convenience. Great showcase as usual Andrea.

  • @risunobushi_ai

    22 days ago

    Thanks! I've been tinkering around with the idea of a "Control Room" for a client who'll find it easier to have everything in one place, and while I'm still not sold on get / set nodes as they are not clear to newcomers, I think this is a good approach towards ease of use. And yeah, it is the main selling point of SaaS platforms right now. On the one hand, normal users don't want to see the node tangle in the backend, but on the other, those who watch this channel are kind of power users, so I need to strike a bit of a balance when designing workflows.

  • @agusdor1044
    22 days ago

    thank you Andrea!

  • @LeonhardKleinfeld
    22 days ago

    Got the workflow up and running in 5 minutes. Great work, thank you!

  • @risunobushi_ai

    22 days ago

    Great to know! I tried to structure it as the easiest possible solution I could come up with that still gives the user a degree of choice in the final results.

  • @bjj_sk5491
    22 days ago

    Nice work! I love it ❤

  • @risunobushi_ai

    22 days ago

    Thank you!

  • @DanielSchweinert
    19 days ago

    Just a suggestion: I created a couple of product shots and saw that the edges are not always perfect. The mask is good, but I realized that it generates, for example, a bottle that is slightly bigger than the product, and when it's blended you can see the edges of the original generation lying under it. Would it be possible to add a "Lama Inpaint" node to remove the generated bottle and create a cleanplate, and only after that paste or blend the product photo into the generation? Hope it makes sense. I'll try it myself, but I only began working with ComfyUI yesterday. LOL

  • @risunobushi_ai

    19 days ago

    Yeah, it would be possible, but then it becomes a VRAM issue (if you're running it locally), a cost issue (if you're running it on the cloud), and a market issue (if you're a SaaS that hopes your generations take 30 seconds to serve). The main issue with 1-click solutions is exactly that: there's a ceiling somewhere, and different users will have different needs / hardware specs / time / expectations. So it's all a matter of presenting a working, albeit limited, solution and then leaving the fine-tuning to the individual user.
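
    For anyone who wants to prototype Daniel's cleanplate-then-paste idea outside ComfyUI, here is a minimal Python sketch of the compositing step. OpenCV's Telea inpainting stands in for LaMa, and the file names are placeholder assumptions:

    ```python
    # Sketch of "remove the generated bottle, then paste the real product back".
    # cv2.inpaint (Telea) stands in for a LaMa inpaint node; all inputs are
    # assumed to share the same resolution.
    import cv2
    import numpy as np

    generated = cv2.imread("relit_generation.png")   # full AI generation
    product   = cv2.imread("product.png")            # original product shot
    mask      = cv2.imread("product_mask.png", cv2.IMREAD_GRAYSCALE)

    # Dilate the mask so inpainting also removes the halo of the generated object.
    dilated = cv2.dilate(mask, np.ones((15, 15), np.uint8))

    # Remove the generated object -> cleanplate.
    cleanplate = cv2.inpaint(generated, dilated, inpaintRadius=5,
                             flags=cv2.INPAINT_TELEA)

    # Blend the real product back in using the original (un-dilated) mask.
    alpha = (mask.astype(np.float32) / 255.0)[..., None]
    composite = (product * alpha + cleanplate * (1.0 - alpha)).astype(np.uint8)
    cv2.imwrite("composite.png", composite)
    ```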

  • @appdeveloper3895
    21 days ago

    I feel lucky to know your channel. It is really painful that credit for this amazing work goes to someone else, and they are even making money out of it. On top of that, they are not doing it as well as you did. And I think they will improve their product after watching your video, without even crediting you or anyone involved in the open source community. Thank you for all the work you do.

  • @risunobushi_ai

    20 days ago

    Thank you for the super kind words!

  • @johntnguyen1976
    20 days ago

    Wonderful! Your channel keeps getting more and more useful by the day (and you were useful from day one).

  • @risunobushi_ai

    20 days ago

    Thank you!

  • @TheRoomcleaner
    22 days ago

    Love the salt. Video was hilarious and informative 👍

  • @risunobushi_ai

    22 days ago

    I'm allowing myself one salty, personal video every four months, as a treat :)

  • @mikelaing8001
    20 days ago

    I tried Magnific and it was terrible; I wondered if I'd missed something, tbh.

  • @ted328
    20 days ago

    This channel is a gift to creatives and artists everywhere. Can't thank you enough.

  • @risunobushi_ai

    20 days ago

    Thank you so much!

  • @wholeness
    20 days ago

    Nice! Now, is there a local Magnific/Krea upscaler flow you know of that can produce similar results? This would be a video we are all looking for!

  • @risunobushi_ai

    20 days ago

    Praise where it's due: there is no open source upscaler I've found that's as good as Magnific, tbh.

  • @hoangucmanh299
    14 days ago

    How can I make it generate a new background based on a prompt and make sure it's suitable for the foreground?

  • @aminebenboubker

    14 days ago

    Looking for this answer too. Please enlighten us, Andrea! Fantastic job, by the way.

  • @risunobushi_ai

    13 days ago

    This workflow in particular uses a reference image alongside a prompt for generating the background. For pure prompting, without reference images, you'd need to set the denoise to 1 at all times and then disable the IPAdapter responsible for letting the reference background influence the generation.
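
    Those two changes can also be scripted against the exported workflow JSON instead of clicked through in the UI. A hypothetical sketch, assuming ComfyUI's saved-workflow format (a top-level "nodes" list, denoise as the KSampler's last widget value, and "mode": 4 marking a bypassed node); verify the field layout against your own export:

    ```python
    # Flip the workflow to pure prompting: denoise to 1, IPAdapter bypassed.
    import json

    with open("relight_workflow.json") as f:
        wf = json.load(f)

    for node in wf["nodes"]:
        if node.get("type") == "KSampler":
            node["widgets_values"][-1] = 1.0   # denoise is the last KSampler widget
        if "IPAdapter" in node.get("type", ""):
            node["mode"] = 4                   # 4 = bypass in ComfyUI's graph format

    with open("relight_workflow_prompt_only.json", "w") as f:
        json.dump(wf, f, indent=2)
    ```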

  • @MaghrabyANO
    18 days ago

    Another question: in the Regenerator box you get two results, right? One of them is AI-generated, and the other uses the object image masked over the AI-generated object. The masked/overlaid object is usually pixelated for me. I'm not sure why, but maybe it's because the input object resolution is 768x512 and the generated outcome is 1536x1024, so the image probably got stretched and pixelated. So how do I keep the same image size as the object, with no resizing needed?

  • @MaghrabyANO

    18 days ago

    Alright, I retried and found out that adding a light mask makes the workflow run through to its Preserve Details box. But it still stretches (pixelates) the object; how can I avoid that?

  • @risunobushi_ai

    18 days ago

    Is your source image 768x512?

  • @MaghrabyANO

    18 days ago

    @risunobushi_ai Yes, my source image is 768x512.

  • @johnyoung4409
    17 hours ago

    Would it work for portraits?

  • @user-nd7hk6vp6q
    19 days ago

    Does this work for people too? Let's say I want to change the background or place a person on a new background; would it work?

  • @risunobushi_ai

    19 days ago

    It can work for people, but full body shots run into a hard limit in the actual number of pixels that account for fine details; those details get lost in the Detail Preservation stage. And people tend to notice inconsistencies a bit less with products than with people. It works pretty well for close-up portraits and half body shots, but it'd need a much higher resolution, and not as much relighting, for full bodies.

  • @yuvish00
    11 days ago

    Hi Andrea, great workflow! I tested it with a bottle of perfume and the forest background, and the result was not so good: the size of the perfume with respect to the forest background was not proportional. Any suggestions for how to improve this? Thanks!

  • @risunobushi_ai

    11 days ago

    Relative scale is always an issue with diffusion models. If the generation has no way of knowing the size of the subject relative to the background, you're basically rolling dice every time you generate. That's why using a background that is "close enough" in scale to the picture you want to get, and setting a denoise lower than 1, usually helps. But yeah, the model needs some sort of guidance to understand and enforce scale in some way.

  • @yuvish00

    11 days ago

    @risunobushi_ai Gotcha, I understand. So even if I say "perfume bottle" in the CLIP text, it's not going to help?

  • @vincema4018
    18 days ago

    One question: if I want to upscale the output image, should I insert the Ultimate SD Upscaler before or after the frequency separation and color matching nodes?

  • @risunobushi_ai

    18 days ago

    It depends on whether your starting image is bigger than the resulting image. If it is, you should try to hold on to as many details as possible from the original, so you'd want to upscale before the FS and color matching. If it isn't, you can just do that after, and then if the upscaler generates some details you don't want, you can do a new FS using the details from the original (upscaled).

  • @vincema4018

    18 days ago

    @risunobushi_ai Thanks Andrea, that's a very practical suggestion. Let me work it out and add it into your workflow. I think most product images have a much higher resolution than the resulting image, so it's better to upscale them before the FS and color matching. But is it necessary to resize the original image to the upscaled resolution before conducting FS and color matching? Hmm... I think color matching may still be okay with higher or lower resolutions, but FS may require the same resolution?

  • @risunobushi_ai

    18 days ago

    Everything that passes through a "blend image by mask" node needs to be at the same res, otherwise you get a size mismatch error. What you'd do is bisect the resizing at the beginning: keep a higher-res copy of the original for later use after upscaling, and a lower-res copy for all the regen / relight ops.
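
    Outside a node graph, that two-branch resize looks roughly like this. A minimal PIL sketch, with sizes and file names as placeholder assumptions:

    ```python
    # Bisect the resize: a low-res branch for regen/relight, a high-res branch
    # kept aside for detail recovery after upscaling. Sizes are illustrative.
    from PIL import Image

    original = Image.open("product.png")

    low_res_branch  = original.resize((768, 512), Image.LANCZOS)  # regen / relight ops
    high_res_branch = original.copy()                             # detail source later

    def blend_by_mask(a: Image.Image, b: Image.Image, mask: Image.Image) -> Image.Image:
        # Mirrors the "blend image by mask" constraint: all inputs must match in size.
        assert a.size == b.size == mask.size, "size mismatch: resize inputs first"
        return Image.composite(a, b, mask.convert("L"))
    ```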

  • @veenurohan3267
    15 days ago

    Hey Andrea, do you know where this node comes from: "class_type": "Float"? I can't locate any node on GitHub that provides a Float node.

  • @risunobushi_ai

    15 days ago

    Can you check at which node the process stops? Usually it's circled in purple.

  • @veenurohan3267

    11 days ago

    Thanks a lot for the tutorial

  • @PaulVang-vf7fm
    19 days ago

    Is this Illya person real? They've made literally 90% of all Stable Diffusion extensions/apps. Dude has coding superpowers.

  • @risunobushi_ai

    19 days ago

    I know, right? Illya is a godsend, more so because most of the time they release a sandbox that users can then apply to a ton of different things, not just a single-case, one-use thing.

  • @MaghrabyANO
    18 days ago

    I tried using your genius workflow (thanks for it). It runs with no errors, but it doesn't generate a result in the "Results" box or in the "Option #1" / "Option #2" boxes, and the generated result (in the regeneration box) seems a bit cropped, not blended. I guess the whole "Preserve Details" box doesn't run at all, nor the custom/global light boxes. Hope you can help; I sent you an email with the screenshots.

  • @MaghrabyANO

    18 days ago

    Alright, never mind this whole inquiry: the boxes ran when I added a light mask. But how can I use the global light instead? I.e., I don't want to add a light mask, and I still want the workflow to run to completion (to the preservation area box).

  • @risunobushi_ai

    18 days ago

    Sorry, I was out of office. I replied, but it seems you already figured it out! Global light is set in the switch where the user inputs are, so while at least a placeholder image is needed for all three inputs, you can use global light by selecting "False" on the "did you add a light mask?" switch.

  • @obi-wan-afro
    21 days ago

    Excellent video, as always! ❤️

  • @risunobushi_ai

    21 days ago

    Thank you!

  • @DanielSchweinert
    22 days ago

    OK, I really want to give it a try. I just installed portable ComfyUI + Manager + missing nodes and loaded your workflow, but the screen is empty. Any clues?

  • @risunobushi_ai

    22 days ago

    That's weird, do you have any checkpoints installed / redirected to comfy? Did you get any messages when you imported the JSON?

  • @risunobushi_ai

    21 days ago

    Weirdly, I can't see your latest question, but I got notified via email, so: if you see the "x" error, you need to either load a missing image (even if you're not using it, use a placeholder; Comfy has no way to skip it even if it's bypassed by the switch), or you're using a non-JPEG, non-PNG format.

  • @DanielSchweinert

    21 days ago

    @risunobushi_ai Thank you, I figured it out: some stuff was missing ("bert-base-uncased"). Now it works, but the final image is always squished. I'll have to check the resolution on the nodes.

  • @risunobushi_ai

    21 days ago

    I've updated the workflow by remapping all the width & height connections; it seems like the int nodes were reverting to a slider input for some users.

  • @neoneil9377
    11 days ago

    Thanks for this amazing video; this is the best AI content channel for professionals. Just one question: does the relight support SDXL yet? Thanks in advance.

  • @risunobushi_ai

    8 days ago

    Hi! IC-Light is 1.5 only, but in this workflow, for example, we can use SDXL in the first generation phase (for the background) and let 1.5 handle only the relighting.

  • @dropLove_
    22 days ago

    Appreciate you and your work and your workflow.

  • @risunobushi_ai

    21 days ago

    Thank you!

  • @yuvish00
    4 days ago

    P.S. Can our final image be the same size as our background image?

  • @risunobushi_ai

    4 days ago

    Hi! No, the way this workflow works is by using the background image as a reference for an IPAdapter pipeline; it's not using it as a proper background by itself. So the aspect ratio and dimensions, as well as the positioning of the subject relative to the background, are set by the subject image.

  • @sellertokerbo
    21 days ago

    As a beginner, I really appreciate the clarity of the explanation! I'm gonna try this one for sure!

  • @risunobushi_ai

    20 days ago

    Thank you! I try to always explain as much as I can without becoming too boring.

  • @ismgroov4094
    20 days ago

    thanks sir

  • @denisquarte7177
    22 days ago

    Nice, I came across your workflow last weekend and was about to experiment with some things, e.g. you didn't subtract the low freq from the image but instead added it inverted at 50%. Still not sure why, though. But before I tinker needlessly, I'll take a look at what you've already cooked. Thanks a lot for sharing, and thanks as well to the people helping you develop this.

  • @risunobushi_ai

    22 days ago

    Thanks! If you're talking about a previous version, where we were using various math nodes to brute-force frequency separation, we moved away from that, and I wrote a frequency separation node that handles the HSV apply method like in PS. No more need for weird math; it's all handled with Python in the custom nodes.
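
    For reference, a minimal numpy sketch of the frequency separation being discussed; this is an illustration of the technique, not the actual node's code (which applies the detail layer through HSV). The 0.5 offset is the "added inverted at 50%" linear-light encoding mentioned in the comment above:

    ```python
    # Frequency separation sketch: low = blur, high = residual detail encoded
    # around mid-gray so it fits in a displayable 0..1 image.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def frequency_separate(img: np.ndarray, sigma: float = 6.0):
        """img: float32 RGB array in 0..1, shaped HxWxC."""
        low = gaussian_filter(img, sigma=(sigma, sigma, 0))  # blur spatial axes only
        high = 0.5 + (img - low) * 0.5                       # linear-light detail layer
        return low, high

    def recombine(low: np.ndarray, high: np.ndarray) -> np.ndarray:
        # Inverse of the encoding: low + 2 * (high - 0.5) recovers the original.
        return np.clip(low + (high - 0.5) * 2.0, 0.0, 1.0)
    ```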

  • @denisquarte7177

    22 days ago

    @risunobushi_ai Just took a look at it, and yes, big improvement. I'm almost sad that I don't have any need to do that myself anymore 😋. But this is a feature of open source: no matter the problem, there is a high chance someone else already ran into the same issue and figured it out. Good job.

  • @risunobushi_ai

    22 days ago

    You can still make it better! For example, my frequency separation node oversharpens the final image by something like 1-2%. I can't figure out why; maybe you can.

  • @denisquarte7177

    22 days ago

    @risunobushi_ai Well, my first guess would be that your high frequency separation is now so good that it leads to overemphasis, but I will surely play around with it anyway :)

  • @denisquarte7177

    22 days ago

    @risunobushi_ai Tried something quick and dirty: upscale the high frequency 4x with Lanczos, blur with a 2px radius, rescale 0.25x. Not perfect, but helpful.
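
    A quick PIL rendering of that softening recipe, assuming the high-frequency layer has been saved out as an image (file names are placeholders):

    ```python
    # Soften an oversharpened high-frequency layer: upscale 4x (Lanczos),
    # blur 2px at the enlarged scale, then downscale back to the original size.
    from PIL import Image, ImageFilter

    high = Image.open("high_frequency_layer.png")
    w, h = high.size

    softened = (
        high.resize((w * 4, h * 4), Image.LANCZOS)
            .filter(ImageFilter.GaussianBlur(radius=2))
            .resize((w, h), Image.LANCZOS)
    )
    softened.save("high_frequency_softened.png")
    ```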

  • @agusdor1044
    1 day ago

    Hey Andrea, thank you so much for sharing all this valuable knowledge with us! I've been following you closely for a while now, and I was wondering if you could recommend any websites where I can find all these new nodes you use, so I can dig into them and apply them to real-life tasks like you do. I know many have usage restrictions due to commercial rights, and I've noticed from some comments you've responded to that you focus on using nodes, models, and tools that allow commercial use. I don't have the knowledge or infrastructure to use them for that purpose, but it would be really helpful to know if there are better nodes out there (even if they have commercial restrictions), since they would be excellent for my learning, studies, and hobbies. Thanks FOREVER!

  • @risunobushi_ai

    1 day ago

    Hi! Thank you! Unfortunately, I don't have any better tools for finding out about nodes than the rest of us: the Manager, Reddit, GitHub, LinkedIn, and sometimes Discord (but I don't like Discord). And in order to know about licenses, what I do is look at the GitHub license for each set of nodes and the licenses of the models they use. If something looks too good to be free for commercial use, I double-check.

  • @sb6934
    20 days ago

    Thanks!

  • @hartmanpeter
    21 days ago

    I was just about to subscribe to Magnific. Thank you!

  • @risunobushi_ai

    20 days ago

    Magnific is still pretty great for their upscaler; it's the best around, and I've found no open source alternative that's as good as theirs, so it might be worth subbing just for that. But relight is not where it's at.

  • @hartmanpeter

    20 days ago

    @risunobushi_ai I find that the upscaler changes the subject too much. The upscaler in ComfyUI suits my needs better. I was going to sub because the Relight feature was the added value I needed. I'm sure I'll sub in the future once I find a business use, but for now, I'm a happy camper. Thanks again.

  • @AshT8524
    22 days ago

    I'm early

  • @thewebstylist
    20 days ago

    Magnific makes it sooo easy, though. But it's overrated; of course, they only showcase their best-of-the-best examples.

  • @risunobushi_ai

    20 days ago

    Yeah, UX is paramount, and I'd honestly be inclined to let subpar products go their merry way if the issues weren't so glaring regardless of ease of use.

  • @Kal-el23
    17 days ago

    Would love to see a few more real-world or useful examples, such as with people instead of a Roomba lol

  • @risunobushi_ai

    17 days ago

    I focused on products instead of people because people relighting is a very niche market, while product relighting for e-commerce purposes is a trillion-dollar industry. But yeah, it can work with people too, with the limitations we found here: kzread.info/dash/bejne/c3-C3NeBnsu1ks4.html

  • @Kal-el23

    17 days ago

    @risunobushi_ai I get you. I suppose it depends on what industry you're in. If you're a portrait or composite photographer, you might find scene transfer or relighting very useful.

  • @ok-pro
    15 days ago

    The worst KZread channel ever. The reason: you don't show clear examples and a clear comparison at the beginning of the video. You should show us at least five examples. Very bad, pro.

  • @risunobushi_ai

    15 days ago

    Thanks for the feedback, although you could choose your words a bit better next time. I'm rather new to KZread (I've been doing this for three months now, not a lot), and I'll try to show more examples at the beginning next time.

  • @ok-pro

    15 days ago

    @risunobushi_ai I apologize if what I said seemed inappropriate; I did not mean it that way. My intention was constructive criticism. Good luck with the next videos!

  • @risunobushi_ai

    15 days ago

    No worries, it was valid feedback after all!

  • @aliebrahimzade491
    17 days ago

    Wow, it's an amazing workflow!
