Andrea Baioni

I'm a fashion and editorial photographer, and a mixed media artist.

This channel is dedicated to generative AI R&D, applied to fashion photography and other creative fields.
Tutorials, research, random projects and more.

Comments

  • @AbdullahCross25 · 2 hours ago

    Nice video! Have you considered adding a custom background option?

  • @risunobushi_ai · 1 hour ago

    The background generation portion of this live originates exactly from the lackluster background feature that's already integrated into the IC-Light node. In my previous video, I talked about how using that feature results in a color cast that I personally don't like, so circumventing that by generating / applying a background, reapplying the original subject, and only then relighting is a better option overall :)
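    If it helps, here's a minimal sketch of that order of operations (PIL + rembg, with placeholder file names; the final relight step just stands in for the IC-Light pass in the workflow):

    ```python
    # Hedged sketch: cut the subject out, composite it over the generated
    # background, and only then hand the result to the relight pass, so the
    # background generation's color cast never touches the subject.
    from PIL import Image
    from rembg import remove  # same job as the RemBG node in the workflow

    subject = Image.open("subject.png").convert("RGBA")          # placeholder input
    background = Image.open("generated_bg.png").convert("RGBA")  # placeholder, pre-generated

    cutout = remove(subject)            # subject with a transparent background
    background.alpha_composite(cutout)  # assumes both images share one resolution
    background.convert("RGB").save("composite_for_relight.png")
    # composite_for_relight.png is what would then go through IC-Light
    ```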

  • @ronnykhalil · 2 hours ago

    brilliant. thank you!

  • @remaztered · 7 hours ago

    Great video! But I have a problem with the RemBG node - how can I install it?

  • @risunobushi_ai · 7 hours ago

    You can either find the repo in the Manager or on GitHub, or just drag and drop the workflow JSON file into a ComfyUI instance and install the missing nodes from the Manager. Let me know if that works for you.

  • @yangchen-zd9zl · 7 hours ago

    Hello, I am a ComfyUI beginner. When I used your workflow, I found that the light and shadow cannot be previewed in real time, and when I relight a previously generated photo, generation becomes very slow and the system reports an error: WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3])

  • @risunobushi_ai · 4 hours ago

    Sorry, but I'll have to ask a few questions. What OS are you on? Are you using an SD 1.5 model or an SDXL model? Are you using the right IC-Light model for the scene you're trying to replicate (fbc for background relight, fc for mask-based relight)?
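    In case it helps with debugging: that shape mismatch usually means the IC-Light patch and the base checkpoint disagree on the UNet's input channels. A rough sketch for inspecting what a given file expects (the file name and exact key naming are assumptions):

    ```python
    # Hedged sketch: check how many input channels an IC-Light patch expects
    # before merging it onto a base SD 1.5 UNet.
    from safetensors.torch import load_file

    state_dict = load_file("iclight_sd15_fc.safetensors")  # assumed local path
    for key, tensor in state_dict.items():
        if "input_blocks.0.0.weight" in key:
            # Base SD 1.5 has 4 input channels; the IC-Light variants expect
            # more, for the extra conditioning latents. So a [320, 8, 3, 3] vs
            # [320, 4, 3, 3] mismatch means the patch wasn't merged against
            # the model it was made for.
            print(key, tuple(tensor.shape))
    ```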

  • @aynrandom3004 · 14 hours ago

    Thank you for explaining the actual workflow and the function of every node. I also like the mask editor trick. Just wondering why some of my images also change after the lighting is applied? Sometimes there are minimal changes to the eyes, face, etc.

  • @risunobushi_ai · 13 hours ago

    Thanks for the kind words. To put it simply, the main issue lies in the CFG value. Usually you'd want a higher CFG value in order to get better prompt adherence. Here, instead of words in the prompt, we have an image being "transposed" via what I think is an instruct pix2pix process on top of the light latent. Now, I'm not an expert on instruct pix2pix workflows, since it came out at a moment in time when I was tinkering with other AI stuff, but from my (limited) testing, it seems like the lower the CFG, the more the resulting image adheres to the starting image. In some cases, as we'll see today on my livestream, a CFG around 1.2-1.5 is needed to preserve the original colors and details.
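    For the technically inclined, the standard classifier-free guidance combination shows why values near 1.0 stay faithful; a minimal sketch (variable names are illustrative, not the actual node internals):

    ```python
    import torch

    def cfg_combine(uncond: torch.Tensor, cond: torch.Tensor, cfg: float) -> torch.Tensor:
        # Classifier-free guidance extrapolates from the unconditional
        # prediction toward the conditional one. At cfg=1.0 this returns the
        # conditional prediction unchanged; larger values amplify the
        # conditioning (here, the relight signal), pushing the result further
        # from the source image's colors and details.
        return uncond + cfg * (cond - uncond)

    # cfg around 1.2-1.5 barely amplifies the guidance, which is why it
    # preserves the original subject better than the usual 7-8 range.
    ```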

  • @aynrandom3004 · 3 hours ago

    @risunobushi_ai Thank you! Lowering the CFG value worked. :D

  • @mohammednasr7422 · 1 day ago

    Hi dear Andrea Baioni, I am very interested in mastering ComfyUI and was wondering if you could recommend any courses or resources for learning it. I would be very grateful for your advice.

  • @risunobushi_ai · 13 hours ago

    Hey there! I'm not aware of paid ComfyUI courses (and I honestly wouldn't pay for them, since most, if not all, of the information needed is freely available either here or on GitHub). If you want to start from the basics, you can start either here (my first video, about installing ComfyUI and running your first generations): kzread.info/dash/bejne/dXhlu66GedGslMY.html or look up a multi-video basics course, like this playlist from Olivio: kzread.info/dash/bejne/foKDzs1xn92Xnrw.html

  • @PierreGrenet-ty4tc · 1 day ago

    This is a great tutorial, thank you! ...but how do I use IC-Light with the SD web UI? I have just installed it but it doesn't appear anywhere 😒😒 Could you help?

  • @risunobushi_ai · 1 day ago

    Uh, I was sure there was an automatic1111 plugin already released; I must have misread the documentation here: github.com/lllyasviel/IC-Light Have you tried the Gradio implementation?

  • @JavierCamacho · 1 day ago

    Sorry to bother you, I'm stuck in ComfyUI. I need to add AI people to my real images. I have a place where I need to add people, to make it look like there's someone there and not an empty space. I've looked around but came up short. Can you point me in the right direction?

  • @risunobushi_ai · 1 day ago

    Hey! You might be interested in something like this: www.reddit.com/r/comfyui/comments/1bxos86/genfill_generative_fill_in_comfy_updated/

  • @JavierCamacho · 22 hours ago

    @risunobushi_ai I'll give it a try. Thanks

  • @JavierCamacho · 22 hours ago

    @risunobushi_ai So I tried running it, but I have no idea what I'm supposed to do. Thanks anyway.

  • @cekuhnen · 1 day ago

    Is this similar to tools like Krea, which render so incredibly fast?

  • @risunobushi_ai · 1 day ago

    I haven’t used Krea at all, so I can’t be of much help there, sorry :/

  • @dreaminspirer · 1 day ago

    I would SEG her out from the close-up, then draft-composite her on the BG. This probably reduces the color cast :)

  • @risunobushi_ai · 1 day ago

    Yup, that's what I would do too. And maybe use a BW light map, based on the background remapped to low-ish white values, as a light source. I've been testing a few different ways to solve the background-as-a-light-source issues, and what I've found up till now is that the base, no-background solution is so good that the background option is almost not needed at all.

  • @houseofcontent3020 · 2 days ago

    Such a good video!

  • @houseofcontent3020 · 2 days ago

    I'm trying to work with the background and foreground image mix workflow you shared, and I keep getting errors even though I carefully followed your video step by step. Wondering if there's a way to chat with you and ask a few questions. Would really appreciate it :) Are you on Discord?

  • @risunobushi_ai · 2 days ago

    I'm sorry, but I don't usually do one-on-ones. The only error screens I've seen in testing are due to mismatched models. Are you using a 1.5 model with the correct IC-Light model, i.e. FC for no background, FBC for background?

  • @houseofcontent3020 · 2 days ago

    @risunobushi_ai That was the problem. Wrong model! Thank you :)

  • @cycoboodah · 2 days ago

    The product I'm relighting changes drastically. It basically keeps the shape but introduces too much latent noise. I'm using your workflow without touching anything, but I'm getting very different results.

  • @risunobushi_ai · 2 days ago

    That's weird; in my testing I sometimes get some color shift, but most of the time the product remains the same. Do you mind sending me the product shot via email at [email protected]? I can run some tests on it and check what's wrong. If you can't or don't want to share the product, you could give me a description and I could try generating something similar, or look for something similar on the web that already exists.

  • @risunobushi_ai · 2 days ago

    Leaving this comment in case anyone else has issues: I tested their images and it works on my end. It just needed some work on the input values, mainly CFG and multiplier. In their setup, for example, a lower CFG (1.2-ish) was needed in order to preserve the colors of the source product.

  • @houseofcontent3020 · 2 days ago

    This is a great video! Thanks for sharing the info.

  • @antronero5970 · 3 days ago

    Number one

  • @dtamez6148 · 3 days ago

    Andrea, I really enjoyed your live stream and your interaction with those of us who were with you. However, this follow-up on the node, the technical aspects, and your insight as a photographer is outstanding. Excellent work!

  • @risunobushi_ai · 3 days ago

    Thank you! I’m glad to be of help!

  • @uzouzoigwe · 3 days ago

    Well explained and super useful for image composition. I expect that a small hurdle might come with reflective/shiny objects...

  • @risunobushi_ai · 3 days ago

    I'll be honest, I haven't tested it yet with transparent and reflective surfaces; now I'm curious about it. But I expect it to have some issues with them for sure.

  • @JohanAlfort · 3 days ago

    Nice insight into this new workflow, super helpful as usual :) This opens up a whole lot of possibilities! Thanks and keep it up.

  • @risunobushi_ai · 3 days ago

    Yeah it does! I honestly believe this is insane for product photography.

  • @xxab-yg5zs · 3 days ago

    These videos are great, please keep them coming. I'm totally new to SD and Comfy; you actually make me believe it can be used in a professional, productive way.

  • @risunobushi_ai · 3 days ago

    It can definitely be used as a professional tool, it all depends on the how!

  • @StringerBell · 3 days ago

    Dude, I love your videos, but this ultra-closeup shot is super uncomfortable to watch. It's like you're entering my personal space :D It's weird and uncomfortable, but not in a good way. Don't you have a wider lens than 50mm?

  • @risunobushi_ai · 3 days ago

    The issue is that I don't have any more space behind the camera to compose a different shot, and if I use a wider angle, some parts of the room I don't want to share get into view. I'll think of something for the next ones!

  • @pranavahuja1796 · 3 days ago

    Things are getting so exciting🔥

  • @risunobushi_ai · 3 days ago

    Indeed they are!

  • @errioir · 4 days ago

    Thanks for the video, I've started to understand how to work with ComfyUI.

  • @pandelik3450 · 4 days ago

    Since you don't need to extract depth from photos but from Blender, you could just use the Blender compositor to save depth passes for all frames to a folder and then load them into ControlNet from that folder.

  • @risunobushi_ai · 4 days ago

    Yeah, I debated doing that, since I was made aware of it in a previous video, but ultimately I decided to go this way because my audience is more used to ComfyUI than Blender. I didn't want to overcomplicate things in Blender, even if they might seem easy to someone who's used to it, but exporting depth directly is definitely the better way to do it.
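    For anyone who wants to try the compositor route, here's a rough bpy sketch of the depth export (node and pass names follow recent Blender versions; the view layer name and output folder are assumptions):

    ```python
    import bpy

    scene = bpy.context.scene
    scene.view_layers["ViewLayer"].use_pass_z = True  # enable the depth (Z) pass
    scene.use_nodes = True
    tree = scene.node_tree
    tree.nodes.clear()

    render_layers = tree.nodes.new("CompositorNodeRLayers")
    normalize = tree.nodes.new("CompositorNodeNormalize")  # remap raw depth to 0-1
    file_out = tree.nodes.new("CompositorNodeOutputFile")
    file_out.base_path = "//depth_passes/"  # assumed folder, relative to the .blend

    # in Blender 2.8+ the pass output is named "Depth" (older versions used "Z")
    tree.links.new(render_layers.outputs["Depth"], normalize.inputs[0])
    tree.links.new(normalize.outputs[0], file_out.inputs[0])

    # rendering the animation now writes one depth image per frame,
    # ready to be batch-loaded into a ControlNet depth input
    bpy.ops.render.render(animation=True)
    ```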

  • @merion297 · 4 days ago

    I am seeing it but not believing it. It's incredible. Incredible is a weak word for it.

  • @Shri · 4 days ago

    This can be made way more efficient. For one: do away with the live screen share of Blender altogether, as it takes a lot of compute. Instead, just take screenshots or export an image of the scene on mouseup. Then have a custom ComfyUI node watch for changes to the export folder and re-render whenever the folder has a new file. The advantage of this method is that you have screenshots of all the various changes to the scene and can go back to any particular scene/pose you liked at any time. You have a history to move back and forth in within ComfyUI. It would be even cooler to save the image of the rendered output, so you keep all the metadata of the generated image (like seed, sampler config, etc.). That way you can correlate the saved screenshot with the generated image and know which image can be passed to the ControlNet to reproduce the same output (since you have saved both the input image and the generation metadata).
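    A minimal polling sketch of that watcher idea (the export folder and interval are assumptions; a real custom node would queue a ComfyUI prompt instead of printing):

    ```python
    import time
    from pathlib import Path

    WATCH_DIR = Path("blender_exports")  # assumed export folder
    POLL_SECONDS = 1.0

    def watch_for_new_renders() -> None:
        WATCH_DIR.mkdir(exist_ok=True)
        seen = set(WATCH_DIR.glob("*.png"))
        while True:
            current = set(WATCH_DIR.glob("*.png"))
            for new_file in sorted(current - seen):
                # a real node would wire new_file into the ControlNet
                # image input and trigger a generation here
                print(f"new scene export detected: {new_file}")
            seen = current
            time.sleep(POLL_SECONDS)

    if __name__ == "__main__":
        watch_for_new_renders()
    ```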

  • @risunobushi_ai · 4 days ago

    Yep, that's basically the workflow from my previous video (except we don't use a node that automatically expects new images, just a load image node loaded with the renders). For this one I wanted to automate the process as much as possible, regardless of how unoptimized it would end up. But yeah, I'm all for finding ways to optimize and make the workflow yours, so I agree there's room for improvement depending on what one wants to do with it!

  • @MrBaskins2010 · 4 days ago

    wow this is wild

  • @aysenkocakabak7703 · 4 days ago

    Interesting to watch, but I still haven't found the courage to try it out.

  • @realtourarchviz · 5 days ago

    Hi! Great work as usual. Please make something for architectural visualization, like making renders more realistic, upscaling, and adding people and other foreground/background elements.

  • @risunobushi_ai · 5 days ago

    I've been wanting to do some archviz stuff since I started doing YouTube; half of my friends are architects, so I keep hearing from them to do stuff about it. But the thing is, I'd like to do something usable in real-life scenarios, and so far all the feedback I've gotten from them is that while gen AI is somewhat useful, they'd like to get real (not just realistic) results, in order to bring render-like pictures with real materials etc. to clients. So I'm still trying to figure it all out.

  • @realtourarchviz · 5 days ago

    @risunobushi_ai I agree, but in post-production we don't really change anything on the structure or building itself; we just want to make the render more lively and realistic by adding foreground elements, people, cars... I think what AI can do here is make the people, cars, and trees more realistic by adding more detail, or place new ones by masking?

  • @realtourarchviz · 5 days ago

    We usually add trees and people in 3D, but they are very unrealistic.

  • @jasonkaehler4582 · 5 days ago

    Very cool! Any idea why it doesn't update when I rotate the view in Blender? I have to manually 'Set Area' each time for it to regenerate the CN images (depth etc.). I don't want to use "Live Run", just kick off renders in ComfyUI manually. I would expect it to regenerate the CN images each time, but it doesn't... any suggestions? Thanks, great stuff! (Oh, using the Lightning WF.)

  • @risunobushi_ai · 5 days ago

    I think the screen share node is set up to refresh only while Live Run is enabled. I had the same "issue" when I first started testing it, and I didn't realize that it wasn't actually refreshing the image unless one of two things happened: using live recording, or resetting an area. So I think there's no way around it.

  • @jasonkaehler4582 · 5 days ago

    @risunobushi_ai Ah, got it. Thanks for the fast reply. This is unfortunate, as the workflow is so close to being fluid/responsive... very cool nonetheless. I set up a switcher for two different models at the front, to move from Lightning to a better SDXL model more easily...

  • @GeorgeOu · 5 days ago

    What's the GUI for Stable Diffusion? ComfyUI?

  • @M4rt1nX · 5 days ago

    Yes, but there is Automatic1111 as well. Completely different, though.

  • @henryturner4281 · 6 days ago

    Great work bro, love your videos. Have you ever tried applying a LoRA trained on the clothing item you want with a workflow like this, for higher clothing accuracy? Or maybe warping the original image of the dress to fit the Blender model in Photoshop, then using ControlNets to keep the print on the dress consistent? Personally, I have been researching the best way to do e-com shoots created with SD. I get the highest-quality images with LoRAs trained on the clothing, combined with ControlNets fed lineart or canny of the original garment.

  • @risunobushi_ai · 6 days ago

    Yeah, I wanted to talk about it during the live but I forgot to - I think I talked about it a bit in the previous live. I think it's great for small e-coms, ads, and editorials, but it's not a solution that scales to mid- to large-sized e-coms. And the game right now is all about finding a scalable, one-size-fits-all solution to the problem. But for those kinds of things, it's great!

  • @M4rt1nX · 6 days ago

    I feel so rich when I create shots in my virtual studio.

  • @risunobushi_ai · 6 days ago

    Client called, said the lighting budget is too high. Please delete some area lights.

  • @M4rt1nX · 6 days ago

    @risunobushi_ai 🤣😂🤣 So funny. We should be the ones complaining about the electricity bills.

  • @mrJv2k7 · 6 days ago

    My popup isn't popping up... after the render there is no popup, no error, nothing. Tried on Firefox and Chrome.

  • @risunobushi_ai · 6 days ago

    What’s your OS?

  • @mrJv2k7 · 6 days ago

    @risunobushi_ai Windows 10

  • @mrJv2k7 · 6 days ago

    @risunobushi_ai Windows 10, Firefox latest

  • @risunobushi_ai · 5 days ago

    Sorry, I didn't see your answer. That's weird, and it's the second time I've gotten this feedback from a Windows user, where it should run without issues. I know the dev just updated his nodes, so I'm going to take a look at that next week. In the meantime, there's a bit of a troubleshooting guide in the description of my other video about it; check if it works for you: kzread.info/dash/bejne/aXtllLWafNermaQ.htmlsi=7ZHTgzB6Xg0SaTfP

  • @asheshsrivastava8921 · 6 days ago

    Hey, appreciate all the hard work! Definitely learned something from it!

  • @risunobushi_ai · 6 days ago

    Thank you! Happy you liked it

  • @Jacopo-jm3ch · 6 days ago

    I'm testing IDM-VTON to generate the "first step" image; it works quite well for this purpose. Too bad the ComfyUI version generates much worse results than the API version, and it has big problems with denim fabric.

  • @risunobushi_ai · 6 days ago

    IDM is probably the best "open" (very quote-unquote) VITON we have at the moment, and as you say, it's good as a first pass - but still, the limitations of the CLIPVision models used in IPAdapter are, in my opinion, not solvable regardless of how good a first step is or how big a resolution you use on a second step. A different approach to CLIPVision for VITONs must be taken in order to get the results we need. But then again, I might be completely wrong; it wouldn't be the first time!

  • @pranavahuja1796 · 6 days ago

    Hey brother, in my case the preview image node is not working; it does not show any output, although when I use the save image node the images are saved in the output folder of ComfyUI. Can you please help me with this? Edit: the preview method in the Manager is set to "none" (just like yours in the video).

  • @risunobushi_ai · 6 days ago

    That's weird, I've never seen that happen before. The preview method shouldn't have any impact on this, as it's just how the preview image is rendered ("none" is a faster one that bypasses some bells and whistles the other types have, but it's still a preview type). What OS are you on? Have you got the (IIRC) Impact Pack installed? If so, could you test the Preview Bridge node and check if you have the same issue with that type of preview node as well?

  • @pranavahuja1796 · 6 days ago

    @risunobushi_ai Yes, I have the Impact Pack installed, and the same issue occurs with the Preview Bridge node as well. I am using Windows 11, 16GB RAM, GTX 1650, 8GB VRAM.

  • @pranavahuja1796 · 6 days ago

    I am trying to figure out whether it can add some value to my workflow; I will then deploy it to some cloud service. If you think my PC specs are not good enough, could you please suggest a cloud service?

  • @risunobushi_ai · 6 days ago

    No, I was just trying to figure out if you had some non-standard hardware/OS config. I'm sorry I can't be of much help; I can't replicate the issue, and honestly, even by googling the symptoms I can't find a single similar issue :/

  • @xxab-yg5zs · 7 days ago

    Great watch, I hope you do more of this.

  • @risunobushi_ai · 7 days ago

    Thank you! I had a lot of fun doing it. I don't think it will be a weekly thing, but I'll do it again whenever I have some time to kill and experiment!

  • @grizzlybeaver528 · 7 days ago

    Great material, perfect pace, everything clearly explained. Thank you!!!

  • @risunobushi_ai · 7 days ago

    Thank you for the kind words!

  • @Jacopo-jm3ch · 7 days ago

    Hi Andrea, thanks for all your insights. Share your playlist with us too! :)

  • @risunobushi_ai · 7 days ago

    My playlist, as in the song in the background? That was a loop of a 4-minute song I generated on Udio lol, and I realize now that it was waaaay too loud in the beginning. But then again, first-ever stream, so live and learn!

  • @WhySoBroke · 7 days ago

    Amazing session!! Looking forward to the finished workflow amigo!

  • @risunobushi_ai · 7 days ago

    Thank you! The workflow might take a bit longer since it's a bit harder than expected, but I'll keep you all posted.

  • @fabiotgarcia2 · 7 days ago

    The preview pop-up is not working. Perhaps because the Photoshop plugin got updated, I don't know.

  • @risunobushi_ai · 7 days ago

    I'm not sure I've asked you before, but what OS are you on? Was it working for you before? Thanks for the heads up, I'll look into it. I've actually talked to the dev over the weekend, so I might get in touch with them as well.

  • @reimagineuniverse · 8 days ago

    Great way to steal other peoples work and make it look like you did it without learning any skills

  • @risunobushi_ai · 8 days ago

    If you're talking about the ethics of generative AI, we could discuss this for days. If you're talking about the workflow, I don't know what you're getting at, since I developed it myself, starting from Matteo's.

  • @moritzryser · 8 days ago

    dope

  • @eias3d · 8 days ago

    Morning Andrea! Cool workflow! Where can I find the LoRA "LCM_pytorch_lora_weight_15.safetensors"?

  • @risunobushi_ai · 8 days ago

    Argh, that's the only model I missed in the description! I'm adding it now. You can find it here: huggingface.co/latent-consistency/lcm-lora-sdv1-5
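    For anyone curious what that LoRA does outside ComfyUI, here's a hedged diffusers-side sketch of the same LCM LoRA (the base model repo id is an assumption, and the step count and guidance value are the usual LCM ballpark, not tested here):

    ```python
    import torch
    from diffusers import LCMScheduler, StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    # LCM sampling needs the matching scheduler plus the distilled LoRA weights
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
    pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

    # LCM LoRAs are distilled for very few steps and low guidance
    image = pipe("studio product shot", num_inference_steps=4, guidance_scale=1.0).images[0]
    image.save("lcm_test.png")
    ```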

  • @eias3d · 8 days ago

    @risunobushi_ai Hehe

  • @denisquarte7177 · 9 days ago

    Error: has no attribute shape, expected 3 but got 4, lATenT sIzE AARGH.

  • @gimperita3035 · 9 days ago

    Fantastic stuff! I own more 3D assets than I'm eager to admit, and using generative AI in this way was the idea from the beginning. I can't thank you enough, and of course Matteo as well.

  • @risunobushi_ai · 9 days ago

    ahah at least this is a good way to put those models to use! Glad you liked it!

  • @user-bj5cp8qy2p · 9 days ago

    Awesome workflow

  • @dannylammy · 9 days ago

    There's gotta be a better way to load those key frames, thanks!

  • @risunobushi_ai · 9 days ago

    Yep, there is. As I say in the video, it's either this or using a batch loader node that targets a folder, but for the sake of clarity in the explanation I'd rather have all nine frames on video.
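    A rough sketch of the batch-loading alternative (the folder name is an assumption; ComfyUI's batch loader nodes do the equivalent internally):

    ```python
    from pathlib import Path

    from PIL import Image

    FRAMES_DIR = Path("keyframes")  # assumed folder holding the nine renders

    def load_frames(folder: Path) -> list[Image.Image]:
        # sort by filename so frame order matches the render order,
        # then load everything as RGB for the downstream nodes
        return [Image.open(p).convert("RGB") for p in sorted(folder.glob("*.png"))]

    frames = load_frames(FRAMES_DIR)
    print(f"loaded {len(frames)} frames")
    ```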

  • @arong_ · 9 days ago

    Awesome stuff! Just wondering, how are you able to use IPAdapter Plus style transfer with an SD 1.5 model like you're using? I thought that wasn't possible, and it never works for me.

  • @risunobushi_ai · 9 days ago

    Huh, I've never actually had an issue with it. I tested it with both 1.5 and SDXL when it was first updated and I didn't encounter any errors. The only thing that comes to mind is that I have collected a ton of CLIPVision models over the past year, so maybe I have something that works with 1.5 by chance?

  • @arong_ · 9 days ago

    @risunobushi_ai OK, maybe. I remember Matteo mentioned in his IPAdapter update tutorial that it wouldn't work for 1.5, but maybe it works for some, and yes, maybe you have some special tool that unlocked it. Regardless, this is great stuff; I'm loving and learning a lot from your tutorials.

  • @emanuelec2704 · 10 days ago

    Fantastic! Keep them coming.

  • @risunobushi_ai · 10 days ago

    Thanks!