Create mind-blowing AI RENDERINGS of your 3D animations! [Free Blender + SDXL]

I'll show you how you can render any 3D scene with AI!
If you like my work, please consider supporting me on Patreon: / mickmumpitz
And follow me on Twitter: / mickmumpitz
👉 You can download the free workflows & installation guide here: www.patreon.com/posts/free-wo...
👉 Download the Advanced Workflows and Bonus Files here: www.patreon.com/posts/advance...
👉 Here you can find a super long bonus tutorial for this workflow: www.patreon.com/posts/bonus-t...
SPECS
I tested this workflow on a machine with an RTX 2070 SUPER (8 GB VRAM). It works, but I highly recommend a better GPU for a smoother experience!
CHAPTERS
00:00 Intro
00:36 Creating a scene in Blender
02:09 Creating render passes
04:24 ComfyUI setup
06:13 Generating some images
08:44 Video workflow
10:58 Rendering a CG movie scene
SUMMARY:
For over a year I’ve been saying that AI is the future of rendering.
So today I want to share a workflow that lets you render any 3D scene with AI, in any style you want! You’ll also be able to write individual prompts for each part of the image, giving you full control over the final rendering, whether it’s an image or a video!
Along the way I want to create some amazing renderings for Instagram and TikTok, and also re-render a scene from my AI Pixar movie, where I combined different generative tools to create a full short film.
The secret of this AI rendering pipeline is the combination of the free 3D animation software Blender and ComfyUI, the node-based interface for Stable Diffusion. The special feature of this workflow is that we do not transform a finished rendering with AI (vid2vid); instead, we pass only the 3D data of the scene, in the form of render passes, to ControlNet. This means the final rendering is completely prompt-based, and we can write our own prompts for every area of the image.
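The per-region prompting described above hinges on a color-coded mask pass: each object gets a flat emissive color in Blender, and each color is later matched to its own prompt. As a minimal sketch of that matching step (this is an illustration with NumPy, not the video's actual ComfyUI "mask by color" node; the function name and palette are hypothetical), splitting such a pass into one binary mask per region can look like this:

```python
import numpy as np

def masks_from_color_pass(rgb, palette, tol=8):
    """Split a color-coded ID pass (H, W, 3 uint8) into one binary mask
    per region. `palette` maps region names to the flat RGB colors used
    for the emissive override materials; `tol` absorbs compression noise."""
    masks = {}
    for name, color in palette.items():
        diff = np.abs(rgb.astype(np.int16) - np.int16(color))
        masks[name] = np.all(diff <= tol, axis=-1)
    return masks

# Tiny synthetic "render": left half red (character), right half green (forest)
pass_img = np.zeros((4, 8, 3), dtype=np.uint8)
pass_img[:, :4] = (255, 0, 0)
pass_img[:, 4:] = (0, 255, 0)

palette = {"character": (255, 0, 0), "forest": (0, 255, 0)}
masks = masks_from_color_pass(pass_img, palette)
print(masks["character"].sum())  # 16 pixels belong to the character region
```

Each resulting boolean mask would then scope one prompt's conditioning to one area of the image, which is what makes the final render fully prompt-based.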

Comments: 269

  • @CGDive • 2 months ago

    This is very cool. That is how I imagine working with AI. Writing a prompt and hoping for the best doesn't excite me. Directing the AI is where it is at! Thanks for the video and resources!

  • @aegisgfx • 2 months ago

    Hmmmm.. do I bookmark this under "AI" or under "Blender"?

  • @VipRo1 • 2 months ago

    ahh 😂

  • @zephilde • 2 months ago

    Ha ha! The same for me! Both! ;)

  • @mgardner70 • 2 months ago

    YES!!😂

  • @johntnguyen1976 • 2 months ago

    The answer is Yes

  • @martinlentz-nielsen6361 • 2 months ago

    Insert “Why not both?” Meme

  • @anubismacc8165 • 2 months ago

I like the speed and versatility that this workflow using AI offers, but knowing and being able to do everything myself is more satisfying. This method can be used more in an abstract sense, where it doesn't matter as long as it looks good, but it can't be used to get very specific results. Well, it can, but it can take a long time to get the exact “shader” that we need instead of using a procedural or even hand-painted one. Furthermore, AI art is not really art, and before the haters start commenting, think about this argument: if you go to a pizzeria, order a pizza, and start taking out ingredients or adding extras, does that make you the cook? No, and by that same logic, going to a website or an app and asking for or removing “ingredients” that you want or don't want to see in an image doesn't make you an artist, and generating images with AI shouldn't be called art. AI is a good aid tool and should not be used to flood the market and artists' websites such as ArtStation, where there was a huge protest some time ago on this exact subject.

  • @bbrainstormer2036 • 1 month ago

Art isn't a pizza. Art (from the American Heritage Dictionary): "The conscious use of the imagination in the production of objects intended to be contemplated or appreciated as beautiful, as in the arrangement of forms, sounds, or words." Does that definition apply to AI art? I would say so. You can try to redefine the term "art" all you want, but at the end of the day, my definition of art comes from the dictionary, and yours exists in your head. It's very true that AI art is not completely original. But the thing many fail to realize is that so is just about everything else. Hell, one of the most famous paintings of all time is a can of soup. At the end of the day, all you're doing is shooting yourself in the foot. AI is here to stay, and could probably be extremely useful to artists if they let it. But instead, they've chosen to throw a hissy fit. It's really a shame. "AI is hurting artists' ability to..." yeah, until a "real artist" is using AI, and then the art community starts attacking them. It's honestly sad.

  • @BinaryDood • 1 month ago

@@bbrainstormer2036 "Imagination", therefore no: under the narrow definition you posited, AI-generated images do not fit it. They have no living author; you can't reduce the creative process to the abstractions of gradient descent and backpropagation. It ain't a tool by default: it can be used as one, but it can also be used as automation. Within the market of capital and attention, the latter has infinitely more of an incentive advantage than the former, so people using it as a "tool" will find the web completely saturated with fully automated content. Read Heidegger on technology. And remember, "technology is a useful tool but a dangerous master": considering the lack of meaningful input in AI-generated content, it falls more into the latter; the user is far more used than he uses.

  • @bbrainstormer2036 • 1 month ago

@@BinaryDood Just because the latter, to use your explanation, exists, doesn't mean that the former somehow doesn't, or should be ignored. And your point about "incentives" is confusing, when lots of people don't make art in the hopes of some external reward, instead making it simply because they want to be creative.

  • @BinaryDood • 1 month ago

@@bbrainstormer2036 indeed, and those people share the same world as those with extrinsic incentives, including resources. Stuff the individual can never be completely separate from. And since AI targets every field, it's not unfathomable to think too many will be left destitute, their own willingness to create genuinely being what deprives them of what is deemed productive. This does not create an environment for people to be educated on intrinsic motivation and positive liberty, hence being molded by a socioeconomic sphere of narrow and short-term reward functions: not the stuff which feeds creativity, but which clogs it.

  • @bbrainstormer2036 • 1 month ago

    @@BinaryDood At this point, it's a red herring. Whether AI will have a positive effect on art as a whole (which seems to be the point you're making here) is very different from claiming that the use of AI disqualifies a work from being art. I'd also push back against the arguments you've made, but honestly, I don't feel like spending this much time on a red herring

  • @vjanomolee • 2 months ago

Wow, very cool. Also love the way you just did a separate render with flat emissive materials to roll your own version of Cryptomatte!!! Brilliant

  • @MBPerdersen • 2 months ago

    Awesome video 😃👍 I am very excited to try this out! 😃

  • @AINEET • 2 months ago

    This really seems like the future of 3D work flows

  • @jkrwhy • 1 month ago

If it ever learns to finally stabilize and actually concentrate on keeping a real, consistent shot without warping designs so much, it might finally be of some use as a tool for indie creators.

  • @josevarela1593 • 1 month ago

    This channel is an absolute gem!

  • @FeAlmeida76 • 1 month ago

Congrats! Nice workflow, thanks for sharing

  • @vendacious • 2 months ago

I went to do this when I realized you could put 'None' for the preprocessor and use your own depth images, but was a little surprised how hard it was to get a depth image from Blender. The Map Range node idea is new, as well as the Freestyle mode for Canny nodes, which I've never seen before in any tut. Thank you for creating such useful SD x Blender content! I laughed out loud when you said ComfyUI was easy to install. Maybe easy for a genius like yourself... Then again, it's probably easier to use than Deforum, but I like Blender -> Deforum -> Video AI -> Resolve for this sort of thing, as Deforum has tools for controlling keyframing changes across time, interpolation, and diffusion from frame to frame, like color coherence and frame blending. Also, the new Forge version is super fast.

  • @boredcook777 • 1 month ago

Should I use Deforum for Cycles animation passes? I totally failed with Eevee and could not even manage to add missing nodes in ComfyUI 😅

  • @ilyanemihin6029 • 1 month ago

    Thanks, this is amazing!

  • @mgardner70 • 2 months ago

    Absolutely. Mind blowing. Bravo. Man. Bravo.

  • @cedrigo • 2 months ago

    Amazing! Thank you!

  • @stephantual • 1 month ago

    This is the way - the tricky part is temporal consistency. You can obtain it by breaking down your movement into parts, train motion LORAs and match them to the final output using a schedule.

  • @MasterFX2000 • 2 months ago

    Amazing work!

  • @Noplisu • 1 month ago

    This is amazing!

  • @clemensbretscher7798 • 1 month ago

    super cool, amazing work :)

  • @djdannyiLL • 2 months ago

    Amazing work.

  • @digitalbase9396 • 1 month ago

    Wow, nice workflow.

  • @FrostNova91 • 1 month ago

Been working very hard to learn Blender and understand it. As a 3D artist, as a short film writer, as an animator… then all this “AI does it for you and does it better” BS comes along. Feels like a huge slap in the face. Worst part is that I don’t understand it. Like… At all. Reply 💯 if you feel me 😔

  • @PACOBRYAN-cj9gf • 10 days ago

    hi

  • @blacksage81 • 15 hours ago

I can't tell you how to feel, but my suggestion is that you add one more hat, that of director. AI is as good as its training data, and the person using it.

  • @chrisking5924 • 1 month ago

    Amazing work, thank you for sharing! Does this workflow exist for Mac or only PC?

  • @KnightRiderGuy • 1 month ago

    It's so crazy how advanced this stuff is getting.

  • @ZooDinghy • 21 days ago

    Wow! This is amazing!

  • @scobelverse • 16 days ago

    Absolutely incredible video. I've never subscribed to someone's Patreon faster. Great job

  • @gridvid • 1 month ago

    That's what I also thought about... AI with control 😊 Thanks for sharing!

  • @BATCH3 • 2 months ago

    Amazing!! Thanks a lot!

  • @daydoing • 2 months ago

    This is amazing. keep it up!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

  • @mattensix9091 • 1 month ago

    Wow! Amazing workflow that you worked out here! Thanks for all the inspiration!

  • @jamesriley5057 • 2 months ago

    Guess I have to learn this if I am going to be able to live and keep my job.

  • @torq21 • 2 months ago

    Well, you'll need to learn SOMETHING new. This is going to be old news pretty soon I'd think.

  • @MrFrost-xh6rf • 2 months ago

    I wish they would keep Ai out of art too..

  • @Peetoo6 • 2 months ago

Of course yes, it applies to literally everything, and it's inevitable. But... the sooner you start learning and embracing it, the bigger advantage you will have :)

  • @jamesriley5057 • 2 months ago

    @@MrFrost-xh6rf seemed like the first thing they went for was art, and I'm surprised how quickly it got good at advanced 3D art. I admit I used meshy to create a strawberry shortcake model for me when I needed a food model. Can you imagine drawing realistic food in blender?

  • @alexmehler6765 • 1 month ago

I hope you understand that this is all just alpha-stage software and workflows, and you cannot keep up with the pace of AI advancements in the end.

  • @MrVanyaK • 2 months ago

Wow! It looks very promising!

  • @thesusguysubstar • 2 months ago

I must try this!! Every video is a hit!

  • @stanpittner313 • 1 month ago

    epic stuff ❤

  • @proto2149 • 2 months ago

    great job, well done :)

  • @wackywolven6192 • 1 month ago

The knowledge you gain with Blender might actually still be around and functional in a decade or two

  • @Creophagous • 1 month ago

    omg that is friggen awesome. :)

  • @marco2015p • 2 months ago

    Amazing !!

  • @davekite5690 • 2 months ago

    'really like this.... good work.

  • @LRSKWTKWSK • 1 month ago

Awesome workflow!

  • @maciejgarlicki742 • 2 months ago

    super, thanks

  • @betterlifeexe4378 • 25 days ago

If you're having trouble finding Freestyle, go back to the setting he set to Standard just before that, at 3:06. Make sure the Freestyle option is checked above that; then the Freestyle options he uses later will show up.

  • @Kumodot • 1 month ago

This method seems great for creating some NPR-style sequences. That last one with the forest and the moss looks amazing at 9:59

  • @CyberwizardProductions • 2 months ago

    nicely done

  • @herpderp6653 • 2 months ago

    really cool!

  • @foodjmadrigal • 1 month ago

    Amazing 👌

  • @andreimikhalenko1568 • 1 month ago

Hello, looks cool, but one question: was a separate image generated for each masked area and then combined together, or can ComfyUI understand the masks and the prompts for those masks and combine them in a single generation?

  • @DanielPartzsch • 1 month ago

    What is the motion model you're referring to at the end? Do you use animate diff for the animation? Thank you.

  • @psaicon0 • 2 months ago

Just a minor suggestion on your workflow… I would pass the end result as a latent to another KSampler with really low denoising to improve the final comp… Also, maybe SD 1.5 AnimateDiff with LCM is a more interesting approach for your followers, since it's lower bandwidth, better consistency, better ControlNets, etc.

  • @t480sLenovo-ci6wo • 1 month ago

    Hey Mick, great video! I've been learning Blender for a while and I'm interested in trying your approach of combining it with AI. I'm planning to build a new PC for this purpose. Do you mind sharing your PC's specifications? It would be helpful for me to know what kind of hardware can handle this type of workflow.

  • @dannylammy • 2 months ago

    Very cool, would the addition of normal passes help with the consistency of the render? Would adding in a lighting pass work as well?

  • @mickmumpitz • 2 months ago

Oh yes, a normal pass would work fantastically. I have actually tested this with SD 1.5. The problem is that there is no normal ControlNet for Stable Diffusion XL yet. The videos generated by SD 1.5 were extremely coherent, but unfortunately also pretty ugly. So I gave up on it and focused on SDXL for this video. But as soon as there is a ControlNet for it, this workflow should work even better

  • @alpaykasal2902 • 1 month ago

    Great work!

  • @aegisgfx • 2 months ago

    It would be nice to have a 'temporal coherence' checkbox in comfy that would force it to keep things consistent over the length of the video and not be so random.

  • @psaicon0 • 2 months ago

    That’s exactly what animatediff tries to do… it’s improving a lot lately

  • @audiogus2651 • 2 months ago

    Yah not quite mind blowing yet

  • @dennisfassbaender • 2 months ago

    Really very cool 🎉

  • @JessicaTravieso • 1 month ago

    Thanks!!!

  • @kimholder • 1 month ago

    ComfyUI Manager wasn't working until I put the extracted folder ComfyUI-Manager-main into the folder ComfyUI_windows_portable/ComfyUI/custom_nodes. I think it might be good to adjust the instructions on this point. Looking forward to playing with it!
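Based on this comment, the fix is simply a matter of where the extracted folder ends up. A hedged sketch of the layout (the `ComfyUI_windows_portable` path comes from the comment; adjust it to your own install, and the `mkdir` lines here only simulate an extracted download):

```shell
# Simulate the extracted ComfyUI-Manager download sitting next to the install
mkdir -p ComfyUI-Manager-main
mkdir -p ComfyUI_windows_portable/ComfyUI/custom_nodes

# Move the extracted folder into custom_nodes, where ComfyUI discovers nodes
mv ComfyUI-Manager-main ComfyUI_windows_portable/ComfyUI/custom_nodes/

# The manager folder should now be listed here
ls ComfyUI_windows_portable/ComfyUI/custom_nodes
```

After restarting ComfyUI, nodes placed under `custom_nodes` are picked up on launch.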

  • @SaintAngerFTW • 17 days ago

    Bruh... this is godlike

  • @theboyjohnny123 • 1 month ago

Hi man, thanks for the great tutorial! I have been struggling with "Regional conditioning by color mask": even when I try the simplest case of exporting 4 different colors from Photoshop, it is not generating the black-and-white mask based on the hex code. Any idea what the reason could be?

  • @korypeters2059 • 2 months ago

    I'm going to accomplish an awesome video with this by Morning 🌄 🙌 TY TY TY!!!!

  • @3dvfxprofessor • 2 months ago

    Great content! Thanks for the time and effort! Subscribed!

  • @binichnich8517 • 2 months ago

Incredibly great work, dear Mick! Your video edits alone are evidence of detailed diligence and precision. One of the most amazing channels in the YouTube ocean; fun to follow your workflows. Unfortunately, super ambitious and probably still too time-consuming for most enthusiasts. But, on a weekly basis, technology is evolving. The question is probably the skillful use and intellectual penetration of this potential. My sincere admiration for your perseverance, your diligence, and the excellent presentation of your results! Ultimately, that is what it is: the optimization of the interface between "imagination" and the "digital world". Thank you!!

  • @AIartIsrael • 1 month ago

    amazing!!!!

  • @jacquesbroquard • 2 months ago

Wow, this is amazing. So it doesn't mind coloring outside the lines (so to speak): I noticed the generation doesn't exactly match the alpha of the RGB mattes. Good to know! Keep it up!

  • @zephilde • 2 months ago

Awesome! I already thought about this, but you have finally done it... It works quite well, but the flickering is still there. You may have to integrate an animation workflow like AnimateDiff or something. The next step is to ask an LLM to code a Blender script or addon to automate the export of frames to a ComfyUI workflow ;)

  • @sams3493 • 2 months ago

Maybe also create a LoRA (or whatever, I'm new to this) for more consistent subjects

  • @zephilde • 1 month ago

@@sams3493 This won't be enough. LoRAs are good for still images and for character (or anything) consistency, but you can achieve that with IPAdapters too nowadays... Here the problem is temporal coherence between frames, and a video model is needed (AnimateDiff, SVD, ...), but I don't know exactly how to plug this into frames coming from Blender

  • @polatkemalakyuz2021 • 2 months ago

    Insane

  • @BobDoyleMedia • 2 months ago

    This was fantastic! Super inspiring. I don't use Blender, but the CONCEPT is what I needed. I'm sure there's got to be a way to make those render pass videos from within comfy from one video and not have to use image sequences. The segmentation of the elements and being able to prompt each of them separately is what I've been trying to find a simple demonstration of! Thanks!

  • @Malerghaba • 1 month ago

There is a node called OneFormer COCO Segmenter, which colors different objects in the video, and then you can use a Mask by Color node

  • @deysurya • 1 month ago

Where can we find the link to the free video workflow? Also, it would be great to have your initial input images for testing. Thanks much!!

  • @SolveForX • 10 days ago

    This is wonderful. I’m actually an illustrator who would like to aggressively speed up my workflow by putting my own designs in and using 3D models to generate (largely finished, line art only) images that I can then go on and tweak by hand for finishes. How would I, instead of pulling these prompts from just the general Ai system, use my own art work as the…Lora? I think it’s called? Do you have a tutorial on that using this same workflow? The workflow is perfect, but I just need an ability to upload my own pencils and inks. Thanks!

  • @micahhoang530 • 1 month ago

    Hey Mick! Is there any way to do this with an already rendered and composited C4D file? I want to use AI to try and test variations of render styles. How do I make the depth maps and linearts with AI and put those into your workflow? I don't use Blender.

  • @-SL • 2 months ago

    Brilliant Workflow! Thanks for sharing :)

  • @Bugulab • 1 month ago

2:50 The correct way of outputting passes like Z-depth is to render the image in linear format, not sRGB. You set it in the Color Management rollout panel. Thanks for the tutorial!
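This comment's point can be shown numerically: if the saved file has the sRGB view transform baked in, the depth values are no longer proportional to distance, which is what a depth ControlNet expects. A small sketch (the piecewise curve below is the standard sRGB transfer function; `map_range` mimics the remap the video does with Blender's Map Range node, and the depth values are made up for illustration):

```python
import numpy as np

def srgb_encode(x):
    """Standard sRGB transfer function, i.e. what an sRGB/'Standard'
    view transform bakes into the saved image."""
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1 / 2.4) - 0.055)

def map_range(depth, from_min, from_max):
    """Linear remap of raw Z values into 0..1, like Blender's Map Range node."""
    return np.clip((depth - from_min) / (from_max - from_min), 0.0, 1.0)

z = np.array([0.0, 2.5, 5.0, 10.0])  # illustrative camera-space depths (meters)
lin = map_range(z, 0.0, 10.0)         # linear 0..1: proportional to distance
enc = srgb_encode(lin)                # what you get if the pass is saved as sRGB

print(lin)  # [0.   0.25 0.5  1.  ]
print(enc)  # mid-range values pushed brighter: relative depth is distorted
```

The midpoint of the scene (0.5 in linear) encodes to roughly 0.74 under the sRGB curve, so the depth map no longer reads as "halfway", which is why the linear output the commenter describes is the safer choice.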

  • @MrRGBable • 2 months ago

    cool thanx!

  • @yaladdin86 • 2 months ago

    It's amazing but scary, this might end up putting a lot of lighting artists, comp artists and rendering artists out of work very soon.

  • @knightofniini7772 • 18 hours ago

90% will not be needed anymore, because 98% of people don't care whether an artist made it or AI. Apple, Google, and Amazon will all come out with an AI cloud thing where you can make games, movies, images, whatever, with a single click.

  • @kanto1971 • 1 month ago

Interesting. Does the final animation still have flickering in the background? Is there a max frame limitation?

  • @theflabergaster7394 • 14 days ago

    Hi ! Amazing tutorial but I get stuck when I need to find "ComfyUI Manager -> Install missing Custom Nodes" to install the required nodes

  • @High-Tech-Geek • 2 months ago

    Great workflow! Thanks for sharing!

  • @ProjectMindbot • 2 months ago

Does anyone know if there is a workflow I can use to extract the masks for segmentation? I'm having issues with the video builder workflow: I get depth and line art just fine, but I'm having issues with the others. Does anyone know of one that can extract by color as needed?

  • @aaagaming2023 • 1 month ago

You're a wizard, Harry!

  • @LayMeiMei • 2 months ago

Love your video. BTW, you can use LooseControl, which is able to turn cubes into various objects :D

  • @ianwilmoth • 1 month ago

    I've been saying this since last year- this is going to be the render pipeline of the future.

  • @swannschilling474 • 1 month ago

    This is by far the best workflow I have seen so far!! 🤩

  • @iami0 • 2 months ago

    Subscribed 💥

  • @leszekmielczarski • 2 months ago

    this is the way

  • @pixelagitato • 6 days ago

Hi, great tutorial! But I have one problem. When loading the graph, the following node types were not found: Integer. Why? I can't fix it. Thanks so much

  • @user-gs3iq7uj7o • 2 months ago

Does this work with AnimateDiff, for the consistency?

  • @SolveForX • 10 days ago

If I’m not mistaken, the Blender bit can be removed entirely, no? Can’t you do the same separation in Photoshop with an image? Or After Effects with a video?

  • @SimplyAlteringMaterials • 2 months ago

    Beautifully done!! 💯🔥🔥🔥🔥🔥

  • @pocongVsMe • 2 months ago

    awesomeee

  • @sergeygolubev3548 • 1 month ago

    WOW!!!!!

  • @m3ss88 • 1 month ago

    Have you picked up on the excitement surrounding VideoGPT? It's reshaping the landscape of video creativity.

  • @jeffyboi6969 • 1 month ago

I could see this being used for independent artists' music videos. Or media where less control is needed.

  • @poochyboi • 2 months ago

    Wow.

  • @janehates • 1 month ago

The short really captures the feeling of being chased in a nightmare

  • @user-jn5gc8vo3o • 1 month ago

Why do I get very terrible results with the same settings? With 8 steps in the KSampler node it's like one solid noise, but when I increase it to 100 it looks better, though still crispy, and it takes around 1 minute to render :(

  • @stickmanunivers5si9d • 1 month ago

OK, lookdev, lighting, texturing, compositing: you are fired !!!!!!!!!!!!!!!!!!!!!

  • @pixel325 • 2 months ago

    This is some black magic stuff right here, really awesome video! Great AI workflow to get out even more of what you want and interesting to see how deep into this AI stuff you really are, thanks for sharing! :)

  • @alperomeresin3326 • 19 days ago

Mr. Mickmumpitz, can ComfyUI render with the CPU too?

  • @tgtutorials • 10 days ago

    If Blender used ready-made AI models instead of calculating the physical behavior of lights and surfaces, it would certainly speed up rendering.

  • @TerenceKearns • 1 month ago

    Wow man. This workflow is brilliant. Well done.

  • @quemusic8 • 1 month ago

    An error message appears when importing color mask and depth map sequence frames: Error occurred when executing VHS_LoadImagesPath: No files in directory 'C:\tmp\color'. File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute What might I have done wrong?

  • @Luxflux777 • 1 month ago

Thanks, very informative. Building a PC for Blender right now, very excited to try this technique

  • @lexibyday9504 • 1 month ago

I've been hoping someone would make an AI that does this, because it would solve a couple of problems. The big two: 1) This AI does not use one prompt for the entire scene. That means you get a dog boy in a red jacket running in the woods, not a red jacket-dog-boy in a red dog boy jacket running in the red boy dog woods. Even the best single-prompt AIs I have used apply the same prompt to every part of a picture. This is probably why even the best AIs I have used sometimes produce hideous mutant monsters from simple prompts, like a request for an anime girl resulting in a woman with a second face in her chest. 2) Using the 3D models, I would not only be able to designate objects within the scene but also have multiple characters in the scene, and each character can have a full description without their appearances blending together, as per the first point. I have wanted these two features since AI art apps first started flooding the internet. Some honorable mentions: AIs I've used have struggled to understand certain anatomical concepts like "tusks", but with this, the existence of the tusk is practically baked into the scene. And this isn't a paid website app but a workflow on my own computer, which I'm totally going to try.

  • @pseudopod77 • 18 days ago

When hitting 'Queue Prompt' it throws a ton of errors. It would be great if you could go over how to solve those. I find that's the biggest issue with Comfy: downloading and installing missing models etc. takes 70% of the time.