AI images to meshes / Stable Diffusion & Blender Tutorial

Entertainment

After my quick proof-of-concept experiment with this technique, I've had many requests to explain how I made these meshes and what Stable Diffusion actually does in this case. Here is your guide.
Zoe Depth model
huggingface.co/spaces/shariqf...
ShaderMap
shadermap.com/home/
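
A minimal local sketch of the depth step, for anyone who prefers scripting over the ZoeDepth space linked above (the model id and file names are assumptions; any depth-estimation checkpoint should work):

    # pip install transformers torch pillow
    from transformers import pipeline
    from PIL import Image

    # ZoeDepth checkpoint on the Hugging Face hub (assumed id; swap in any depth model)
    depth_estimator = pipeline("depth-estimation", model="Intel/zoedepth-nyu-kitti")

    image = Image.open("ai_generation.png").convert("RGB")  # your Stable Diffusion render
    result = depth_estimator(image)

    # result["depth"] is a PIL image; save it to use as the displacement map in Blender
    result["depth"].save("depth_map.png")
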
Background music: (me jamming on elektron)
• downtempo beats - elek...
Follow me on:
IG: / sashamartinsen
TW: / sashamartinsen

Comments: 261

  • @pygmalion8952
    @pygmalion8952 a year ago

    What is the purpose of this, though? It can be used for distant objects *maybe*, but there are easier ways to make those. For general-purpose assets, you really can't pass the quality standard of modern games with this tech, not to mention this is just the base color. And throw away aesthetic consistency between models too: AI either makes nearly identical images if you ask, or it just can't understand what you are trying to do at all. Plus, if you want symbolism in your game, there are additional steps to fix this, which I think is more cumbersome and boring than actually making the asset. I didn't even mention cinema, since these kinds of assets are pretty low quality even for games. (Just to add, it is still ethically questionable to use these in a profit-driven project.) Oh, one more thing: games usually require some procedurality in the textures for some of their assets, and this can't produce that flexibility either. The only thing that is beneficial is that depth map thing, I guess. That is kinda cool.

  • @digital-guts
    @digital-guts a year ago

    Yeah, of course, nobody here says that these models can be used in AAA games or cinema as is, and I'm not a brainless "AI bro" claiming they will be; I've worked in gamedev for a while myself. But there are fields for 3D graphics other than games and cinema, for example abstract psychedelic video art or music videos, heavily stylized indie games, maybe some surreal party poster, etc., know what I'm sayin'. As for cinema and gamedev, I think it can be used in some cases as kitbash parts for concept art, and with proper knowledge of how to build prompts and how to use custom-made LoRAs and stuff like that, you can get really consistent results with AI generations.

  • @luminousdragon
    @luminousdragon a year ago

    This is a proof of concept; it's brand new. The process 100% can be sped up and streamlined, with ways to get better results as AI art improves. The description of this very video says it's a proof of concept, and people were asking for details. This type of video is for professionals who want to explore different techniques, build off each other's work, stay informed about new techniques, and it's just interesting. For instance, I make digital art, and one thing I have been experimenting with is making a 3D environment and characters as close as possible in style to some AI art I've already generated, without taking very much time or effort, then rendering it as a video, then overlaying AI on top of it for a more cohesive look. This process could be very useful for that, for multiple reasons. First, if I'm using AI art to make the 3D models, they are going to mesh very well when I overlay the second set of AI art over the 3D render. Second, because the AI art is going to be overlaid on the 3D model, I don't really care if the 3D models don't look perfect; it's kinda irrelevant. Lastly, look at the game BattleBit, which went viral recently, or look at Among Us, or Minecraft. Not every game is aiming for amazing photorealism.

  • @AB-wf8ek
    @AB-wf8ek a year ago

    I think it's valid to criticize the quality of the output, but I think you miss the point if you think this is trying to be a replacement for the current traditional methods. It's just an experimental process playing around with what's currently available. It's called a creative process for a reason. A true artist enjoys figuring out new and unique ways of combining tools and processes, and this video is just an exercise in that. If you can't see the purpose of it, then just remove "creative" from anything you do.

  • @TaylorColpitts
    @TaylorColpitts a year ago

    Concept art - really great for populating giant scenes with lots of gack and set dressing

  • @Mrkrillis
    @Mrkrillis a year ago

    Thank you for asking this question as I wondered myself what this could be used for

  • @VincentNeemie
    @VincentNeemie a year ago

    I had this theory at the start of this year, when I noticed you could generate good displacement maps using ControlNets. Good to see someone putting it into practice.

  • @AtrusDesign
    @AtrusDesign 11 months ago

    It’s an old idea. I think many of us discover it sooner or later.

  • @orlybarad
    @orlybarad 4 months ago

    So, I've been diving deep into storytelling and creative videos lately. VideoGPT showed up, and it's like having this magical assistant that instantly enhances the quality of my content.

  • @referencetom1276
    @referencetom1276 a year ago

    For BG objects like murals on walls and ornaments, this can give a nice 2.5D feel. Maybe it can also speed up design, helping find form from a first idea.

  • @MordioMusic
    @MordioMusic 7 months ago

    usually I don't find such good music with these tutorials, cheers mate

  • @nswayze2218
    @nswayze2218 9 months ago

    My jaw literally dropped. This is incredible! Thank you!

  • @dmingod999
    @dmingod999 a year ago

    This can be a great process to use for a rough starter mesh that you can then refine

  • @pygmalion8952
    @pygmalion8952 a year ago

    I wrote a long comment on this kind of stuff here, but for this too, again: you can produce these maps from ordinary renders by artists, and either way it's ethically questionable if you don't change it and add your own twist.

  • @dmingod999
    @dmingod999 a year ago

    @@pygmalion8952 Sure, you can do this from other artists' work, but it's restrictive because you can only use what already exists. If you're generating the images with AI you have much more freedom: you can sketch your idea or use a whole bunch of other tools that are available to control the AI generation, then make the depth map and do this bit.

  • @Arvolve
    @Arvolve 10 months ago

    Very cool, thanks for sharing the workflow!

  • @oberdoofus
    @oberdoofus a year ago

    very interesting for concept generation - thanks for sharing! I'm assuming you can also upscale the various images in SD as well to maintain more 'closeup' detail...? Maybe with appropriate LORAs...

  • @ChrixB
    @ChrixB 9 months ago

    ok, I'm speechless... just wow!

  • @LeKhang98
    @LeKhang98 a year ago

    I think using Mirror is a nice idea, but it may not be applicable for all objects. How about using SD & LORA to create 2x2 images or 3x3 images of the same object from multiple different POVs, then connecting them together instead of using a mirror?

  • @games528
    @games528 a year ago

    You can't just plug the color data of a normal map texture into the Normal slot of the Principled BSDF; you need to put a "Normal Map" node in between.

  • @albertobalsalm7080
    @albertobalsalm7080 a year ago

    you can actually

  • @games528
    @games528 a year ago

    @@albertobalsalm7080 Yes but that will lead to horrible results. You can also plug it straight into roughness if you want.

  • @sashamartinsen
    @sashamartinsen a year ago

    Thanks, I missed that part while recording.

  • @AB-wf8ek
    @AB-wf8ek a year ago

    I don't use blender, but my guess for why this is, is that the color needs to be interpreted as linear for data processes, versus sRGB or whatever color profile is usually slapped on top of the image when rendering for your screen.

  • @EGP-Hub
    @EGP-Hub a year ago

    Also it needs to be set to non-colour
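
To make the fix above concrete, here is a minimal bpy sketch of the node wiring being discussed (the material name and image path are illustrative assumptions):

    import bpy

    mat = bpy.data.materials["AI_Mesh_Material"]   # hypothetical material name
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links

    bsdf = nodes["Principled BSDF"]

    # image texture holding the normal map; data maps must not be treated as sRGB
    normal_tex = nodes.new("ShaderNodeTexImage")
    normal_tex.image = bpy.data.images.load("/tmp/normal_map.png")  # hypothetical path
    normal_tex.image.colorspace_settings.name = "Non-Color"

    # the Normal Map node converts the tangent-space color data into a usable normal
    normal_map = nodes.new("ShaderNodeNormalMap")
    normal_map.inputs["Strength"].default_value = 1.0

    links.new(normal_tex.outputs["Color"], normal_map.inputs["Color"])
    links.new(normal_map.outputs["Normal"], bsdf.inputs["Normal"])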

  • @wizards-themagicalconcert5048
    @wizards-themagicalconcert5048 8 months ago

    Fantastic content and video mate, very useful, subbed! Keep it up!

  • @jaypetz
    @jaypetz a year ago

    This is really good, I like this workflow. Thanks for sharing.

  • @danelokikischdesign
    @danelokikischdesign 9 months ago

    Absolutely amazing! Thank you for the tutorial! :D

  • @tommythunder6578
    @tommythunder6578 7 months ago

    Thank you for this amazing tutorial!

  • @spooderderg4077
    @spooderderg4077 a year ago

    I'm gonna blow your mind, but this workflow can easily be improved in Blender 3.5+ by creating IMMs and VDMs. Once you create the object, break it into core components with intersect booleans. If you want an IMM, just save it as an asset. If you want a VDM, you apply the insertion point onto your main model, vertically down towards the bottom of a cube, with the top plane having UV coordinates taking up the entire cube. Then delete the other sides of the cube besides the top plane. Select the faces of the top plane (not your merged object), then in sculpt mode create a face set from the edit-mode selection. Create a shape key (important for later), immediately mask that face set, and then go to the full-mesh face-set manipulation brush, underneath the full-mesh geometry and full-mesh cloth-sim brushes (I forget what they're called whenever I'm not staring at them, but they affect the entire mesh that isn't masked or hidden), and select the second-to-last mode (it should say relax or something). You should get a flat plane again but with your geometry in the middle. Go to the top of the mesh with Numpad 7 and hit Numpad . to center over the mesh, then hit U and project from bounds. Now you have the UVs. Delete that shape key (or set it to 0 if you want variations), go to your VDM baker, type in a name for that part, and click generate at 512. You now have a draggable brush for sculpting. At this point I recommend creating a VDM-displacement geometry nodes network to test it on the baking plane for any minor errors, and to have a more easily editable brush. Finally, rebake the cleaned-up version and you'll have a reusable, completely non-destructive VDM brush of your AI gen.

  • @kenalpha3
    @kenalpha3 11 months ago

    video demo?

  • @spooderderg4077
    @spooderderg4077 11 months ago

    @@kenalpha3 I'm guessing you mean the vdm part. In which case give me an idea of something furry related and I'll make one sure.

  • @kenalpha3
    @kenalpha3 11 months ago

    @@spooderderg4077 Does VDM = [watch?v=lx6p8sJd-QY]? I looked up the term and found that vid. And by furry do you mean hairy, or do you mean a fantasy character? I'm making a game for UE4.27; it won't handle fur very well (5 might). But I'm also making character sci-fi armor and want to add accent buttons or creases, or accent body parts like spikes or thicker armor skin. If you can do an example with a lizard-type alien or his armor, that would be helpful, thanks. [I already have a lot of alien characters + textures, but I'm thinking I could increase my collection by adding accent meshes to the body or armor, to create a new race or show how they'd look when they "level up."] Also, could you explain how to reuse an existing base texture, apply a small part of it to a new mesh (the smaller accent/overlay mesh), and reset the UVs to match the new mesh shape (while the original body mesh UVs and texture do not change)? Thanks. I subbed.

  • @spooderderg4077
    @spooderderg4077 11 months ago

    @@kenalpha3 look at my icon, that's what a furry is, an anthropomorphic animal.

  • @kenalpha3
    @kenalpha3 11 months ago

    @@spooderderg4077 Yes, I looked at your channel. But you mean low poly furry, without individual hair strands, correct? Anyways, yours looks like a dragonoid, so that works as an example to show me. Ty.

  • @PuppetMasterdaath144
    @PuppetMasterdaath144 a year ago

    I just want to point out, to you people who are dissing this, that for a person like me who had zero clue about any of this, being enticed into trying out something I can get actual creative results from is so exciting. I mean, I read a few of the technical comments and they're so far past my head; it really shows how this is a specialized viewpoint that isn't generalized to more common people in terms of general knowledge. OK, weird rant over.

  • @Karasus3D
    @Karasus3D a year ago

    My question is: can I get a diffuse map and turn this into a printable model? I'd love to at least use it to make a base model and modify from there, for things like masks and such.

  • @AlexandreRangel
    @AlexandreRangel 9 months ago

    Very nice techniques, thank you!!

  • @joseterran
    @joseterran a year ago

    Nice one! Got to try this! Thanks for sharing.

  • @dragonmares59110
    @dragonmares59110 11 months ago

    Whoa, I think I will try to see if I can remake this tomorrow. Would be a nice way to spend some time, thanks!

  • @WhatNRdidnext
    @WhatNRdidnext 11 months ago

    I love this! Plus (because of the horror-related prompts that I've been using), I'll probably give myself nightmares 😅 Thank you for sharing ❤

  • @ofulgor
    @ofulgor 7 months ago

    Wow... Just wow. Nice trick.

  • @EladBarness
    @EladBarness 11 months ago

    Amazing! Thanks for sharing

  • @lightning4201
    @lightning4201 11 months ago

    Great video. Do you have a Cinema 4d tutorial on this?

  • @tony92506
    @tony92506 a year ago

    very cool concept

  • @s.foudehi1419
    @s.foudehi1419 a year ago

    thanks for the video, very insightful

  • @petarh.6998
    @petarh.6998 10 months ago

    How would one do this with a front-facing character? Or does this technique demand the profile view of them?

  • @ArturSofin
    @ArturSofin a year ago

    Hi, very cool! Please tell me, was the facial animation done in Unreal with Live Link, or is it all Blender?

  • @digital-guts
    @digital-guts a year ago

    It's MetaHuman Animator inside Unreal, yes, but the recording itself is done through Live Link; it's just interpreted with higher quality.

  • @johanverm90
    @johanverm90 8 months ago

    Amazing, awesome... Thanks for sharing.

  • @retroeshop1681
    @retroeshop1681 a year ago

    Honestly I'm quite impressed, a really cool way to do a lot of kitbashing, which is really necessary nowadays. I guess now I have to learn how to make AI images, hehe. Cheers from Mexico!

  • @shaunbrown3806
    @shaunbrown3806 3 months ago

    @DIGITAL GUTS, I really like this workflow. I also wanted to know: can I use this same strategy for humanoid AI characters? You are the only person I have seen use this workflow. Thanks in advance :) also subbed

  • @digital-guts
    @digital-guts 3 months ago

    Yeah, since this video I've tried a couple of things and it's kinda OK for characters in certain cases, especially for weird aliens )

  • @filipemecenas
    @filipemecenas a year ago

    Thanks !!!! I will try it !!!

  • @williammccormick3787
    @williammccormick3787 2 months ago

    Great tutorial thank you

  • @hwj8640
    @hwj8640 a year ago

    thanks for sharing! it is inspiring

  • @jonathanbernardi4306
    @jonathanbernardi4306 8 months ago

    Very interesting nonetheless. Thanks for your time, man; this technique sure has its uses.

  • @user-zx5ts4uk8j
    @user-zx5ts4uk8j 5 months ago

    When I do this with a depth map in 16:9 format, the displacement modifier applies the map as a small 1:1 repeat pattern... why? Note: I made my plane a 16:9 ratio and applied scale before adding the displacement modifier.
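
For reference, a minimal bpy sketch of the basic displacement setup this thread is about (names and paths are illustrative assumptions; setting the modifier's texture coordinates to UV is what normally keeps an image from tiling across the plane):

    import bpy

    # 16:9 plane with the scale applied, matching the depth map's aspect ratio
    bpy.ops.mesh.primitive_plane_add(size=2)
    plane = bpy.context.active_object
    plane.scale.x = 16 / 9
    bpy.ops.object.transform_apply(scale=True)

    # dense geometry for the displacement to act on
    subsurf = plane.modifiers.new("Subdivision", "SUBSURF")
    subsurf.subdivision_type = "SIMPLE"
    subsurf.levels = subsurf.render_levels = 6

    tex = bpy.data.textures.new("DepthMap", "IMAGE")
    tex.image = bpy.data.images.load("/tmp/depth_map.png")  # hypothetical path

    disp = plane.modifiers.new("Displace", "DISPLACE")
    disp.texture = tex
    disp.texture_coords = "UV"   # map the image once across the plane's UVs, not repeated
    disp.strength = 0.5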

  • @miinyoo
    @miinyoo 4 months ago

    That actually is a pretty decent little quick workflow. Pop that out to something like zbrush and go to town refining. Is it really good enough on its own? For previz and posing with a quick rig, absolutely. That's pretty fast tbh and simple.

  • @tahajafar206
    @tahajafar206 9 months ago

    What about using the CharTurner LoRA to create the front, back, left and right sides, so merging all 4 sides gives a better and smoother object instead of correcting sides manually? It's just an idea, but I haven't seen anyone try it, so if you can, could you please give it a try and share a tutorial? 3:36

  • @digital-guts
    @digital-guts 9 months ago

    I'll give it a try and take a look. I've done some tests with AI characters and it looks OK-ish and weird; maybe I'll share the results later.

  • @Al-Musalmiin
    @Al-Musalmiin 7 months ago

    I wouldn't mind learning Blender and learning how to do this. Can you do a tutorial on how to run ZoeDepth locally?

  • @issaminkah
    @issaminkah a year ago

    Thanks for sharing!

  • @ElHongoVerde
    @ElHongoVerde a year ago

    It's not bad at all (it's impressive actually) and you gave me very good ideas. Although I suppose this wouldn't be very applicable to non-symmetrical images, right?

  • @AArtfat
    @AArtfat 11 months ago

    Simple and cool

  • @Pragma020
    @Pragma020 10 months ago

    This is neat. Cool technique.

  • @SoulStoneSeeker
    @SoulStoneSeeker a year ago

    This has many possibilities...

  • @sameh.blender
    @sameh.blender a year ago

    Amazing, thank you.

  • @Philmad
    @Philmad a year ago

    Excellent

  • @Savigo.
    @Savigo. a year ago

    Wait, can you now just plug a normal map into the "Normal" socket without the extra "Normal Map" node? I have to check it.

  • @Savigo.
    @Savigo. a year ago

    OK, you can, but it looks quite bad compared to a proper connection with the "Normal Map" node. It seems like the intensity is way lower without it, and you cannot control it without the Normal Map node.

  • @XirlioTLLXHR
    @XirlioTLLXHR 9 months ago

    This is good enough for some indie game companies honestly. Might really help some folks out there get some assets done faster.

  • @wrillywonka1320
    @wrillywonka1320 10 months ago

    This is awesome! BUT you lost me at mirroring the image and then bisecting to get rid of the extra geometry. I am still a noob at Blender and don't know how you did that. Was it a shortcut key you used, at 3:35 in the video?

  • @digital-guts
    @digital-guts 10 months ago

    Oh, it's a sped-up part and there are quite a few hotkeys here, but it's very basic usage of sculpt mode in Blender. There are many videos on YouTube where this stuff is explained; try this one: kzread.info/dash/bejne/daGdkq2odtfJXZc.htmlsi=mKSHWz8SCE8evM6M

  • @wrillywonka1320
    @wrillywonka1320 10 months ago

    @@digital-guts Thank you! I have been using this, and most images work, but some images invert when I mirror them. Have you ever had this problem?

  • @timedriverable
    @timedriverable 8 months ago

    Sorry if this is a newbie question... but is this DreamStudio some component of SDXL?

  • @zephilde
    @zephilde a year ago

    You "accomodate" yourself by sculpting something random from a not-so-accurate mesh, the mirrored thing do not look any like the originalimage thing... Do you have a workflow to get a real mesh from something representative? (like a character or landscape)

  • @mercartax
    @mercartax a year ago

    The whole process is sub-any-standard. Kitbashing some weird crap together - that's all this will work for. Maybe in 2 or 3 years we will see something more generally usable. Good luck getting any meaningful model data from AI models these days. Hard enough to prompt them into what you actually want let alone transfer this into a working 3d environment.

  • @psykology9299
    @psykology9299 11 months ago

    This works so much better than ZoeDepth's image to 3D.

  • @shiora4213
    @shiora4213 9 months ago

    thanks man

  • @kingcrimson_2112
    @kingcrimson_2112 11 months ago

    Please ignore the salty comments. This is a game changer, especially for mobile platforms. jaw dropping result and pragmatic pipeline.

  • @salvadormarley
    @salvadormarley a year ago

    How did you get the animated face? That seems completely different to what you showed us in this demo.

  • @EGP-Hub
    @EGP-Hub a year ago

    Looks like the metahuman facial animator possibly

  • @digital-guts
    @digital-guts a year ago

    Yes it is, and it's not the point of this video. There are tons of content about MetaHuman on YouTube.

  • @salvadormarley
    @salvadormarley a year ago

    @@digital-guts I've heard of metahuman but never tried it. I'll look into it. Thank you.

  • @ghklfghjfghjcvbnc
    @ghklfghjfghjcvbnc 9 months ago

    u are a lying clickbait @@digital-guts

  • @n0b0dy_know
    @n0b0dy_know a year ago

    "What? What A Mazing!

  • @Rodgerbig
    @Rodgerbig 11 months ago

    Amazing, bro! But... how did you get the 2nd (B&W) image? My SD generates only one image.

  • @digital-guts
    @digital-guts 11 months ago

    This is the ControlNet depth model; you can get it here: github.com/Mikubill/sd-webui-controlnet, or use ZoeDepth online via the link in the description.

  • @Rodgerbig
    @Rodgerbig 11 months ago

    @@digital-guts Thanks for the answer! Yes, I have it installed, but it gives only one result, and it is different from what is needed.

  • @Rodgerbig
    @Rodgerbig 11 months ago

    @@digital-guts ZoeDepth actually works, but I'm trying to do this in SD.

  • @DanDanceMotion
    @DanDanceMotion a year ago

    wow!

  • @kingsleyadu9289
    @kingsleyadu9289 a year ago

    You are crazy 😆😆😆😆🥰🤩😍❤❤❤❤❤❤, I love you bro, keep it up!

  • @CharpuART
    @CharpuART 9 months ago

    Now you are literally working for the machine, for free! :)

  • @Murderface666
    @Murderface666 11 months ago

    very interesting

  • @jvdome
    @jvdome 11 months ago

    I could do well until the part where I had to sculpt the stuff out; I couldn't come to a solution as easily as you did.

  • @touyaakira1866
    @touyaakira1866 11 months ago

    Please cover this topic more, with more examples. Thank you.

  • @timd9430
    @timd9430 a year ago

    Such a jimmy-rig way to do things. Do any of these AI generators just offer an option to export or download the 3D mesh file with maps, lighting, etc. (.3ds, .max, .dxf, .fbx, .obj, .stl, and so on)? It seems the AI generators are initially just composing highly elaborate 3D scenes and rendering flat image results anyway. Same for vector-based files: can they just export native vector files such as .svg, .ai, .eps, .cdr, or vector .pdf? AI is a career killer.

  • @sashamartinsen
    @sashamartinsen a year ago

    So, is it a jimmy-rig way of doing things or a career killer? You decide. Neither, I think; of course it depends on your goals. Meshes like this can work only as quick kitbash parts for concepts, not as a final polished product anyway. Did kitbashing kill 3D careers, or photobashing kill matte painting in concept art? I don't think so.

  • @zephilde
    @zephilde a year ago

    No, AI like Stable Diffusion doesn't work in a 3D space or with vectors; it works on random pixels (noise) and applies denoising steps learnt from a huge image set with descriptions. Your prompt text guides the denoising steps so they can "hallucinate" something from noise... The fact that a final image looks like a 3D render, vectors, photography or painting (etc.) is just pure coincidence! :)
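
To make that concrete, a minimal text-to-image sketch with the diffusers library (not the A1111 web UI used in the video; the model id and prompt are illustrative):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # the pipeline starts from random latent noise and denoises it step by step,
    # guided by the prompt; the output is only a flat 2D image -- no mesh, no
    # vectors, no scene data to export
    image = pipe("rusty sci-fi totem, symmetrical, studio lighting").images[0]
    image.save("generation.png")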

  • @timd9430
    @timd9430 a year ago

    @@zephilde Any video links on that exact process?

  • @siete-g4971
    @siete-g4971 a year ago

    nice method

  • @wrillywonka1320
    @wrillywonka1320 10 months ago

    Would you say this works better with black and white images?

  • @digital-guts
    @digital-guts 10 months ago

    I don't think so. Today I'm recording a new video with this technique; it could be useful.

  • @wrillywonka1320
    @wrillywonka1320 10 months ago

    @@digital-guts Awesome! I've gotten it to work with about 60% of my images, but some get destroyed when I bisect the Z axis on mirroring. All the info you've got is useful, though; this technique is mind-blowing and a major day saver. One last question: you kind of sped over the part where you clean up the mesh after mirroring. Me being a noob at 3D software, I could really use some clarification on how you cleaned it up. You made it look so simple.

  • @entumonitor
    @entumonitor a year ago

    In the image node, the color space is Non-Color for normal maps!

  • @Savigo.
    @Savigo. a year ago

    "Linear" is pretty much the same, although he missed "normal map" node between.

  • @JamesClarkToxic
    @JamesClarkToxic 8 months ago

    The more people who experiment with new technology, the more cool ideas we come up with, and better uses we figure out for the technology. This particular workflow may not be usable for anything meaningful, but maybe it inspires someone to try something different, and that person inspires someone else, and so-on until really cool uses come out of this.

  • @digital-guts
    @digital-guts 8 months ago

    You get the point of this video. I'm just messing around with this tech and trying things. Actually, I'm now making a full game using only this and similar approaches to meshes. It won't be anything of industry-standard quality, of course; it's just a proof-of-concept experiment. Having a lot of fun.

  • @JamesClarkToxic
    @JamesClarkToxic 8 months ago

    @@digital-guts I've been experimenting with ways to create a character in Stable Diffusion and turn them into a 3D model for months. The first few attempts were awful, but without those, I wouldn't have the current workflow (which is getting really close). I also know that the technology is getting better every week, so all my experimenting should help me figure out how to do things once things get to that point.

  • @aleemmohammed7794
    @aleemmohammed7794 a year ago

    Can you make a character model with this?

  • @1airdrummer
    @1airdrummer a year ago

    no.

  • @incomebuilder4032
    @incomebuilder4032 a year ago

    Fooking genius you are..

  • @pastuh
    @pastuh a year ago

    Nice, but I will wait for 360 3D AI models :X

  • @sebastianosewolf2367
    @sebastianosewolf2367 6 months ago

    Yeah, and what software or website did you use in the first minutes?

  • @digital-guts
    @digital-guts 6 months ago

    This is the Automatic1111 web UI for Stable Diffusion.

  • @nathanl2966
    @nathanl2966 a year ago

    wow.

  • @zergidrom4572
    @zergidrom4572 a year ago

    sheeesh

  • @googlechel
    @googlechel 4 months ago

    Yo, how did you get Stable Diffusion and ControlNet running locally? Is that what it is?

  • @digital-guts
    @digital-guts 4 months ago

    kzread.info/dash/bejne/lmWgstiCYLfFl9I.html check this link

  • @googlechel
    @googlechel 4 months ago

    @@digital-guts thanks

  • @abdullahimuhammed6550
    @abdullahimuhammed6550 a year ago

    What about the eye animation and smile, though? That's the most important part, tbh.

  • @giovannimontagnana6262
    @giovannimontagnana6262 a year ago

    Most definitely the face mesh was a separate ready model. The assets were made with AI

  • @bigfatcat8me
    @bigfatcat8me 6 months ago

    where is your hoodie from?

  • @digital-guts
    @digital-guts 6 months ago

    I don't remember; I think something like H&M or Bershka, nothing special.

  • @Ollacigi
    @Ollacigi a year ago

    It still needs time, but it's a cool start.

  • @ATLJB86
    @ATLJB86 11 months ago

    I haven’t seen a single person use AI to texture a model using individual UV maps, and I can’t understand why. AI can dramatically speed up the texturing process, but I have not seen anybody take an AI-generated image and then turn it into a 3D model, and I can’t understand why…

  • @realkut6954
    @realkut6954 5 months ago

    Hello, thanks for the video. Please, please give me a tutorial on tracking 3D armor from Stable Diffusion onto a video of a man. Please, it's urgent. Sorry for the bad English, I am French.

  • @digital-guts
    @digital-guts 4 months ago

    kzread.info/dash/bejne/lH-DwdCPd67NfKQ.htmlsi=j7BOrRMU_8AeXrUe

  • @realkut6954
    @realkut6954 4 months ago

    @@digital-guts Thanks, my friend. Sorry, I want a 2D video, not a 3D man. Sorry.

  • @realkut6954
    @realkut6954 4 months ago

    Like the Wonder Studio software.

  • @realkut6954
    @realkut6954 4 months ago

    kzread.info/dash/bejne/mKaKrqODms6ulpM.htmlsi=2uVjuK6HK8WwhQWX

  • @motionislive5621
    @motionislive5621 10 months ago

    The mirror tool has become a life changer, LOL.

  • @matthewpublikum3114
    @matthewpublikum3114 9 months ago

    Great for kit bashing!

  • @CBikeLondon
    @CBikeLondon a year ago

    I think the opposite direction (mesh to AI) is more interesting as it can then be used for AI training

  • @thesagerinnegan5898
    @thesagerinnegan5898 4 months ago

    what about meshes to ai images?

  • @digital-guts
    @digital-guts 4 months ago

    kzread.info/dash/bejne/eYeLlc9wadfZobg.html

  • @armandadvar6462
    @armandadvar6462 2 months ago

    I was waiting to see animation like your intro video😢

  • @joedanger4541
    @joedanger4541 a year ago

    the R.U.R. is coming

  • @yklandares
    @yklandares a year ago

    It's the end of the world, VFX.

  • @sburgos9621
    @sburgos9621 a year ago

    I've seen this technique before, but at this stage it looks very limited. In terms of the mesh, without any textures on it, it didn't look representative of the object. I feel like adding the textures fools the eye into thinking it is more detailed than the mesh actually is.

  • @sashamartinsen
    @sashamartinsen a year ago

    And this is the main point of this approach: to trick the eye.

  • @sburgos9621
    @sburgos9621 a year ago

    @@sashamartinsen I do 3d printing so this technique wouldn't work for my application.

  • @younesaitdabachi7968
    @younesaitdabachi7968 a year ago

    Goddammit, you look like that guy who helped Walter with cooking drugs in Breaking Bad. By the way, I like your tutorial, keep it up!

  • @stevesloan6775
    @stevesloan6775 4 months ago

    Changing to double sided vertices, is the way to remove and double the texture map data.😂

  • @zacandroll
    @zacandroll 10 months ago

    I'm baffled.

  • @stevesloan6775
    @stevesloan6775 4 months ago

    Goodness me… how and why are slow eye movements in the female eye ball-brain so deeply, directly connected to the male brain.???? 🧠 😂❤

  • @_casg
    @_casg 9 months ago

    Here’s a peppery comment

  • @iloveallvideos
    @iloveallvideos a year ago

    Holy shit!

  • @sandkang827
    @sandkang827 a year ago

    Goodbye, my future career in 3D modeling :')

  • @fredb74
    @fredb74 9 months ago

    Don't give up! AI is just another powerful tool you'll have to learn, like Photoshop back in the day.

  • @NoEnd911
    @NoEnd911 a year ago

    Jesse Pinkman 🎉😂

  • @isi_guro
    @isi_guro 6 months ago

    Ramen addiction

  • @raphaelprotti5536
    @raphaelprotti5536 9 months ago

    The next logical step is to remesh/retopo this and reproject the texture.

  • @bossnoob8363
    @bossnoob8363 a year ago

    Credo
