AI images to meshes / Stable Diffusion & Blender Tutorial
Entertainment
After my quick proof-of-concept experiment with this technique, I've had many requests to explain how I made these meshes and what Stable Diffusion actually does in this case. Here is your guide.
Zoe Depth model
huggingface.co/spaces/shariqf...
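For anyone who'd rather run depth estimation locally than use the Hugging Face space, here is a minimal sketch using the ZoeDepth repo via torch.hub (assumes torch, timm, and Pillow are installed; file names are placeholders):

```python
import numpy as np
import torch
from PIL import Image

# Load ZoeDepth from the official repo via torch.hub (downloads weights on first run)
model = torch.hub.load("isl-org/ZoeDepth", "ZoeD_N", pretrained=True)
model = model.to("cuda" if torch.cuda.is_available() else "cpu").eval()

img = Image.open("generated.png").convert("RGB")  # placeholder: your SD image
depth = model.infer_pil(img)                      # numpy array of metric depth

# Normalize and invert so nearer pixels are brighter (what displacement expects),
# then save as 16-bit PNG to keep the gradation smooth
d = (depth - depth.min()) / (depth.max() - depth.min())
Image.fromarray(((1.0 - d) * 65535).astype(np.uint16)).save("depth.png")
```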
ShaderMap
shadermap.com/home/
Background music: (me jamming on my Elektron)
• downtempo beats - elek...
Follow me on:
IG: / sashamartinsen
TW: / sashamartinsen
Comments: 261
What is the purpose of this, though? It can be used for distant objects, *maybe*, but there are easier ways to make those. For general-purpose assets, you really can't meet the quality standard of modern games with this tech, not to mention this is just the base color. Throw away aesthetic consistency between models too: AI either makes nearly identical images if you ask, or it just cannot understand what you are trying to do at all. Plus, if you want symmetry in your game, there are additional steps to fix this, which I think are more cumbersome and boring than actually making the asset. I didn't even mention cinema, since these kinds of assets are pretty low quality even for games. (Just to add: it is still ethically questionable to use these in a profit-driven project.) Oh, one more thing: games usually require some procedurality in the textures of some of their assets, and this can't provide that flexibility either. The only beneficial thing is that depth map trick, I guess. That is kinda cool.
@digital-guts
A year ago
Yeah, of course, nobody here says that these models can be used in AAA games or cinema as-is, and I'm not a brainless "AI bro" claiming they will be. I've worked in gamedev for a while myself. But there are fields for 3D graphics other than games and cinema, for example abstract psychedelic video art or music videos, heavily stylized indie games, maybe some surreal party poster, etc., know what I'm sayin'. As for cinema and gamedev, I think it can be used in some cases as kitbash parts for concept art, and with proper knowledge of how to build prompts and use custom-made LoRAs and stuff like that, you can get really consistent results from AI generations.
@luminousdragon
A year ago
This is a proof of concept; it's brand new. The process can 100% be sped up and streamlined, with ways to get better results as AI art improves. The description of this very video says it's a proof of concept, and people were asking for details. This type of video is for professionals who want to explore different techniques, build off each other's work, and stay informed about new techniques, and it's just interesting. For instance, I make digital art, and one thing I have been experimenting with is making a 3D environment and characters as close as possible in style to some AI art I've already generated, without taking very much time or effort, then rendering it as a video, then overlaying AI on top of it for a more cohesive look. This process could be very useful for that, for multiple reasons. First, if I'm using AI art to make the 3D models, they are going to mesh very well when I overlay the second set of AI art over the 3D render. Second, because the AI art is going to be overlaid on the 3D model, I don't really care if the 3D models don't look perfect; it's kind of irrelevant. Lastly, look at the game BattleBit, which has gone viral recently. Or look at Amogus. Or Minecraft. Not every game is aiming for amazing photorealism.
@AB-wf8ek
A year ago
I think it's valid to criticize the quality of the output, but I think you miss the point if you think this is trying to be a replacement for the current traditional methods. It's just an experimental process playing around with what's currently available. It's called a creative process for a reason. A true artist enjoys figuring out new and unique ways of combining tools and processes, and this video is just an exercise in that. If you can't see the purpose of it, then just remove "creative" from anything you do.
@TaylorColpitts
A year ago
Concept art - really great for populating giant scenes with lots of gack and set dressing.
@Mrkrillis
A year ago
Thank you for asking this question, as I wondered myself what this could be used for.
I had this theory at the start of this year, when I noticed you could generate good displacement maps using ControlNets; good to see someone putting it into practice.
@AtrusDesign
11 months ago
It's an old idea. I think many of us discover it sooner or later.
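For reference, the depth-conditioned generation this thread describes can be sketched with the diffusers library. This is a hedged example, not the video's exact setup; the model IDs, prompt, and file names are just common defaults and placeholders:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# SD 1.5 conditioned on a depth map via ControlNet, so the generated image
# and the displacement map stay aligned with each other
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

depth = load_image("depth.png")  # placeholder: your depth/displacement map
image = pipe(
    "ancient alien artifact, studio lighting",  # placeholder prompt
    image=depth,
    num_inference_steps=30,
).images[0]
image.save("textured_concept.png")
```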
For BG objects like murals on walls and ornaments, this can give a nice 2.5D feel. Maybe it can also speed up design, finding form from a first idea.
Usually I don't find such good music in these tutorials, cheers mate.
My jaw literally dropped. This is incredible! Thank you!
This can be a great process to use for a rough starter mesh that you can then refine
@pygmalion8952
A year ago
I wrote a long-ass comment on this kind of stuff here, but for this too, again: you can produce these maps from plain renders by artists, and either way it's ethically questionable if you don't change it and add your own twist.
@dmingod999
A year ago
@@pygmalion8952 Sure, you can do this from other artists' work, but it's restrictive because you can only use what already exists. If you're generating the images with AI, you have much more freedom: you can sketch your idea or use a whole bunch of other available tools to control the AI generation, then make the depth map and do this bit.
Very cool, thanks for sharing the workflow!
Very interesting for concept generation - thanks for sharing! I'm assuming you can also upscale the various images in SD to maintain more "close-up" detail...? Maybe with appropriate LoRAs...
ok, I'm speechless... just wow!
I think using Mirror is a nice idea, but it may not be applicable to all objects. How about using SD & LoRA to create 2x2 or 3x3 grids of images of the same object from multiple different POVs, then connecting them together instead of using a mirror?
You can't just plug the color data of a normal map texture into the Normal slot of the Principled BSDF; you need to put a "Normal Map" node in between.
@albertobalsalm7080
A year ago
You can, actually.
@games528
A year ago
@@albertobalsalm7080 Yes, but that will lead to horrible results. You can also plug it straight into Roughness if you want.
@sashamartinsen
A year ago
Thanks, I missed that part while recording.
@AB-wf8ek
A year ago
I don't use Blender, but my guess for why this is: the color needs to be interpreted as linear for data purposes, versus sRGB or whatever color profile is usually slapped on top of an image rendered for your screen.
@EGP-Hub
A year ago
Also, it needs to be set to Non-Color.
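To make the fix in this thread concrete, here is a minimal bpy sketch of the wiring being described: Image Texture set to Non-Color, through a Normal Map node, into the Principled BSDF. The material name and file path are placeholders:

```python
import bpy

mat = bpy.data.materials["MyMaterial"]             # placeholder material name
nodes, links = mat.node_tree.nodes, mat.node_tree.links

tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("//normal.png")   # placeholder path
tex.image.colorspace_settings.name = "Non-Color"   # normal maps are data, not color

nmap = nodes.new("ShaderNodeNormalMap")            # the node the thread says is required
bsdf = next(n for n in nodes if n.type == "BSDF_PRINCIPLED")

links.new(tex.outputs["Color"], nmap.inputs["Color"])
links.new(nmap.outputs["Normal"], bsdf.inputs["Normal"])
```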
Fantastic content and video, mate. Very useful, subbed! Keep it up!
This is really good. I like this workflow, thanks for sharing.
Absolutely amazing! Thank you for the tutorial! :D
Thank you for this amazing tutorial!
I'm gonna blow your mind, but this workflow can easily be improved in Blender 3.5+ by creating IMMs and VDMs. Once you create the object, break it into core components with intersect Booleans. If you want an IMM, just save it as an asset. If you want a VDM, apply the insertion point of the part onto your main model, pointing vertically down, at the bottom of a cube whose top plane has UV coordinates taking up the entire cube. Then delete the other sides of the cube besides the top plane.

Select the faces of the top plane (not your merged object), then in Sculpt Mode create a face set from the Edit Mode selection. Create a shape key (important for later), immediately mask that face set, and then go to the full-mesh face-set manipulation brush, underneath the full-mesh geometry and full-mesh cloth-sim brushes (I forget what they're called when I'm not staring at them, but they affect the entire mesh that isn't masked or hidden), and select the second-to-last mode (it should say Relax or something). You should get a flat plane again, but with your geometry in the middle. Go to the top view with Numpad 7 and hit Numpad . (period) to center over the mesh, then hit U and Project From View (Bounds). Now you have the UVs.

Now delete that shape key (or set it to 0 if you want variations), go to your VDM baker, type in a name for that part, and click Generate at 512. You now have a draggable brush for sculpting. At this point I recommend creating a VDM-displacement geometry nodes setup to test it on the baking plane for any minor errors; it also gives you a more easily editable brush. Finally, rebake the cleaned-up version and you'll have a reusable, completely nondestructive VDM brush of your AI gen.
@kenalpha3
11 months ago
Video demo?
@spooderderg4077
11 months ago
@@kenalpha3 I'm guessing you mean the VDM part. In which case, give me an idea of something furry-related and I'll make one, sure.
@kenalpha3
11 months ago
@@spooderderg4077 Does VDMs = [watch?v=lx6p8sJd-QY]? I looked up the term and found that vid. And by furry do you mean hairy, or do you mean a fantasy character? I'm making a game for UE4.27; it won't handle fur very well (5 might). But I'm also making sci-fi character armor and want to add accent buttons or creases, or accent body parts like spikes or thicker armor skin. If you can do an example with a lizard-type alien or his armor, that would be helpful, thanks. [I already have a lot of alien characters + textures, but I'm thinking I could increase my collection by adding accent meshes to the body or armor, to create a new race or show how they'd look when they "level up."] Also, could you explain how to reuse an existing base texture, apply a small part of it to a new mesh (the smaller accent/overlay mesh), and reset the UVs to match the new mesh shape (while the original body mesh UVs and texture do not change)? Thanks. I subbed.
@spooderderg4077
11 months ago
@@kenalpha3 Look at my icon; that's what a furry is: an anthropomorphic animal.
@kenalpha3
11 months ago
@@spooderderg4077 Yes, I looked at your channel. But you mean low-poly furry, without individual hair strands, correct? Anyway, yours looks like a dragonoid, so that works as an example to show me. Ty.
I just want to point out, to you people that are dissing this: for a person like me, who had zero clue about any of this, being enticed into trying something I can get actual creative results from is so exciting. I mean, I read a few of the technical comments and they're so far past my head; it really shows how this is a specialized viewpoint that isn't general knowledge for more common people. OK, weird-ass rant over.
My question is: can I get a diffuse map and turn this into a printable model? I'd love to at least use it to make a base model and modify from there, for masks and such.
Very nice techniques, thank you!!
Nice one! Got to try this! Thanks for sharing.
Woah, I think I'll try to see if I can remake this tomorrow; it would be a nice way to spend some time, thanks!
I love this! Plus (because of the horror-related prompts that I've been using), I'll probably give myself nightmares 😅 Thank you for sharing ❤
Wow... Just wow. Nice trick.
Amazing! Thanks for sharing
Great video. Do you have a Cinema 4D tutorial on this?
very cool concept
thanks for the video, very insightful
How would one do this with a front-facing character? Or does this technique require a profile view?
Hi, very cool! Please tell me, was the facial animation done in Unreal using Live Link, or is it all Blender?
@digital-guts
A year ago
That's MetaHuman Animator inside Unreal already, yes, but the recording itself is done through Live Link; it's just interpreted with better quality.
Amazing ,awesome ..Thanks for sharing
Honestly, I'm quite impressed; a really cool way to do a lot of kitbashing, really necessary nowadays. I guess now I have to learn how to make AI images, hehe. Cheers from Mexico!
@DIGITAL GUTS, I really like this workflow. I also wanted to know: can I use this same strategy for humanoid AI characters? You are the only person I have seen use this workflow. Thanks in advance :) Also subbed.
@digital-guts
3 months ago
Yeah, since this video I've tried a couple of things, and it's kinda OK for characters in certain cases, especially for weird aliens )
Thanks !!!! I will try it !!!
Great tutorial thank you
thanks for sharing! it is inspiring
Very interesting nonetheless; thanks for your time, man. This technique sure has its uses.
When I do this with a depth map in 16:9 format, the displacement modifier applies the map as a small 1:1 repeating pattern. Why? Note: I made my plane a 16:9 ratio and applied scale before adding the displacement modifier.
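One likely cause, hedged: the Displace modifier defaults to Local texture coordinates, which ignores the plane's aspect ratio and can tile the map. A minimal bpy sketch of the usual fix, assuming the plane is already UV-unwrapped (the file name is a placeholder):

```python
import bpy

obj = bpy.context.object  # the 16:9 plane, already UV-unwrapped

tex = bpy.data.textures.new("DepthTex", type='IMAGE')
tex.image = bpy.data.images.load("//depth.png")  # placeholder path
tex.extension = 'EXTEND'   # stops the image repeating past its borders

mod = obj.modifiers.new("Displace", 'DISPLACE')
mod.texture = tex
mod.texture_coords = 'UV'  # default 'LOCAL' can tile/stretch the map
mod.strength = 0.3
```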
That actually is a pretty decent little quick workflow. Pop that out to something like ZBrush and go to town refining. Is it really good enough on its own? For previz and posing with a quick rig, absolutely. That's pretty fast, tbh, and simple.
What about using the CharTurner LoRA to create the front, back, left, and right sides, so merging all four sides gives a better and smoother object instead of correcting sides manually? It's just an idea, but I haven't seen anyone try it, so if you can, could you please give it a try and share a tutorial? 3:36
@digital-guts
9 months ago
I'll give it a try and take a look. I've done some tests with AI characters and it looks OK-ish and weird; maybe I'll share the results later.
I wouldn't mind learning Blender and learning how to do this. Can you do a tutorial on how to run ZoeDepth locally?
Thanks for sharing!
It's not bad at all (it's impressive, actually) and you gave me very good ideas. Though I suppose this wouldn't be very applicable to non-symmetrical images, right?
Simple and cool
This is neat. Cool technique.
This has many possibilities...
Amazing, thank u
Excellent
Wait, can you now just plug a normal map into the "Normal" socket without the extra "Normal Map" node? I have to check.
@Savigo.
A year ago
OK, you can, but it looks quite bad compared to a proper connection with the "Normal Map" node. The intensity seems way lower without it, and you can't control it without the node.
This is good enough for some indie game companies honestly. Might really help some folks out there get some assets done faster.
This is awesome! BUT you lost me at mirroring the image and then bisecting to get rid of the extra geometry. I'm still a noob at Blender and don't know how you did that. Was it a shortcut key you used? At 3:35 in the video.
@digital-guts
10 months ago
Oh, it's a sped-up part and there are quite a few hotkeys here, but it's very basic usage of Sculpt Mode in Blender. There are many videos on YouTube where this stuff is explained; try this one: kzread.info/dash/bejne/daGdkq2odtfJXZc.htmlsi=mKSHWz8SCE8evM6M
@wrillywonka1320
10 months ago
@@digital-guts Thank you! I've been using this and most images work, but some images invert when I mirror them. Have you ever had this problem?
Sorry if this is a newbie question... but is this DreamStudio some component of SDXL?
You "accomodate" yourself by sculpting something random from a not-so-accurate mesh, the mirrored thing do not look any like the originalimage thing... Do you have a workflow to get a real mesh from something representative? (like a character or landscape)
@mercartax
A year ago
The whole process is sub-any-standard. Kitbashing some weird crap together - that's all this will work for. Maybe in 2 or 3 years we will see something more generally usable. Good luck getting any meaningful model data from AI models these days. It's hard enough to prompt them into what you actually want, let alone transfer that into a working 3D environment.
This works so much better than ZoeDepth's image-to-3D.
thanks man
Please ignore the salty comments. This is a game changer, especially for mobile platforms. Jaw-dropping result and a pragmatic pipeline.
How did you get the animated face? That seems completely different from what you showed us in this demo.
@EGP-Hub
A year ago
Looks like the MetaHuman facial animator, possibly.
@digital-guts
A year ago
Yes, it is, and it's not the point of this video. There's tons of content about MetaHuman on YouTube.
@salvadormarley
A year ago
@@digital-guts I've heard of MetaHuman but never tried it. I'll look into it. Thank you.
@ghklfghjfghjcvbnc
9 months ago
u are a lying clickbait @@digital-guts
"What? What A Mazing!
Amazing, bro! But... how did you get the 2nd (B&W) image? My SD generates only one image.
@digital-guts
11 months ago
This is the ControlNet depth model; you can get it here: github.com/Mikubill/sd-webui-controlnet, or use ZoeDepth online from the link in the description.
@Rodgerbig
11 months ago
@@digital-guts Thanks for the answer! Yes, I have it installed, but it gives only one result, and it is different from what is needed.
@Rodgerbig
11 months ago
@@digital-guts ZoeDepth actually works, but I'm trying to do this in SD.
wow!
u are crazy 😆😆😆😆🥰🤩😍❤❤❤❤❤❤, I love u bro, keep it up
Now you are literally working for the machine, for free! :)
very interesting
I did well until the part where I had to sculpt the stuff out; I couldn't reach a solution as easily as you did.
Please do more on this topic, with more examples. Thank you.
Such a jimmy-rigged way to do things. Do any of these AI generators just offer an option to export or download a 3D mesh file with maps, lighting, etc. (.3ds, .max, .dxf, .fbx, .obj, .stl, and so on)? It seems like the AI generators are just composing highly elaborate 3D scenes internally and rendering flat image results anyway? Same question for vector-based files: can they export native vector files such as .svg, .ai, .eps, .cdr, or vector .pdf? AI is a career killer.
@sashamartinsen
A year ago
So, is it a jimmy-rigged way of doing things or a career killer? You decide. Neither, I think; of course, it depends on your goals. Meshes like this can only work as quick kitbash parts for concepts, not as a final polished product anyway. Did kitbashing kill 3D careers, or photobashing kill matte painting in concept art? I don't think so.
@zephilde
A year ago
No, AI like Stable Diffusion doesn't work in a 3D space or with vectors; it works on random pixels (noise) and applies denoising steps learned from a huge image set with descriptions. Your prompt text guides the denoising steps so it can "hallucinate" something from the noise... The fact that a final image looks like a 3D render or vectors or photography or painting (etc.) is just pure coincidence! :)
@timd9430
A year ago
@@zephilde Any video links on that exact process?
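Not a video, but the loop zephilde describes can be sketched as a runnable toy: start from noise and repeatedly subtract predicted noise. The "denoiser" here is a stand-in function, nothing like a real trained U-Net, and the "prompt" is just a target array:

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.zeros((8, 8))  # toy stand-in for "what the prompt wants"

def toy_denoiser(x, t):
    # A real model is a trained U-Net predicting the noise in x at step t,
    # guided by a prompt embedding; this toy just pulls x toward the target.
    return x - target

x = rng.normal(size=target.shape)  # start from pure random pixels
steps = 30
for t in reversed(range(steps)):
    eps = toy_denoiser(x, t)       # "predicted noise" at this step
    x = x - eps / steps            # remove a little of it each step
# after 30 steps, x has drifted from pure noise toward the target image
```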
nice method
Would you say this works better with black and white images?
@digital-guts
10 months ago
I don't think so. Today I'm recording a new video with this technique; it could be useful.
@wrillywonka1320
10 months ago
@@digital-guts Awesome! Because I've gotten it to work with about 60% of my images, but some get destroyed when I bisect the Z axis on mirroring. All the info you've got is useful; this technique is mind-blowing and a major day saver. One last question: you kind of sped over the part where you clean up the mesh after mirroring. Being a noob at 3D software, I could really use some clarification on how you cleaned it up. You made it look so simple.
On the image node feeding the normal, the color space should be Non-Color for normal maps!
@Savigo.
A year ago
"Linear" is pretty much the same, although he missed the "Normal Map" node in between.
The more people who experiment with new technology, the more cool ideas we come up with, and the better uses we figure out for the technology. This particular workflow may not be usable for anything meaningful, but maybe it inspires someone to try something different, and that person inspires someone else, and so on until really cool uses come out of this.
@digital-guts
8 months ago
You get the point of this video. I'm just messing around with this tech and trying things. I'm actually now making a full game using only this and similar approaches to meshes. It won't be anything of industry-standard quality, of course, just a proof-of-concept experiment. Having a lot of fun.
@JamesClarkToxic
8 months ago
@@digital-guts I've been experimenting for months with ways to create a character in Stable Diffusion and turn it into a 3D model. The first few attempts were awful, but without those I wouldn't have my current workflow (which is getting really close). I also know the technology is getting better every week, so all my experimenting should help me figure out how to do things once it gets to that point.
Can you make a character model with this?
@1airdrummer
A year ago
no.
Fooking genius, you are..
Nice, but I will wait for 360 3D AI models :X
Yeah, and what software or website did you use in the first minutes?
@digital-guts
6 months ago
This is the Automatic1111 web UI for Stable Diffusion.
wow.
sheeesh
Yo, how did you get Stable Diffusion and ControlNet running locally? Is that what this is?
@digital-guts
4 months ago
Check this link: kzread.info/dash/bejne/lmWgstiCYLfFl9I.html
@googlechel
4 months ago
@@digital-guts thanks
What about the eye animation and smile, though? That's the most important part, tbh.
@giovannimontagnana6262
A year ago
Most definitely the face mesh was a separate ready-made model. The assets were made with AI.
where is your hoodie from?
@digital-guts
6 months ago
I don't remember; I think something like H&M or Bershka, nothing special.
It still needs time, but it's a cool start.
I haven't seen a single person use AI to texture a model using its individual UV maps, and I can't understand why. AI can dramatically speed up the texturing process, but I have not seen anybody take an AI-generated image and turn it into a 3D model, and I can't understand why…
Hello, thanks for the video. Please, please give me a tutorial on tracking 3D armor with Stable Diffusion onto a video of a man. Please, it's urgent. Sorry for the bad English, I am French.
@digital-guts
4 months ago
kzread.info/dash/bejne/lH-DwdCPd67NfKQ.htmlsi=j7BOrRMU_8AeXrUe
@realkut6954
4 months ago
@@digital-guts Thanks, my friend. Sorry, I meant 2D video, not a 3D man. Sorry.
@realkut6954
4 months ago
Yes, like the Wonder Studio software.
@realkut6954
4 months ago
kzread.info/dash/bejne/mKaKrqODms6ulpM.htmlsi=2uVjuK6HK8WwhQWX
The Mirror tool became a life changer, LOL.
Great for kitbashing!
I think the opposite direction (mesh to AI) is more interesting, as it can then be used for AI training.
What about meshes to AI images?
@digital-guts
4 months ago
kzread.info/dash/bejne/eYeLlc9wadfZobg.html
I was waiting to see animation like in your intro video 😢
the R.U.R. is coming
It's the end of the world, VFX.
I've seen this technique before, but at this stage it looks very limited. The mesh without any textures on it doesn't look representative of the object. I feel like adding the textures fools the eye into thinking it's more detailed than the mesh actually is.
@sashamartinsen
A year ago
And this is the main point of this approach: to trick the eye.
@sburgos9621
A year ago
@@sashamartinsen I do 3D printing, so this technique wouldn't work for my application.
God damn it, you look like that guy who helped Mr. Walter with cooking drugs in Breaking Bad. By the way, I like your tutorial, keep it up!
Changing to double-sided vertices is the way to remove and double the texture map data. 😂
I'm baffled.
Goodness me… how and why are slow eye movements in the female eyeball-brain so deeply and directly connected to the male brain? 🧠 😂❤
Here’s a peppery comment
HOLY SHIT
Goodbye, my future career in 3D modeling :')
@fredb74
9 months ago
Don't give up! AI is just another powerful tool you'll have to learn, like Photoshop back in the day.
Jesse Pinkman 🎉😂
Ramen addiction
The next logical step is to remesh/retopo this and reproject the texture.
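Agreed; a minimal bpy sketch of that remesh step, with texture reprojection left as a bake afterwards. The voxel size is an arbitrary placeholder:

```python
import bpy

obj = bpy.context.object  # the displaced, mirrored mesh

mod = obj.modifiers.new("Remesh", 'REMESH')
mod.mode = 'VOXEL'
mod.voxel_size = 0.02     # placeholder; smaller = denser, cleaner topology
bpy.ops.object.modifier_apply(modifier=mod.name)
# next: UV-unwrap the result and bake the old diffuse/normal maps onto it
```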
Credo