Create CONSISTENT Characters - Midjourney Character Design
Science & Technology
⚠️⚠️ WARNING - PLEASE READ ⚠️⚠️
In this video, I say that you can "train" Midjourney using the Emoji-Buttons. Well, it turns out you can't, no matter how much we imagine that it does. I was wrong. It happens. Life goes on. However, giving your character a name in the prompt DOES help keep your character more consistent.
Furthermore, I've released Part 7 of this series, which illustrates a much better way to achieve consistent characters and it works in both v4 and v5.
I hope this will "appease" some of the FB/Reddit folks out there who are quick to point out errors in everyone else's content but apparently aren't willing to put themselves out there and produce their own content that teaches people stuff. 🤷‍♂️
PS: You wouldn't believe how many people have written me, telling me that this video helped them progress with their projects. Despite its flaws.
⚠️⚠️ WARNING - PLEASE READ ⚠️⚠️
-
📙 Midjourney COURSE mastersofmidjourney.com Beginners to Advanced
🚀 FREE Midjourney Cheat Sheet tokenizedhq.com/freebies/mj-c...
🔗 FREE Promptalot Extension promptalot.com/extension
🔗 FREE Supporting Material tokenizedhq.com/freebies/vide...
Folder: 2023-01-23 - Consistent Characters
🌐 Check out the full blog post:
tokenizedhq.com/midjourney-co...
🤝 Credits 🏆
Shoutout to Kris from @AllAboutAI for inspiring the initial idea on how to do this.
-
📰 AI Newsletter 👉 Your Inbox tokenizedhq.com/newsletters/ai
🐤 Follow me on Twitter / chrisheidorn
💼 Follow me on LinkedIn / christianheidorn
📸 Follow me on Instagram / christianheidorn
💬 Join the Tokenized AI Discord tokenizedhq.com/invite/discord/
🎬 Character Design Series 🎬
Playlist: • Create CONSISTENT Char...
Part 1: Create Consistent Characters • Create CONSISTENT Char...
Part 2: Place Characters in Action Scenes • PLACE a Character in A...
Part 3: Apply a Consistent Character Style • Apply a CONSISTENT Cha...
Part 4: Create Multi-Character Scenes • Create MULTIPLE Charac...
Part 5: Create Facial Expression for Characters • Create FACIAL EXPRESSI...
Part 6: Infuse Your Characters with Themes • INFUSE Themes into Cha...
📺 Recommended Related Videos & Playlists 📺
Monetizing AI: • Make Money with Midjou...
Activate GOD MODE in MJ: • Become a GOD in Midjou...
The SECRET Language of MJ: • The Surprising TRUTH a...
How to Use --SEED in MJ: • How to Use the --SEED ...
How to Use NEGATIVE Prompts in MJ: • How to Use Negative Pr...
-
So you want to create a consistent character in Midjourney?
Do you feel like you've tried almost everything but for some reason, you keep getting characters that look very different?
Creating a consistent character in Midjourney isn't easy and it requires a bit of an unconventional approach to prompting.
In this tutorial, I'll show you how to "train" Midjourney toward the exact look that you want.
CAUTION: Bear in mind, that you can't actually "train" the model. All we're doing is going through variations until we find a look we like and then we use the seed from that particular image.
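In practice, the core loop looks roughly like this (the descriptive wording and the seed number are illustrative placeholders, not the exact prompts used on screen):

```text
/imagine prompt: Carla Caruso, beautiful Italian woman, dark wavy hair, green eyes, portrait --v 4

(react to the image you like with the envelope emoji ✉️ so the bot DMs you the seed, then reuse it:)

/imagine prompt: Carla Caruso, beautiful Italian woman, dark wavy hair, green eyes, portrait, wearing a red leather jacket --v 4 --seed 1234
```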
This video is Part 1 of a longer series about Midjourney character design.
-
⏰ Timestamps ⏰
00:00 Meet Peggy Palermo!
00:44 The Challenges of Creating Consistent Characters
01:30 Naming the Character
03:03 Defining the Character's Features
03:50 Picking an Initial Look
05:12 "Training" the AI Model
07:57 Giving the Character a Role
08:51 "The Matrix" starring Carla Caruso
11:53 "Tomb Raider" starring Carla Caruso
12:26 Carla Caruso: The Marvel Comic Hero
14:28 Placing the Character in Action Scenes
-
#midjourney #midjourneyai #midjourneyart #aiart
Comments: 590
⚠⚠ PLEASE READ - PUBLIC SERVICE ANNOUNCEMENT ⚠⚠

It has come to my attention that quite a few people in various FB groups and subreddits have got their panties in a twist because this video of mine contains some incorrect assumptions. So they blame me for spreading what they call "misinformation". To be clear, when I recorded the video, I was honestly under the impression that this process works, as were dozens of other content creators who have made their own videos about the exact same process. Well, turns out I was wrong. Sh*t happens. We all make mistakes.

THAT BEING SAID, while the "training" aspect of this video is clearly WRONG, it has been confirmed by the MJ team that giving the character a name within your prompt DOES help. Oh, and by the way: Part 5 of this series also shows another way to create consistent characters, and THAT one definitely works.
@TheFuss85
A year ago
Trying to use the envelope (✉️) reaction, but the Midjourney bot doesn't DM me the results.
@TokenizedAI
@@TheFuss85 Wrong envelope icon? You also need to make sure that your settings allow DMs to be sent to you from that server.
@TheFuss85
@@TokenizedAI that was the issue! Thank you so much Christian 👊
@Albopepper
You can use YouTube's editor tool to snip that part out of your video.
@TokenizedAI
@@Albopepper Yeah, the problem is that it's not just "one small part". Cutting an entire section that also contains correct and relevant information would do more harm than good. Especially since the fact that you can't really train MJ doesn't do any real damage, to be honest. It just annoys the keyboard heroes in the subreddits who spend their entire day pointing out other people's honest mistakes.
Thanks Christian, that was so insightful! Thanks for explaining more of what's happening, how, and why. Part 2 is gonna be very useful as well, so thank you so much and I can't wait :)
@TokenizedAI
Thanks for the kind feedback! :)
You, sir, are a star YT teacher! Thanks for all the solid material you've created and shared with us all. Cheers!
@TokenizedAI
Wow, thanks!
The duplicated prompt you showed near the end of the video is very interesting. I never thought of using near-identical prompts and assigning weights to fine-tune your image. Fantastic tip!
@TokenizedAI
Yeah, I'll be covering that bit about multiprompts in Part 2. The example I used isn't even the usual way I do it. In this case I was a bit lazy. Normally I change the sentence a lot more and I also usually do it with 3-4 segments rather than just 2.
I was trying to do this legwork myself but I'm glad you already did. Subbed. Legend
@TokenizedAI
Thanks! 🙂
Thanks for your videos and comments. Really helpful to see what you've been trying, and learn about things to try myself. I have found it quite difficult to find the other parts in this series - I normally see people include the part number in the video title, so it's really simple. Or sometimes they are in a playlist
@TokenizedAI
I literally link to the entire playlist and also list all parts in the description 😉 It's not linked at the end of that video because I didn't originally plan for a full series. But Part 2 is linked at the end.
Great quality . Well explained. Keep going with the awesome work!!!
@TokenizedAI
Thanks! Will do 🙂
A correction to the "training the AI" section: I got confirmation from the team that the data is all from 2019, so unless you've got a time machine, that's not possible 😉 Using the image reference and --seed does give it something to go on, and will be your best bet for consistent characters. Hope this helps clarify things for those who are creating characters over and over.
@TokenizedAI
Thanks for sharing this. Sounds reasonable. That's why I've been putting "training" in quotation marks.
@DiegoSilvaInstrutor
Thanks ^^
@markrichards5630
for V4 that's not quite true - new training data has been added.
@michaelsbeverly
@@markrichards5630 Yeah, I seem to see what he describes in this video actually happening... I don't know, but it seems to work... I got a super consistent character and then changed him slowly into a vampire-like monster, and the character stayed recognizable while changing into a monster. It's a spooky effect.
@evelynannrose
@Paleoism I tried this too, and it does seem to "train" MJ in a way: it starts to get familiar and creates consistent facial features, hair style, and clothes.
This is seriously awesome. I've followed your instructions patiently and I'm getting great results. Thanks. Can't wait for Part 2.
@TokenizedAI
Awesome!
@BTMOM1933
@@TokenizedAI Christian, I guess you may have already answered this question, but could you point me towards an answer? Sometimes, out of the blue, blemishes appear on a character's skin (the cheeks, but sometimes the forehead too) and it's difficult to get rid of them (in fact, they keep getting uglier in upscales etc.). Do you have a way to cure that? I've tried --no blemishes or "perfect skin". Not sure it works well. Thanks
@TokenizedAI
@@BTMOM1933 I know exactly what you mean. I honestly haven't tried to get rid of it yet, so I don't have a quick fix for you I'm afraid.
@BTMOM1933
@@TokenizedAI Apart from my interest in the technique you've described here and in creating consistent characters, I'm intrigued by what that technique seems to tell us about the way Midjourney's AI creates female characters. As the iterations go on, they seem to become younger. So a « beautiful woman » will in a few iterations become an adolescent. It is as if the collective male gaze had decided that 16 to 23 is the only viable option. Giving a specific age to a character helps up to a point. « A 30 year old woman » or « a 40 year old woman » helps, to a point, because in a few iterations the 30-year-old woman will slowly creep back down to 25 ;-) , while the 40-year-old woman will sprout older versions of herself… I'm not sure I've seen the same with male characters at all. They seem to « stay » the same age.
@TokenizedAI
Interesting observation. If it's of any particular interest, 96% of the viewers of this channel are male and the biggest age group is 35-45 years old 😉 That should give you a good idea of what your typical MJ user looks like.
Love this technique for consistent character creation and this is by far one of the best guides I have seen on it and included points that I haven't seen mentioned before. I am especially excited to see Christian's solution for the background in part 2 and it will hopefully answer any lingering questions I have from Part 1.
@TokenizedAI
Yep, I'm pretty sure Part 2 is going to be VERY enlightening for many people. I actually need to do an entirely separate video on multiprompting after that because it's not exclusively applicable to character design.
@kaizen_5091
@@TokenizedAI Yes please. I can only imagine how difficult it is to cram what you need into your content without it being ridiculously long, so it makes sense to approach it like a series.
@ecommasters3847
This is incredibly well explained and demonstrated. Thank you for your videos!
Really interesting video. I really like the process. It contributes to my better understanding of how MJ works. I'll wait impatiently for the next episode.
@TokenizedAI
Glad it was helpful!
Great! That's what I was waiting for. Keep going 👍
@TokenizedAI
Definitely am 🙂
Crazy! I searched "how create consistent character in midjorney" and found your video, which was uploaded only 10 hours ago! Great content, subscribed 👍👍
@TokenizedAI
Glad I could help!
Very informative and well structured tutorial. Thank you for taking the time to share your knowledge with the rest of us plebs 😃
@TokenizedAI
You're very welcome! Always happy to hear that people find this useful :)
Brilliant!… You've made quite a few videos for Midjourney. This one is just one of the best. Thank you.
@TokenizedAI
Just make sure you read the description too. It's important because some of the info in this one is outdated/incorrect.
Wow! This is super Duper Duper Duper Duper Duper Duper Duper helpful. Thank you for this, and for the part two. Love your channel.
@TokenizedAI
Love you guys too 🤗
Thank you very much Christian, your videos are a great source of knowledge about MJ :) I've learned the major tips, logic, and language of this AI thanks to the work you're sharing :) Waiting for the second part of this video. :)
@TokenizedAI
Thank you so much for the kind feedback! 😊
@kazulilie
@@TokenizedAI You're very welcome :) Thanks to you for your amazing work :)
Well, this one is a game changer. I already knew a lot of what you're talking about, but the Easter egg for me was adding a name and realizing it associates it with the look. Good job!
@TokenizedAI
Glad you found it useful! :)
Thanks. Great tips. I've watched many other tuts and created 10k-plus images, and this was still new to me! :-)
@TokenizedAI
Cool, glad it helped :)
Want to say thanks for the tidbit of info... started applying it while watching your vids. Thanks once again!
@TokenizedAI
My pleasure!
Don't mind the haters. Very useful content packaged into a very solid delivery style. Thanks for the value, keep pushing :))
@TokenizedAI
Appreciate it! I don't mind the haters. I actually just troll them back 🤣
Thank you Christian! You inspired me to get serious and get a (Stealth) subscription to Midjourney. I like everything about your approach; it's very creative, just as an artist's or a writer's would be... Thank you!
@TokenizedAI
Pssst.....don't tell the mob that you think my approach is creative. They said I'm clearly not a creative 🤣
Very interesting. It is a challenge to create consistent characters. From my experience, one must generate different types of portraits of a character - headshot, waist-up, full-length - to use them in different settings and taking different actions. And when you place a character taking an action or in a scene, there is "style transfer" or "style creep", so you might need to use prompt weights for different parts of your prompt. I've written a Medium post on "Creating Facial Expressions on a Consistent Character in Midjourney" - YouTube doesn't allow URLs in comments, so you'll have to Google it. Also, in some cases you're not going to be able to use seeds, since they really constrain what the images will look like. Creating consistent characters taking lots of different actions and in different portraits is really tough. It can be done to a certain extent, but it is definitely one of the limitations of Midjourney, IMHO.
@TokenizedAI
I think it kind of depends on how much of the character you actually want to control in minute detail. For wider shots and certain styles, I'd argue that some details aren't that important. Placing the character into action scenes (which I'll cover in Part 2) requires extensive use of multiprompts, to the point where it's going to be really difficult for me to display it on screen.
@davidmichaelcomfort
You can actually re-create the exact look of the character by taking the seed from the original set of 4 images and the exact same prompt that you used. Then, by using this prompt and the seed ID, you can reliably get the exact character. You can then append or prepend additional prompts to this base prompt/seed combination. I just experimented with "In the style of The Matrix" using a weighting of 2 and it gave me good results. You can tune the images by changing the weighting and stylize values. I just added the results to the end of my Medium post.
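Sketched as prompts, that approach looks something like this (the seed value and wording are illustrative, not taken from the comment):

```text
/imagine prompt: Carla Caruso, beautiful Italian woman, dark wavy hair, green eyes, portrait --v 4 --seed 1234

/imagine prompt: Carla Caruso, beautiful Italian woman, dark wavy hair, green eyes, portrait:: in the style of The Matrix::2 --v 4 --seed 1234
```

The `::` separators split the prompt into weighted segments; a segment without a number defaults to weight 1.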
@TokenizedAI
Well, that's what the seed is for, after all. Though this behavior is exclusive to v4.
@davidmichaelcomfort
@@TokenizedAI I am working on Medium posts on creating different types of portraits of characters, having characters interact with each other, and another post on lighting. I've written posts on "A Guide to Using Different Shot Types in Midjourney, including Close-ups, Medium Shots, and Long Shots" and "Using Color and Color Theory in Midjourney". It is really a "blue sky" time for AI art and Midjourney. Everyone is learning and struggling with how to do things. Sometimes things work great, but most of the time things don't really work out the way you want them to. So persistence and experimentation are key. Thanks again for your videos.
@TokenizedAI
Indeed, despite what most opponents of AI art say, it's far from "easy" if you want to do anything remotely meaningful.
Part 2 please!!! Great explanation. Thank you!
@TokenizedAI
Coming soon! I'm working on it.
Great information. It introduced me to the power of seeds. Also, you seem to be very knowledgeable about hair.
@TokenizedAI
Hahaha...why do you say that? Because I know what the different hairstyles are called? 😂
@bigheadzhang
@@TokenizedAI 😝Keep up the good work!
Just what I needed. It has been a month long struggle for the project I am working on
@TokenizedAI
Well, I hope this helps you. It might not solve all your problems though. Part 2 is going to be much more of a game changer for people. That I'm certain of 😁
Amazing!! This is one of the most hidden secrets no AI creator wants to share 👏🏼👏🏼 Thanks man 👍🏼
@TokenizedAI
I'm not really sure they don't want to share it. Most top-notch creators just don't run a YouTube channel.
@villagranvicent
@@TokenizedAI I know, but I have seen many unanswered questions about exactly that on their Instagram accounts.
@PaladinCiel
I suspect it's more an issue of people not wanting MJ to ban them or patch out techniques they're using to get certain results considered NSFW. The censorship is royally killing me. I'm not even trying to create prawn. My works are risqué, not prawnographic. Which MJ and their devs equate to prawn. I sometimes wonder if they have some secret code they use to bypass the censors for themselves.
@TokenizedAI
This is typical behavior in many other areas that people think are competitive. Point is, someone is going to share those insights at some point, so it's pretty useless to keep them secret.
Midjourney 4 runs on Stable Diffusion, so to understand how to get a consistent look, think about how it's done in Stable Diffusion via custom embeddings. Your prompt is converted to tokens, and certain tokens trigger certain things. But if you really want it to stay 100% consistent all the time, you need to train your own embedding using your own image set. MJ can't (at least not yet) be trained by any means. Training a character takes about 10 minutes, so it is a "bit faster" than trying to force MJ to do it. But if you want to stay in MJ, then spend time on what word triggers what token. They might even trigger a sampler from the prompt behind the scenes, and figuring that out makes life a bit more complicated :) Different samplers tend to read the prompt differently. It's not always easy even in SD, where you can see all the parameters, but MJ hides 95% of the parameters, making it a lot harder sometimes :D But in exchange you get the nice default MJ look, where you can simply type: shsjysitug - and get a nice-looking image. (That can be replicated in SD quite "easily", btw.)
@TokenizedAI
Stable Diffusion was released after Midjourney, so that's not entirely true. What IS true is that Midjourney has been experimenting with Stable Diffusion since its release, because the CreativeML OpenRAIL license makes this possible. Hence why Midjourney also made adjustments to their Terms of Service shortly after SD's release. Midjourney uses natural language processing, while Stable Diffusion does not. This is one of the reasons why relatively "natural" prompts that work very well in Midjourney do not yield similar results in Stable Diffusion. In SD you need to use far more explicit (sometimes weird) keywords to get what you want. PS: I'm curious whether you have an explicit source confirming that MJ runs entirely on SD? That seems somewhat far-fetched in my opinion, but I'll happily be proven wrong.
@digidope
@@TokenizedAI Emad, the CEO of Stability AI, posted on Twitter that the MJ 4 beta was using SD. Also, in the new lawsuit against MJ, the plaintiffs claim that MJ uses the same dataset as SD. NLP is a layer between MJ and SD that converts the prompt into a format the AI image generator understands better. SD is just a base technology anyone can build anything on top of, so by default it doesn't do much. BlueWillow seems to be using some sort of NLP layer, as they trigger a model depending on what's written in the prompt. Replicating the MJ look in SD is fairly simple when using the right model with the right embeddings. The power of MJ comes from the embeddings they've created in house. For those who don't know what embeddings are: if I write the word SNOWBOARD in SD, I get a quite crappy image. MJ will produce a very nice image from the same word. I generated four snowboard images in MJ, trained a new embedding from those images, and named my embedding SNOWBOARD. Now every time my prompt has the word SNOWBOARD, it will generate an image similar to MJ's. Today it makes no difference what tech is behind which AI generator, as new models and embeddings can be created from AI-generated images. As transformers and models are available to anyone, it's not hard to write an NLP layer for SD where you can enter an MJ-style prompt and it will convert it to the format that's best for the AI image generator. One step further is to include GPT-3, so one can just write: "Give me ten images on the topic Sci-fi Gardening." GPT-3 then generates ten ideas, converts them to a format for SD, uses the model, embeddings, and negative keywords best suited for each prompt, and SD outputs ten images. Kinda next-level MJ.
@digidope
Also, just noticed this: it used to be possible to "break" the MJ look so it looks like the default SD model. This prompt worked in late November, but today you'll get an error. It means they've added more "training wheels" to prevent "breaking" the system: Hierarchy of power by Robert MacBryde, pixabay contest winner --no text --no infographics --no poster
@TokenizedAI
@@digidope Yeah, I assumed that they were using SD for their MJ4 beta. But are you sure that they're using SD exclusively?
@digidope
@@TokenizedAI Not sure. Maybe they used SD to create their models and embeddings.
I agree with you; there are many who are quick to criticise the supportive tutorials of others without offering insight of their own.
@TokenizedAI
A day in the life of a YouTuber 😆
Thank you for your thoroughness. 😀
@TokenizedAI
My pleasure!
Regardless of what anyone says, this tutorial was VERY helpful. Definitely one of the better tutorials out there on creating a consistent character. I will be binging the whole YT series and looking up any courses you have! Thanks for doing what you do.
@TokenizedAI
I appreciate that!
Your videos are great! Very clear! Thank you.
@TokenizedAI
Glad it was helpful!
That was an excellent lesson, thank you so much for sharing!
@TokenizedAI
Glad you enjoyed it!
Thank you, really well explained tutorial! Great channel!
@TokenizedAI
Thanks for the really nice feedback :)
cool, that is amazing!
@TokenizedAI
Thank you! Cheers!
Just awesome! Thanks for these tips.
@TokenizedAI
You are so welcome!
This was a really good guide! Thank you!
@TokenizedAI
Glad it was helpful!
Very big thanks! I love this, again high-quality stuff 🙏 Why then did people keep telling me that the rating job was useless? Damn, thanks for all those new tips!
@TokenizedAI
Well, maybe it is useless? I honestly don't know. I tried out this method after hearing about it and found it to work surprisingly well. Might also depend a lot on how people are prompting. Not everyone who says that something is or is not useless necessarily knows what they're talking about 😅
@VahnAeris
@@TokenizedAI I agree, which is why the best knowledge is what we experiment with ourselves anyway. I'll try a bit on my side and report back if I find more certainty.
@TokenizedAI
Yes, please do share your findings!
@VahnAeris
@@TokenizedAI So far you seem a bit ahead of my curve, but I'm working hard on learning more! Thanks for the good share, will do.
Hey, thank you very much for this series, it is immensely useful. Is it necessary to create a separate server for the creation of each character, or is it possible to just use the Midjourney Bot directly?
@TokenizedAI
No, you can do this anywhere where you have access to the Midjourney Bot.
Awesome. I gotta jump into Midjourney.
@TokenizedAI
Indeed, you should.
Very good tutorial !
@TokenizedAI
Thanks a lot!
Beautiful
@TokenizedAI
Thank you! Cheers!
Seems to be my first thing to do on Monday. ☀️
@TokenizedAI
Go for it! :)
Awesome. It’s exactly what I need
@TokenizedAI
Glad to hear that 😊
Thank you so much! It’s so helpful!
@TokenizedAI
You're so welcome!
Great tutorial as usual! A question: do you think it can be useful to give a name (as you did with Carla Caruso) when "training" other things too, such as graphic styles, objects, and so on?
@TokenizedAI
I've never really thought of it, to be honest. Maybe you should try that and share your findings with us :) That makes me wonder. I think I really should start a community Discord for everyone watching. There have been some really great discussions and suggestions in the comments.
Excellent tutorial, it helps me a lot, thanks!
@TokenizedAI
Glad to hear that!
That's great content and information, as always, Christian! I have two questions, if you don't mind: - Does doing this in a separate Discord channel have any influence on the results, or is it just to keep everything tidy and more organized? - Is the name given to the character really that important? I have created a character through various blending attempts, without textual prompts, and I have a seed for it, but since it was made that way, I didn't give it a name. Is there a way to continue doing this with that particular character? (I mainly make creatures instead of regular human characters, by the way 🤣)
@TokenizedAI
1. Doing this in any particular location in Discord really shouldn't make any difference. 2. You don't necessarily have to give it a name, especially if you're not doing close-up portraits. The main reason why I'm using a name is that I want to avoid my prompt getting "contaminated" by people who might be using a similarly descriptive prompt. I have no idea whether it matters, though. You could easily also call it "Wiley the Creature" or "Beast of the West". The point is to introduce something unique (but recurring) into the prompt without it being a word that might influence the image output. For example, "Windy Wiley" might risk adding "wind" into the image. But maybe I'm also overcomplicating things. Who knows 🤷🏻‍♂️
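To illustrate the naming idea with a hypothetical prompt ("Wiley the Creature" is just a unique, recurring token that shouldn't influence the image content):

```text
/imagine prompt: Wiley the Creature, a shaggy forest beast with amber eyes, full-body shot --v 4
```

By contrast, a name like "Windy Wiley" risks pulling "wind" into the image.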
Hey, Christian, thanks for your fantastic work. I walked through each video in this series. It sounds like focusing on Part 5 is the most reliable way to go. How would you recommend learning how to train Midjourney with an image of myself or someone I photograph? Would the principles used with Carla apply similarly or are there some critical in between steps?
@TokenizedAI
Actually, I'd recommend Part 7 since it's the latest one and shows a reliable way of maintaining some consistency. In the end though, all parts provide insight into what can be done to tackle the problem.
this is a gold mine 👀
@TokenizedAI
🙏🏻
Thanks for sharing 😊
Thanks for the video!! Do you think it's good to add an art style to the prompt, or is it better to leave art-style information out, regenerate until you've found the right look, and then follow your method?
@TokenizedAI
You can always change the art style later, at any time. So I don't really see the value in doing that. I think it doesn't really matter. Do whatever works best for you.
Dude! This is fantastic. Thank you so much. I would LOVE it and appreciate it very much if you could explain or do a tutorial about starting with a vector-style character or mascot that you already have looking as you desire, and want Midjourney to retain as much as possible from your original upload as the source image. As a next step, run a series of poses and facial expressions while keeping all else the same. You touched on some of this, but how can I force Midjourney to keep the image identical except for maybe changing sunglasses from dark to light? Something very small. Is this possible? Thank you again.
@TokenizedAI
What you are trying to do is not within the scope of what the technology is currently capable of. At least not with Midjourney.
Naw. I thought those were photographs of a real actress. I’m shocked! 😱😏
@TokenizedAI
😉
Here it is! Man, you're the best!
@TokenizedAI
Well....Part 1 at least. Part 2 should be even better 😁
Amazing ! thank you, mister )
@TokenizedAI
Welcome!
Awesome! Thank you!
@TokenizedAI
Pleasure!
So great! Thank you
@TokenizedAI
You're welcome 😊
Have I mentioned how much i love your work?
@TokenizedAI
I believe so 😊
Very interesting, and I love it. Can you do a series for kids, please?
@TokenizedAI
Can you define "for kids"? Cause most people who watch the channel are aged 30-50.
OKaY... I've watched this video maybe 4 times (parts of it 6 or 7 or 8 times) and FINALLY I get the Back to the Future reference, because you're pausing and going forward in time... hahaha...
@TokenizedAI
LOL 😅 Better late than never.
Awesome video! I was wondering if Midjourney would be able to keep that character and style across multiple prompts though. So let's say you've got your Carla in Marvel comic style, would it be able to generate multiple comic book panels with the same character but in different poses? And do you think it'll reproduce the character if you supply it a link to an image URL if you've found the Carla you want to keep consistently?
@TokenizedAI
That's something I'll cover in Part 2, when we get to prompting "action scenes". Part 1 was just about getting the basics right for a portrait. I wouldn't recommend using an image prompt, simply because it introduces an "uncontrollable" element that will be blended with whatever you put in the prompt. As long as v4 doesn't support image weights, I'd avoid it. Especially since I found that it's not really necessary anyway. Knowing how to craft multiprompts for bigger scenes is far more important.
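A rough sketch of what such a multiprompt might look like (the segments, weights, and seed are illustrative, not the exact prompts from Part 2):

```text
/imagine prompt: Carla Caruso, beautiful Italian woman, dark wavy hair, green eyes::2 leaping across a rooftop at night::1 rain, neon signs, cinematic lighting::1 --v 4 --seed 1234
```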
Amazing free course :) A note: with the new Midjourney 5.2, it doesn't give you sets of new images if I use the seed number from one picture. It keeps giving me the exact same images from the 4-image grid that included that seed image. So there are now no variations when using version 5.2. Which brings up a new issue: if I'm still not happy with that seed and want to change it a bit, then what do I do? I think we now have to use the Subtle Variation option (I think, not really sure). It's fun to play with, but a bit time-consuming when we have to keep adapting with each new version released. What version did you use here, was it Midjourney 5? No matter what, I love your explanations and the depth you go into. Amazing, and thank you!
@TokenizedAI
6 months ago
Yes, this behaviour has been around for about 10 months. This video is old (probably v4). Please check the description for a disclaimer.
MY GOD. You are doing exactly what I wanted to do.
@TokenizedAI
😊
Hey there, I'm still very much a newbie, so I really do appreciate your content and your transparency
@TokenizedAI
Check out my dedicated video on that here on the channel. It explains everything.
very good information, well presented, thank you
@TokenizedAI
Glad it was helpful!
GREAT!!!!
@TokenizedAI
Glad you enjoyed it :)
Great man
@TokenizedAI
Thanks!
Wow!!! Thank you 🙏🏻
@TokenizedAI
You're welcome 🙂
Wait, so is the heart-face emoji the only part of the video we should ignore? Or does doing the whole rinse+repeat for 15x not work either? Just checking!
@TokenizedAI
A year ago
The whole rinse+repeat part. Using a specific seed still helps and so does naming your character. Just save yourself the GPU hours and don't do all those iterations.
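For reference, the seed-plus-name combination looks something like this in practice (the seed value and the character's physical description here are hypothetical):

```
/imagine prompt: Carla Caruso, portrait of a beautiful woman with long brown hair, photorealistic --seed 1234 --v 4
```

Reusing the same name, the same description and the same `--seed` across prompts is what keeps the results more consistent.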
I loved watching your videos. You explain very well, so I subscribed. Do you have any idea how to generate a face from a photo that would look a lot like it? When I send a picture of myself to Midjourney, it has a lot of trouble creating a true facial likeness. Do you have a tip?
@TokenizedAI
A year ago
You need to use at least 3-4 images with different angles and add them all as image prompts. A single image prompt usually isn't enough.
Training an embedding on local Stable Diffusion is also very good for creating a unique, consistent character.
@TokenizedAI
A year ago
Yep, but this was supposed to give MJ users a potential solution. Most MJ users don't use Stable Diffusion.
Hey mate, nice video as always! I was wondering if you know a way to have the generated character positioned either left or right in the image, and maybe zoomed out so it doesn't take up the full space... Thanks mate, and keep these videos coming!
@TokenizedAI
A year ago
You can control that a bit by describing it in the prompt. Alternatively be more specific about what's on the other side of the image.
great insights
@TokenizedAI
A year ago
Thanks 👍🏻
Thanks for all the info you're putting out. I just subscribed. I have a question: is it the same for environments? Does the seed concept also work for environments?
@TokenizedAI
A year ago
I honestly don't know. I haven't experimented with that yet.
@veilofreality
A year ago
@@TokenizedAI So, if I may ask, how do you deal with the problem of having a character act inside a constant environment, like a room where, for example, you want the door, windows and furniture to be consistent? Would that represent an insurmountable problem?
Hey, awesome job. I'd like to know: at the end, what would MJ send you if you just put "imagine Carla Caruso"? Would it deliver her like that? Or would it start from scratch?
@TokenizedAI
A year ago
I actually show 3 sets of images in the video. If you just enter the name, you'll get images that look nothing like her. You need to use it in combination with the description.
Hi Christian, thank you for your work! I just wanted to know: can I create two characters, take their seeds, and combine them in one image, as two chosen characters speaking or acting somehow? I'm trying to make a manga; if it works, that would be nice. The question again: can I use two seeds in one image? Will it preserve the characters' styles?
@TokenizedAI
A year ago
Afraid not, because that's not really how seeds work.
This technique becomes even more effective if you give her an actor's name, if you want action scenes.
@TokenizedAI
A year ago
Yeah, using real actor names is very effective. But I often worry about the face of the real actor bleeding into the image.
Hey! Will you be making a video or videos in relation to how Quality and Style work? That would be awesome if you touch on that information, I may be able to learn something new from it. Either through /settings options or manual options inside the prompt :D
@TokenizedAI
A year ago
Possibly, but I have a very long list of other topics to cover first.
@johnmc9073
A year ago
@@TokenizedAI Good to know :D
I was looking for this exact same thing the other day, as I ran into the same issues: once I got it looking good, I couldn't get back to the original one. So I will try this method now. But since my characters are more cartoon-like, I'll see how this goes.
@TokenizedAI
A year ago
Let me know how it works out for your use case!
If, after upscaling, you train the model on positive features from the prompt, would it also help to upscale one that you didn't like and rate it with the sad face on the far left (kind of like a negative prompt)?
@TokenizedAI
A year ago
Good question! I haven't tried that. I don't know how relevant that would be since we keep switching the seed during the "training" process.
Wow. A realistic person on YouTube who works with AI... How refreshing. You're fantastic.
@TokenizedAI
A year ago
What's unrealistic about the others? 🙂
@flickwtchr
A year ago
And how refreshing that he talks at a normal pace, unlike the ubiquitous fast-talking, rapid-cut youtubepreneur. I no longer have patience for that style and just click away immediately.
I'm from Brazil; I follow tutorials created by colleagues from here, and this content of yours, from someone far away, enriched me with your wisdom. I have one question: will this process also work when I put my own photo at the beginning of the prompt, add characteristics until I find the ideal photo, and then follow the process you taught? I am grateful and I follow you. Hugs
@TokenizedAI
A year ago
Thanks for the feedback. This process will not work with your own photo. It only works with characters created within Midjourney.
Great tutorial greatly appreciated! I decided to try it on a dragon, but it did not seem very responsive. Have you used this technique to make non human characters?
@TokenizedAI
A year ago
From what I hear, it's not very effective with non-human characters. 😔
Is it possible to train your own face for similar scenarios with Midjourney, like with Stable Diffusion Automatic1111, based on 20-30 images?
@TokenizedAI
A year ago
I honestly don't know. Thing is, this isn't the same as training your own model, and it works with existing imagery created entirely in MJ. Doing this with your own face is considerably more difficult, I would assume.
Great explanations and content. I wonder if you have tried generating with multiple characters. It seems like this is quite difficult and any tips you have would be greatly appreciated.
@TokenizedAI
A year ago
I have generated scenes with multiple characters, but not to the point of explicitly describing details for more than just one of them.
Great video tutorial. I'm sorry for my newbie question, but how did you create a channel specifically for this character, with the Midjourney bot in it?
@TokenizedAI
A year ago
Watch this: kzread.info/dash/bejne/pZ2c0pqIg7KuZs4.html
@skdskd4822
A year ago
Thanks
Weird Science! Men creating beautiful women... who would have thought? But the possibilities this opens up are just amazing.
@TokenizedAI
11 months ago
Weird Science was hilarious back in the day 😆
Thank you! I'm trying it right now.
@TokenizedAI
A year ago
Good luck!
Does comma placement act as a stopper for which parts of the prompt are affected by weight values? Example: in "tall guy in trench coat, looks like batman::1", is the weighted part only the "looks like batman" segment, or is everything before the weight modifier given the weight value regardless of comma placement?
@TokenizedAI
A year ago
No. The comma has some influence, very much like it would in regular written language, but it doesn't delimit the segment. So in the case of "tall guy in trench coat, looks like batman::1", the comma influences how the two partial phrases may be interpreted, but the weight applies to the entire segment as a whole.
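To make the segment rule concrete, here is a small Python sketch (not Midjourney's actual parser, just an illustration of the `::` delimiter behaviour) that splits a prompt into weighted segments. Note how the comma stays inside its segment:

```python
import re

def parse_multiprompt(prompt: str):
    """Split a Midjourney-style multiprompt into (text, weight) pairs.

    Segments are delimited by '::'; a number directly after '::' is the
    weight of the text BEFORE it (default 1). Commas are ordinary
    punctuation and never start a new segment.
    """
    # Split on '::', capturing an optional numeric weight after it.
    parts = re.split(r"::(-?\d+(?:\.\d+)?)?\s*", prompt)
    segments = []
    for i in range(0, len(parts) - 1, 2):
        text = parts[i].strip()
        weight = float(parts[i + 1]) if parts[i + 1] else 1.0
        if text:
            segments.append((text, weight))
    tail = parts[-1].strip()  # trailing text with no '::' gets weight 1
    if tail:
        segments.append((tail, 1.0))
    return segments

# The whole phrase before '::1' is ONE segment, comma included:
print(parse_multiprompt("tall guy in trench coat, looks like batman::1"))
# [('tall guy in trench coat, looks like batman', 1.0)]
```

The takeaway matches the reply above: only `::` creates a new weighted segment; the comma just shapes how the phrase inside a segment is read.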
I loved this, your tips are worth gold! I have two questions: first, is the Midjourney trial enough to train it for a consistent character? And secondly, can this work on other systems like Stable Diffusion, or other apps that try to emulate Midjourney but don't reach its quality?
@TokenizedAI
A year ago
1. Technically yes, but then you wouldn't really have much Fast GPU time left. What's keeping you from just getting the Basic subscription? There's little point in trying to do this with a trial account. 2. Stable Diffusion works very differently from MJ, so I don't really know how one would replicate this process, to be honest.
@Henry_Drae
A year ago
@@TokenizedAI Thank you for your response! It's not that I don't want to subscribe because I'm stingy; it's just that I live in Argentina, and here the dollar is really very expensive, almost like in Venezuela, so it ends up being prohibitive. However, I do pay for subscriptions to other services I work with, because I can eventually recover the investment.
@TokenizedAI
A year ago
Well, you can always create more than one trial account, though I don't want to promote that strategy. As long as you have access to the seed from the first account, you can continue with the second.
@Henry_Drae
A year ago
@@TokenizedAI I understand you perfectly; it is not ideal to promote these actions, but as a resource it is valid. Thank you very much for your help and understanding!
Thanks a lot, very helpful!
@TokenizedAI
A year ago
Most welcome!
Do you think it's possible to use a process like this to target a style rather than the facial features? I'm trying to use this with --niji but I keep getting inconsistent results, probably because I don't really know how to put into words the style I'm looking for. This video was amazing, thank you for sharing such clear instructions!
@TokenizedAI
A year ago
I actually hate Niji mode. I can create much better anime with the regular v4. If you have the style of a particular show in mind (Death Note, Naruto or DBZ), it's pretty good at understanding those. Alternatively, use an image prompt with the style. But you'll still need to write a good text prompt to support it.
@basiccomponents
A year ago
@@TokenizedAI I gave --v 4 a serious try and I have to agree with you, the results are more consistent. Thanks for the suggestion! Also, to target the style I want, I'm feeding it a lot of images in the prompt, along with some text. But after 8 images it starts giving me the "Invalid link!" error on image links that it accepted in the prompt just before that. Do you have an idea how to avoid this?
Thanks, I used a much simpler but less effective way (changing things).
@TokenizedAI
A year ago
Nice work! There are so many different ways to do this.
@ManoloMacchetta
A year ago
@@TokenizedAI I'm watching all your videos over the next few days. Since MJ changed the output (following the lawsuit), I've been struggling to recreate some older images, but maybe with your method it can be done. Thanks again!
Thanks for the tutorial! But does this work with Niji? Just so I don't waste my time.
Hi! Do you generate all the variations in relaxed mode and only upscale in fast mode? Is that recommended, or should you be running fast mode as much as possible? Is there a quality difference?
@TokenizedAI
A year ago
I usually do the variations in relaxed mode and the upscales in fast mode. Though sometimes, when not many people are using MJ, upscaling in relaxed mode is relatively quick as well.
@flavionichele7214
A year ago
Thanks for answering! Can I bother you with one more question? I'm looking at ON1 Raw for editing (enlarging etc.). What's your opinion? Which program do you use?
@TokenizedAI
A year ago
@@flavionichele7214 I rarely need to enlarge stuff. But I have a preference for letsenhance.io. You can try them out for free. But I guess it depends on what you want to do.
@flavionichele7214
A year ago
@@TokenizedAI thank you. I'm going to check it out!
Thank you for a very informative video series. However, for some reason I can't get the seed from the single upscaled pic, only from the 4-pic grid, so I can't really follow along. Could you offer any help in this regard?
@TokenizedAI
A year ago
v5 doesn't even have an upscaler yet. That's why the single image doesn't have its own seed. It's just the individual image from the grid. The grid has its own seed.
@alexg5576
A year ago
@@TokenizedAI Thanks very much. So should I use version 4 for this particular exercise?
Hi, I have been following your videos and was trying to read your blog, but I couldn't land on a page for this video. Do you have any geographic restrictions on accessibility?
@TokenizedAI
A year ago
My site gets a lot of DDoS attacks from certain regions, which is why I have restricted some. You can get around it with a VPN, though.
Good video, even if it is based on a false assumption. I read your comment. I also tried it with a name, which worked to some extent. However, I was puzzled by how Midjourney evaluates it. I gave her the first name 'Emefa', which is an African name. A corresponding African woman was also created, but only up to the part with the Marvel Comic style. By the second part, where the name is omitted, Midjourney suddenly created a white woman for me. It is interesting that Midjourney can apparently classify the origin of names. I must also say that Midjourney has only created white women for me to date.
@TokenizedAI
A year ago
Someone who reads description texts! 🤗 Welcome, you're a dying breed 😅 As for skin color, v5 has become better at that, but you also need to be more explicit in your prompts.
I hate to break it to you, but Midjourney doesn't have a memory... rating the images helps them on the backend, but it doesn't influence what you're going to see next, as MJ doesn't remember what it has already shown you.
@TokenizedAI
A year ago
I honestly don't care if it does or not 😅 As long as it gets the job done, whether in reality or only by perception, that's all that matters to me and many others as well. Either way, no harm is being done by offering this as a potential solution. Those who like it will use it; those who don't simply won't. PS: I didn't make this up. As I mention in the video, I saw Kris use this, found it interesting, tried it, and it seemed to work (perception is a powerful thing), so I decided it couldn't hurt. And others seem to agree. People just want results and aren't really dogmatic about how they get there.
@FutureTechPilot
A year ago
haha that's an interesting way of looking at things ...