NEW ControlNet for Stable Diffusion RELEASED! THIS IS MIND BLOWING!
ControlNet can transfer any pose or composition. In this ControlNet tutorial for Stable Diffusion, I'll guide you through installing ControlNet and using it. ControlNet is a neural network structure that controls Stable Diffusion models by adding extra conditions.
Open cmd, type in: pip install opencv-python
Extension: github.com/Mikubill/sd-webui-...
Updated 1.1 models: huggingface.co/lllyasviel/Con...
1.0 Models from video (old): huggingface.co/lllyasviel/Con...
FREE Prompt styles here:
/ sebs-hilis-79649068
How to install Stable diffusion - ULTIMATE guide:
• Stable diffusion tutor...
Chat with me in our community discord: / discord
Support me on Patreon to get access to unique perks!
/ sebastiankamph
The Rise of AI Art: A Creative Revolution
• The Rise of AI Art - A...
7 Secrets to writing with ChatGPT (Don't tell your boss!)
• 7 Secrets in ChatGPT (...
Ultimate Animation guide in Stable diffusion
• Stable diffusion anima...
Dreambooth tutorial for Stable diffusion
• Dreambooth tutorial fo...
5 tricks you're not using
• Top 5 Stable diffusion...
Avoid these 7 mistakes
• Don't make these 7 mis...
How to ChatGPT. ChatGPT explained:
• How to ChatGPT? Chat G...
How to fix live render preview:
• Stable diffusion gui m...
Comments: 505
Download Prompt styles: www.patreon.com/posts/sebs-hilis-79649068
Please support me on Patreon for early access videos. It will also help me keep creating these guides: www.patreon.com/sebastiankamph
This is the reason it's so important that Stable Diffusion is open source.
@losttoothbrush
A year ago
I mean its cool yeah, but doesnt it steal art from Artist that way?
@JorgetePanete
A year ago
it's*
@IIStaffyII
A year ago
@@losttoothbrush Open source just means people can access the source code and therefore add to the tool. Being open source does not directly contribute to the "stealing" issue, although indirectly it can make it more accessible. In the end it's a tool, and I'd argue what you make with it may or may not be transformative work.
@Mimeniia
A year ago
People "artists" cling to their prompts like their lives depend on it. Asking them to share is like squeezing blood from a stone.
@verendale1789
A year ago
@@losttoothbrush Well, yknow, if we are gonna steal art, at least make it public and for everyone instead for one big corpo having the goods, hell yea brotha
man you are incredible! so good and simple, i installed stable diffusion with one of your videos, and now im ready to install control net. i am officially your fan!! thanks for everything!! greetings from corfu greece
As a drawing teacher with 33 years of experience teaching school kids how to draw and paint, one thing is for sure: AI cannot replace human creativity. But I must say this will surely help so many people with poor drawing skills unleash their creative thoughts and imagination, which for a teacher like me gives immense hope of a revolution in the arts! Thanks for such an easy and helpful tutorial on this topic!
@mike_lupriger
A year ago
@@ClanBez Same, I see the possibility of working on multiple projects as a designer. Tedious parts of the process are getting automated. Super excited to keep exploring!! Will get more time for vacation, well I hope! 🤞 PS: In my area, high school art teachers are referred to as drawing teachers and college art teachers are referred to as art teachers. Yeah, it's a little weird.
@rushalias8511
A year ago
honestly refreshing to see some people be so open-minded about this. AI art is often viewed as a job killer, but honestly, look at so many incidents from the past. When digital art first started, I'm sure millions of artists who worked hard with paint, pencils, ink and every other form of real-life art felt threatened by it. Why pay a guy to paint a logo for you when you can use a paint tool? Among so much other stuff. But look what happened: digital art is so common now because it's quicker, cheaper and more flexible. If you made a mistake in a real-life painting, you didn't have an undo button or an eraser. Just like digital art gave so many new individuals a chance to make art, so too does AI. It's all in how you use it.
@pedrovitor5324
A year ago
People feel threatened because a lot of artists still live off commissions (btw, they aren't wrong for doing that; it's "easy money"). When you're a teacher in an art school, it's easy not to feel threatened by AI art. Don't get me wrong, I'm not here to sound mad or anything, I'm just telling the truth. I agree AI art will revolutionize the way we think about creativity, and I also think it won't destroy art (at least not completely); people will still have their community of non-AI art. But it's undeniable that AI art has tons of legal issues, and the AI is pretty bad right now. Only very rarely was I unable to spot whether an artwork was AI or not.
@viquietentakelliebe2561
A year ago
yeah, but it can sure enhance what skill you have yet to acquire or lack the talent for
@lilacbuni
A year ago
@@viquietentakelliebe2561 How can you enhance a skill you're not practising? Drawing a squiggle and then letting AI complete the work based off actual artists' work isn't YOUR imagination or skill, and you still learn nothing. You're not doing any of the work; the AI is.
This is probably the most useful thing for SD. Thanks for showing us!
Thank you, this is really helpful. My "pencil sketch of a ballerina" had three arms and no head, but eventually I generated something usable. It's all absolutely fascinating and it's been fun to learn over the past week or so.
@sebastiankamph
A year ago
Glad it was helpful! And we've all struggled with the correct amount of body parts 😅
Really cool. Things are evolving pretty fast! Thanks
@sebastiankamph
A year ago
Right? This is moving extremely fast. I'm hyped for what's more to come! 🌟
Another good easy to follow tutorial, thanks Seb 👍
Brah, your camera is so nice..... Love to see the commitment to your craft. Keep it up fam
This is gold, and I'm talking about your video, dude. Really well explained, very detailed, thanks a lot!
@sebastiankamph
A year ago
Why thank you for the kind words, that's really thoughtful of you 😊🌟
If you want to use the source image as ControlNet image, you don't have to load the ControlNet image separately (it will automatically pick the source image when no image is selected). Saves some time. 🙂
@Naundob
A year ago
I wonder why img2img is used at all since ControlNet is meant to do the job now instead of the old img2img algorithm, right?
@superresistant8041
A year ago
@@Naundob ControlNet can create from something whereas img2img can create from nothing.
@Naundob
A year ago
@@superresistant8041 Interesting, isn't img2img meant to create a new image from an image instead from nothing?
@daryladhityahenry
A year ago
Please please please finish this argument... I don't understand what you two are talking about hahahaha. And give a conclusion please. Thanksss
@ikcikor3670
A year ago
@@Naundob img2img gives you way less control. Basically you pick a "denoising strength" which at 0.5 tells the AI "this is a 50% done txt2img image, halfway between random noise and the desired result; continue working on it until the end", so you have to look for the golden middle between your image not changing at all and changing way too much. ControlNet can be used both in txt2img and img2img, and it has many powerful features like drawing very accurate poses, keeping lineart intact and turning simple scribbles into actual art (where with normal img2img you'd end up with either an ugly result or one that doesn't resemble the doodle at all).
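The denoising-strength behaviour described above can be sketched numerically. A minimal sketch (assumed numbers, not A1111's actual scheduling code): img2img effectively runs only the final `strength` fraction of the sampler's step schedule, which is why low strength barely changes the image and strength 1.0 behaves almost like txt2img.

```python
# Hedged sketch of how img2img denoising strength maps to sampler steps.
# The exact scheduling in the webui differs; this only illustrates the idea.
def img2img_steps(total_steps: int, denoising_strength: float) -> int:
    """Steps actually run: the final fraction of the schedule, starting
    from the partially-noised input image."""
    if not 0.0 <= denoising_strength <= 1.0:
        raise ValueError("denoising strength must be in [0, 1]")
    return round(total_steps * denoising_strength)

print(img2img_steps(20, 0.0))   # 0  -> image returned unchanged
print(img2img_steps(20, 0.5))   # 10 -> half the schedule, moderate changes
print(img2img_steps(20, 1.0))   # 20 -> full schedule, close to txt2img
```

The "golden middle" the comment describes is just picking a strength whose step count changes enough, but not everything.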
The pose algorithm is EXACTLY what I've been looking for. Thanks for this video! Hopefully I'll manage to install it. Last time I tried to use extensions, Stable Diffusion just refused it and I had to reinstall everything, lol. EDIT: Ok, I installed it, and it works! Sadly, the Open Pose model seems... capricious. It often doesn't give me any skull. The Depth Map works wonderfully though.
This is the second best thing right after Ikea Köttbullar
@joskimengstrom2853
A year ago
🐴 🍖
This looks amazing. My drive is full but I definitely want to play more with this.
@sebastiankamph
A year ago
Throw away the other models and get this, it's fantastic! If you only have space for one, get the canny model.
@justinwhite2725
A year ago
@@sebastiankamph I'm going to get a new HDD after work today, 2 TB or so. My Stable Diffusion folder is 500 GB. I'm also a little nervous: since I have an AMD card, I'm not sure if this will work on the CPU, but I'm working on building a new computer soon.
Set this up yesterday, it's pretty amazing.
That is really awesome :D Gonna try the scribble! I've been getting horribly varied results of deformed humans and I was getting sick of it. Haven't touched SD since. Now this changes! :D
I messed with this already... seems like the first step to something amazing!
Thanks for this well put together tutorial on how to get it going! This is kinda what i was hoping for, turning my b&w line art into ai generated images =D, lotsa scribbles here i come!
great video on controlnet man, thanks a lot !!
@sebastiankamph
10 months ago
Glad you liked it!
Super helpful content man, thank you for making it.
@sebastiankamph
A year ago
My pleasure! Glad you enjoyed it 😊
Since I've been playing with ControlNet I am in a constant state of awe and disbelief😮 Truly game changing. What I really like is the possibility of rendering higher resolution images with that much control. Does anyone have a tip on applying a certain color scheme when using ControlNet? Probably something we have to wait for until the next SD revolution hits. So roughly 5 days.. (me making sounds of pure excitement and slight fatigue at the same time).
@sebastiankamph
A year ago
Hah, I totally feel you. I'm hyped for every new update, and then I look at the list of all the videos I want to do.
@deadlymarmoset2074
A year ago
Try using the base picture in img2img for the colors and tone you want; use a denoising strength of 70+ (it can be of a completely unrelated subject and a different aspect ratio). Then set the text prompt to the subject you want. Additionally, you can set the base ControlNet image to the pose and subject you're looking for. This creates a relatively new image however, not color grading an existing one; still, it is an interesting way to control the general vibe and keep consistent colors between renders.
@sergiogonzalez2611
A year ago
@@sebastiankamph SEBASTIAN, GREAT CHANNEL AND CONTENT! I have a doubt: does this extension work with Stable Diffusion 1.5 models?
@sebastiankamph
A year ago
@@sergiogonzalez2611 Works with all models; the majority of my testing has been on 1.5.
@prettyawesomeperson2188
A year ago
I'm having trouble getting it to work. I'm lost. I tried, for example, scribbling a poorly drawn dog and prompting "A photorealistic dog" (with openpose, canny, depth), and the only time I got a photorealistic dog was when it output a black image; otherwise it just spits out a 3D image of my scribble. Hope that made sense.
Installing controlNet !!!! eeeeeek great tutorial so much fun!
@sebastiankamph
A year ago
Have fun!
I feel silly, but I hadn't tried this yet because I don't have 50 gigabytes of free drive space. It didn't occur to me that I could just install some of the models. This is truly amazing stuff, and I'm looking forward to seeing how animations look with this tool.
If you lower the weight to zero, it will cost you an arm and a leg. Brilliant! Thanks for your video! Definitely highly valuable content.
Controlnet is insane. Thanks for the examples
@sebastiankamph
11 months ago
You bet!
This is absolutely amazing! Thank you so much!! s2
@sebastiankamph
A year ago
Thank you for the kind words 😊
Thanks for explaining this.
I had Pingu vibes at the end, this is quite an amazing update.
Does the preprocessor always have to match the ControlNet model? I was using it with mostly no preprocessor selected and it seems to still work. I thought it was only an optional thing that allows you to create an additional pass.
Thanks for sharing your experience! I'd kind of given up on SD because my computer is way too slow (5-10min to generate a 512x512 Euler a image) but when I came back to the community last week, everyone was creaming their panties over Controlnet and I had no idea why. Thanks to your explanation, now I kind of understand but I guess I'll have to try it myself some day once I can afford a better computer.
@sebastiankamph
A year ago
I feel you! But yeah, ControlNet is WILD!
...WOW! ...the next growth spurt of SD...people say AI makes us stupid but i haven't learned so much since AI crashed into my life...Big FANX for keeping us up to date!
@sebastiankamph
A year ago
So much new information entering our heads 😅 Thanks for the support! 🌟
@conorstewart2214
A year ago
AI does and will make people stupid, in the sense they don’t need to learn anything themselves they just ask an AI to do it for them. You are learning because you are interested in it and it is new, once it becomes more prevalent it will most likely stop being open source and people will just be interested in the results, not how it works.
@coloryvr
A year ago
I agree with many things, and I think that children should not have access to generative AIs until a certain age (16?). However, I have no idea how to remove open-source software from millions of private PCs (?). My biggest concern is that AIs will greatly increase general smartphone addiction. (I don't have one myself and don't want one either.) But: I love "painting" and filming in VR... and thanks to the new AIs, I now have the potential of an entire animation studio at my own disposal... BTW: The absolute nightmare is AIs that develop weapons, toxins, etc., as well as the AI-based mind-reading technology that is already pushing onto the market...
It'll be so much better when somebody actually puts a proper UI on all of this.
got it working, great video.
Great video thank you brother!
Very helpful.. Thank you!
controlnet is king from what I can tell.. so far
amazing video, thanks!
The audio is SUPER👌👍
You have taught me so much, thank you very much!
@sebastiankamph
10 months ago
Glad to hear that!
I'm trying to find a way to have SD include character accessories accurately and consistently. Like having a character holding a Gameboy, or some other specific device. Would love to see a video breaking down how to train SD on specific objects, and then how to include those objects in a scene.
For the Openpose, is there a way to get the coordinates of the joints in the pose?
Sebastian, I get this error when I tried typing pip install opencv-python: 'pip' is not recognized as an internal or external command, operable program or batch file. Any idea what is wrong?
I had difficulty cutting through the jargon. thanks man.
@sebastiankamph
A year ago
Glad I could help 😊
How challenging would it be to add your own training data (not sure if that's the correct term) for this stack to use? Let's say I'm getting too much of a certain style, but I'd like to do something totally different.
I'm convinced the future of AI-generated pictures will be a mix with 3D models. Like, you do a precise pose in 3D and apply Stable Diffusion on it so that it has precise information about depth in the scene, and that will achieve truly photorealistic renders.
@martiddy
A year ago
You can do that already with ControlNet
Thanks Seb ! you are my Obiwan Kenobi of ai !
@sebastiankamph
A year ago
Thank you as always my friend! Your supportive attitude is a national treasure 🌟
Does Stable Diffusion rely on metadata created when it generates the sketch, or on the original image, to generate the reposed image? I'm wondering because I think it would be interesting to upload hand-drawn sketches for the pose sketch and have Stable Diffusion redraw an image based on that.
another awesome video. Thanks!
@sebastiankamph
A year ago
Glad you enjoyed it! 🌟
Thank you for this mate
@sebastiankamph
A year ago
Happy to help! 🌟
How does it handle larger images? I played a bit with version 1.6 and I got a lot of out-of-video-memory exceptions for things like 1000x800 pixels, and I have 12 GB of video RAM.
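As rough intuition for why 1000x800 can exhaust 12 GB while 512x512 fits: SD denoises in a latent space downscaled 8x, and self-attention memory grows roughly quadratically with the latent token count. A back-of-envelope sketch (the 8x downscale is standard for SD; real memory use also depends on the model, attention optimizations, and batch size):

```python
# Rough sketch: larger canvases mean quadratically more attention cost.
def latent_tokens(width: int, height: int, downscale: int = 8) -> int:
    """Number of latent-space positions the UNet attends over."""
    return (width // downscale) * (height // downscale)

base = latent_tokens(512, 512)    # 4096 tokens
big = latent_tokens(1000, 800)    # 12500 tokens
print(round(big / base, 2))       # ~3.05x more tokens than 512x512
print(round((big / base) ** 2, 1))  # attention memory grows ~9.3x
```

So a roughly 3x larger image can need nearly an order of magnitude more attention memory, which is consistent with 12 GB cards hitting out-of-memory errors at these sizes.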
Any clue why the ControlNet models take a while to load for me? I've had the same issue with safetensors models.
When I open the preprocessor tab there is a long list of preprocessors to choose from, including processors I have not (manually) installed. For instance, there are 3 scribble preprocessors: scribble_hed, _pidinet and _xdog. Which one should I choose? It is also hard to invert the sketch from black to white.
amazing thanks
Thanks for the explanation! Just asking: the checkpoint you've got there, is it self-made? Or can I get it from somewhere? If I use v2-1_768-ema-pruned.ckpt, I get this error: "RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x1024 and 768x320)". Any idea?
@Mrig87
A year ago
I get the same... any ideas ?
@sebastiankamph
A year ago
Check Civitai for models. I recommend finetuned 1.5 models.
@Mrig87
A year ago
@@sebastiankamph yup, I figured it was because I used 2.1 models; 1.5 works!
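The RuntimeError in this thread is a version mismatch: SD 1.x UNets (which these ControlNet 1.0 models were trained against) expect 768-dim text embeddings, while SD 2.x text encoders emit 1024-dim ones, so the cross-attention matrix multiply fails. A minimal sketch of the shape rule, using the numbers from the error message:

```python
# Why "mat1 and mat2 shapes cannot be multiplied (154x1024 and 768x320)":
# a matmul (tokens x embed_dim) @ (context_dim x proj_dim) only works when
# embed_dim == context_dim. SD 2.x emits 1024-dim prompt embeddings, while
# SD 1.5-era weights expect 768-dim ones.
def matmul_shape(a, b):
    """Return the result shape of a @ b, or None if incompatible."""
    (m, k1), (k2, n) = a, b
    return (m, n) if k1 == k2 else None

print(matmul_shape((154, 1024), (768, 320)))  # None: SD 2.1 prompt, 1.5 weights
print(matmul_shape((154, 768), (768, 320)))   # (154, 320): SD 1.5 prompt works
```

This is why switching to a 1.5-based checkpoint, as suggested above, makes the error go away.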
Hi, thank you for this, I'm very interested but I can't download your prompt styles, any help ?
Super useful tutorial. I have one question: my Stable Diffusion does not show the Scribble Mode option next to Enable; I have Invert Input Color, RGB to BGR, Low VRAM and Guess Mode. Why is that?
This is really a Game Changing feature!!!
@sebastiankamph
A year ago
Sure is! 🤩💥💫
Hi! Very useful video, I'm intrigued. But how do I do it all in Google Colab, especially the first steps in the command prompt (cmd)? Is it possible?
Thanks, another well-done video. One annoyance: are those two dropdowns really needed? It seems like preprocessor type and model go hand in hand? Or is it a UX decision made by the extension author?
@sebastiankamph
A year ago
Thanks! Honestly, I couldn't say. It's still too early; let's see how it ends up as people explore the tool more.
ooohhh, someone that explains things the way they should be done. ty
@sebastiankamph
A year ago
Thank you, that's very kind 😊🌟
This is nuts! 🤯
@sebastiankamph
A year ago
I couldn't agree more 🌟
Is there a way to clone an object or a person along with the background using Inpaint? What would the prompt be? Ty
Thank you for the tutorial. I am not getting the two images when I generate from ControlNet, just the one.
How did you get the drawing canvas ?
For storyboarding this is insane.
what do i do if my canvas won't show any marks even after inverting the preprocessor?
This is an AMAZINGLY useful tool. Another big step for A.I art.
@sebastiankamph
A year ago
Couldn't agree more! Real game changer 🌟🌟🌟
another great video!
@sebastiankamph
9 months ago
Glad you enjoyed it!
thanks a lot. only works with 1.5 though, but I found that out, so all good :)
@theaiplaybook
A year ago
How did you do it?
Thank U
Issue: When I try to generate after adding the "ballerina dancing in a colorful space nebula, swirling with saturated colors" it generates up to about 94% and then stops without actually generating an image. Any idea what I could be doing wrong?
How can I use the alpha of an image to create a new, different image? Thx
Are there any docs on these models so I have an idea what I'm downloading? Sorry if that's a dumb question, I'm SUPER new to all of this :)
This is truly mind-blowing. Thank you for sharing. What version of Stable Diffusion are you using. 1.5 or 2?
@sebastiankamph
Жыл бұрын
Both! Your Stable Diffusion program is not version dependent. It's the actual model .ckpt or .safetensors file that has a version. 1.5 is great for illustrations, while 2.1 does a great job with photorealistic portraits.
I followed your instructions, but can't seem to load the model in controlnet? do you know why? thanks
Could you help me adding Control Net to the Deforum extension? Thank you
Hej, I am interested in car body design and I need to produce orthogonal views of a vehicle (front, side, rear and top). Do you know if there is any Stable Diffusion extension that allows me to generate these views/images based on a car render I already have? My idea is to use these four views as a blueprint to make the 3D CAD model in Solidworks. Thank you!
ty!!
What stable diffusion checkpoint do you recommend? Does it change anything picking a different one apart from the first image generation? Amazing video! Got everything up and running
@sebastiankamph
A year ago
I've been playing a lot with Dreamshaper and variants of Protogen lately, but there are a lot of good ones out there.
Is it possible to get multiple poses in one image, like two or more figures interacting? Or would one do the figures individually and try to inpaint the others into the same scene?
@sebastiankamph
A year ago
Yeah, I think ControlNet is a great way to have multiple people in the image. Take a photo of them or sketch them. SD is not great at multiple faces though, but you can inpaint that if needed.
@Gh0sty.14
A year ago
It can do multiple people. I saw someone show an example where there were four people in the image.
Pretty awesome! 😍 Now I’d like to know if there’s a way to apply these poses to our own custom characters, instead of just random characters. 🤔 Is it possible to pose two of our original characters together? Also, it’s nice that we can copy the pose, but can we also copy facial expressions into our characters?
@sebastiankamph
A year ago
Yes and yes! 🌟 It might be a little tricky to get exactly what you're looking for though, but it is possible. I would inpaint each character separately to get the original features.
Thank you. If we are running it on Colab Notebook with WebUI enabled, can we paste the models in Google Drive's Models folder instead of the WebUI folder and then just paste the path into the Notebook?
@SilasGrieves
A year ago
Not OP but yes, you can copy/paste the models into your folder on your Google Drive but make sure you paste them to the Models folder in the Extensions parent folder and Stable Diffusion’s base models folder.
@pkay3399
A year ago
@@SilasGrieves Thank you
this is cool
Hello Sebastian Kamph, I really like your channel and the way you talk and make these very comprehensive videos. I learn a lot from you and I thank you very much for that. Please never change the style of your videos (calm, stable, precise). Of course I have a question: I am concerned about the pickle files from lllyasviel. Does pickle mean that they can harm your PC? If yes, what safetensors files would be the alternative? Thank you very much and have a nice day. Best regards
@sebastiankamph
2 months ago
Hey! Thank you! Safetensors are pickle-free and safe, yes. But the official files from lllyasviel are safe too.
how do you choose a specific model to use on your project , where is the model tab
How did you get the photos you're using in the thumbnail? Just wondering as the video you're doing doesn't actually have them
@sebastiankamph
A year ago
Same principle, just using a better prompt and more renders.
@alanbeckco
A year ago
@@sebastiankamph Nice, man. The reason I'm asking is I used to be a dancer and the photo in the middle looks legit like a real photo. I tried following the tutorial, but it doesn't look like it will work for me as I have the 2019-2020 MacBook Pro; I think a couple of months later they upgraded the chip :( Would be unreal to have a play around and just see what you can come up with.
Whenever I try to generate anything with for example img2img, it freezes and the generate button doesn't function anymore. Normal txt2img works just fine.
I am lost at step 1 because of an ID10T error. Where is the extension to download? I found the models OK but don't see a dl for the extension.
My question is: Can you give SD a character in the img to img tab and use ControlNet to pose them, thus having a near identical character from the img to img one, just in a different pose?
@Max_Powers
A year ago
I would like to know the answer to this too
I haven't been able to get the model to deviate like in your thumbnail. How did you manage to lose the skirt in one photo but get a flowing dress in another? Photoshopping the image first?
@sebastiankamph
A year ago
These are not shopped at all, just prompt and settings changed inside SD. You can finetune with both denoising strength and ControlNet weight 🌟
@JeremyFry
A year ago
@@sebastiankamph I thought you might need to tweak the input images. I'm watching your other workflow videos now and it's been very helpful to see how you can tweak things. Thank you for all these videos!
what kind of specs are you using for your computer? and how long does it take to generate a controlnet image?
@sebastiankamph
A year ago
RTX 3080. Depends on settings. 5-20s
Fantastic! thanks for the tutorial! let's play!
@sebastiankamph
A year ago
Have fun! Good to see you again Gerard 💫
After being so disappointed with Pose, I had much better results with Depth. Thanks!
@sebastiankamph
A year ago
Great to hear!
Thanks 👍
@sebastiankamph
A year ago
You're welcome 🌟
Are you running on an old version of A1111? I don't have buttons for the sampling methods. That changed to a drop down long ago. Didn't it? 🤔🤔
@sebastiankamph
A year ago
Yes! I've kept various stable releases and stopped auto-updating since I had it break far too often.
having issues with scribble brush color.. seems as if I'm drawing with a white color brush on a white canvas
Awesome !
@sebastiankamph
A year ago
Thanks Adriaan! Good to hear from you again 😊🌟
What GPU do you have? I noticed you generate stuff way faster than I'm able to. Thanks for the tutorial btw
@sebastiankamph
A year ago
RTX 3080. You're welcome!
🎯 Key Takeaways for quick navigation:
00:00 🎨 Introduction to ControlNet for Stable Diffusion - ControlNet is a revolutionary tool for AI art, allowing you to transform images while preserving composition or pose.
00:26 📥 Downloading ControlNet Models - Download ControlNet models, including Canny, Depth Map, Open Pose, and Scribble, to get started.
01:05 ⚙️ Installing ControlNet for Stable Diffusion - Install ControlNet in Stable Diffusion by adding the GitHub link in the extensions tab and restarting the UI.
02:44 🖼️ Using ControlNet with Image to Image - Demonstration of using ControlNet with img2img, starting with a pencil sketch and generating a transformed image.
03:08 🧩 ControlNet Model Variations - Explains the different ControlNet models (Canny, Depth Map, Open Pose, Scribble) and their unique capabilities.
06:25 🌟 Impact of ControlNet on AI Art - Shows the results of ControlNet transformations and highlights its potential to revolutionize AI art.
08:54 🎨 Using ControlNet Scribble Mode - Demonstrates how to use ControlNet in scribble mode to transform a hand-drawn sketch into an image.
10:47 🧪 Experimentation and Conclusion - Encourages experimentation with ControlNet in different modes and concludes by highlighting its game-changing potential in AI art.
Made with HARPA AI
Hi Seb, I get the following: 'pip' is not recognized as an internal or external command, operable program or batch file. Any idea why? I of course have everything installed and use SD regularly.
@niftydegen
A year ago
btw I just updated my Python to the newest version; I still get that same error.
@sebastiankamph
A year ago
Hey! You need to reinstall Python and make sure you check the box "Add Python to PATH". If that doesn't work, you need to set the path manually from the command line (you'll have to Google the manual steps).
@niftydegen
A year ago
Fixed it. Had to add the path to the Python Scripts folder.
@niftydegen
A year ago
@@sebastiankamph thank you Seb! yes I did it manually. All good now :)
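The fix in this thread can be sanity-checked programmatically. A hedged sketch (the Scripts path below is hypothetical; the real folder depends on where Python was installed): on Windows, pip.exe lives in Python's Scripts folder, so "'pip' is not recognized" usually means that folder is missing from the PATH entries.

```python
import ntpath

# Hypothetical helper illustrating the PATH fix above: pip resolves only if
# Python's Scripts folder is among the PATH entries. The default path here
# is an assumption, not your actual install location.
def scripts_on_path(path_entries, scripts_dir=r"C:\Python310\Scripts"):
    normalized = {ntpath.normcase(p.rstrip("\\")) for p in path_entries}
    return ntpath.normcase(scripts_dir) in normalized

broken = [r"C:\Windows\System32", r"C:\Python310"]
print(scripts_on_path(broken))                              # False: pip not found
print(scripts_on_path(broken + [r"C:\Python310\Scripts"]))  # True: pip resolves
```

Checking the "Add Python to PATH" box in the installer does exactly this append for you.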