Endangered AI

Welcome to Endangered AI, your go-to destination for all things Artificial Intelligence! Our mission is to make the complex world of AI accessible, engaging, and understandable for everyone. Whether you're a beginner just dipping your toes into the realm of AI, or a seasoned professional seeking advanced knowledge, we've got something for you. Our comprehensive video guides cover a wide array of topics, from basic introductions to intricate tutorials on the latest AI models like GPT-4. We delve into everything from coding and programming, to the ethical considerations of AI and its impact on society. Join us as we journey through the fascinating landscape of AI and uncover its limitless potential together. Subscribe now and become a part of our AI Explorers community!

Comments

  • @mikemcaulay9507 · 5 hours ago

    Thanks, I'm new to this area, sort of. I'm a bit of a jack of all trades, master of some, as a software developer of 30 years. I've played with everything, including design, and have set my sights on AI in general the last year and a half. My primary skill set resides in software development, so I've been most focused on using things like LLMs as part of a hybrid solution where most of it consists of traditional system development, with AI filling in the crucial gaps where it's most powerful. Yet I'm still finding that exploring the vast array of AI tools out there provides excellent insights across the board. I learn a lot by doing and experimenting, and have been hitting a bit of a frustration wall with Scenario, as I've had trouble finding real examples doing the kinds of things I've wanted to. For example, I have a model I've worked on that is essentially a character I made in an RPG. I'm playing with using the character in crossover media, so I really wanted to nail her look. But I've struggled to get the close-up faces to appear on the larger, full-body poses. I'm sure this is trivial, but my brain just needs a decent example to see how to achieve it for me to get it. I figure canvas in Scenario is probably a good way to go, but it's been driving me nuts. Anyway, thanks for putting this video up. What you've done has given me a good place to start poking around in achieving my goals.

  • @DexterBee-zv9io · 2 days ago

    Nah, stay away from Windows, they are getting too evil... I switched from Win 11 to Linux some 6 months ago, best decision ever.

  • @NGIgri · 2 days ago

    Thx a lot! The best video on IPAdapter basics!

  • @pancat422 · 5 days ago

    I've tried to install all the models to the corresponding folders: 2 clip_vision models to /models/clip_vision, 10 ipadapter models to /model/ipadapter (newly created), and added "ipadapter: models/ipadapter" to extra_model_paths.yaml, but when I run the workflow (image -> prep image for clipvision -> ipadapter unified loader -> ipadapter) it still shows the error: Error occurred when executing IPAdapterUnifiedLoader: IPAdapter model not found. Not sure why that is 😢

  • @pancat422 · 5 days ago

    Solved: because I reuse the models from webui, when I put the models in webui/models they can be found.
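The fix described in this thread can be sketched as an `extra_model_paths.yaml` entry. This is a minimal sketch, assuming a standard A1111 webui layout; the `base_path` and the `ipadapter` folder name are assumptions to adjust to your own install:

```yaml
# Sketch of extra_model_paths.yaml — paths are assumptions, adjust to your install
a111:
    base_path: /path/to/stable-diffusion-webui/   # assumed webui root
    checkpoints: models/Stable-diffusion
    clip_vision: models/clip_vision
    ipadapter: models/ipadapter   # folder holding the IPAdapter models
```

With a mapping like this, ComfyUI resolves the listed folders relative to `base_path`, which is why moving the models under webui/models made them visible.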

  • @pancat422 · 5 days ago

    But actually the "IPAdapter Unified Loader" node can process the workflow, while the "IPAdapter Model Loader" node still cannot find the models...

  • @valorantacemiyimben · 7 days ago

    Hello, I did not receive any workflow at my e-mail address.

  • @CDIGS-EI-hv3cf · 8 days ago

    Hey, your Discord link is not working... thank you for the video!

  • @CDIGS-EI-hv3cf · 8 days ago

    Also, your pod is not running correctly. There is an issue with Gradio; please check that! The pod's not running.

  • @chisler6192 · 9 days ago

    Thank you so much! I'm looking into integrating ToonCrafter as an Adobe Animate / Toon Boom plugin. Would that help? In which form would you say it'd be the most useful?

  • @gammingtoch259 · 14 days ago

    Note/update for all: I've tested it using SDXL IPAdapter Plus + inpainting (Fooocus / BrushNet / Differential Diffusion) and it works very nicely, with better results (more consistency). I've also tested it with TensorRT + image generation + latent upscale (here you need a lower s_noise, 0.88 in my case, in the samplers) and it works very well, plus SDUpscaler with TensorRT. I love you man, thank u very much!!!

  • @erikwettergren · 14 days ago

    Looks like a very useful flow; however, I can't seem to run it. I get this error message: "Error occurred when executing MeshGraphormer-DepthMapPreprocessor:" followed by a number of missing-file messages. Any ideas?

  • @EndangeredAI · 11 days ago

    Are you running it locally? Inside a venv? It sounds like something may have gone wrong when installing the nodes. I would try reinstalling them. If you still have an issue, run Comfy inside a venv and try the installation again.

  • @eleneitor · 15 days ago

    For me it has been impossible, it always gives me this error: RuntimeError: The shape of the 2D attn_mask is torch.Size([77, 77]), but should be (1, 1).

  • @EndangeredAI · 11 days ago

    It sounds like you're using an attention mask, and the source image is too big, and therefore the mask size is too big. What size image are you using for the input?

  • @eleneitor · 11 days ago

    @@EndangeredAI I used 320x512 images

  • @isaaclund1715 · 17 days ago

    Hey, love these tutorial videos. Can you do a tutorial on how to install ToonCrafter on ComfyUI and connect to an external GPU? Thanks! There's an open version available on GitHub by kijai.

  • @gammingtoch259 · 17 days ago

    Bro, question: can it be used with everything (IPAdapter, ControlNet, other)? Or is it like TensorRT, which has limitations and can't work with IPAdapter or ControlNet modules?

  • @EndangeredAI · 15 days ago

    Should work with all, I’ve used it with ipadapter

  • @gammingtoch259 · 15 days ago

    @@EndangeredAI Thank u very much bro! Its amazing !

  • @mmrawrr · 17 days ago

    AYS is really very powerful. I was working with img2img at moderate denoise (0.45 to 0.55), playing with DenseDiffusion and area conditioning; adding AYS improves the quality and coherence a lot.

  • @-Belshazzar- · 17 days ago

    Thanks for the video; personally I'm not impressed, meh.

  • @EndangeredAI · 11 days ago

    Yeah, some people have been impressed, others haven't. I personally find it generally gives me slightly better results than not using it.

  • @ofQuestion · 17 days ago

    But if the ToonCrafter link is free to generate with and also lets you download the zip with the code, why do you have to pay to use it on a PC?

  • @EndangeredAI · 11 days ago

    It's free, but you need a powerful GPU. I use RunPod to rent a very powerful GPU and run it on their Linux operating system.

  • @ofQuestion · 17 days ago

    I don't understand anything! Can't I install it like a simple PC program?

  • @EndangeredAI · 11 days ago

    You can, but you need a very powerful GPU to run it, although I imagine they have made some optimisations since I made the video. I run it on RunPod, because in the event that what you want to do requires a GPU with more than 24 GB of VRAM, RunPod has more powerful options.

  • @iblackstar · 18 days ago

    The attention mask portion blew my mind. It changes everything for me and makes separating the aspects of ipadapter accessible. Thanks so much!

  • @EndangeredAI · 15 days ago

    Great to hear!

  • @rosteliokovalchuks215 · 22 days ago

    👍👍

  • @captainoctonion9045 · 22 days ago

    I'm already paying waaay too much for AI in this economy, so I stick to Comfy; any Comfy tutorials are welcome.

  • @EndangeredAI · 22 days ago

    I know that feeling! At least Scenario has a free tier!

  • @Andro-Meta · 22 days ago

    Great work!

  • @EndangeredAI · 22 days ago

    Thank you! Cheers!

  • @dangerousmindgames · 23 days ago

    Doesn't work. Install failed.

  • @EndangeredAI · 22 days ago

    What went wrong? With some context I can suggest what to do

  • @dadekennedy9712 · 23 days ago

    I prefer longer-format videos, so I don't get distracted having to wait for other parts.

  • @EndangeredAI · 23 days ago

    Thanks for bothering to answer! Your input is appreciated! Once I finish up part two, I’ll ship the long video as well.

  • @kineticmotive4466 · 23 days ago

    Do you know how to do facial expression manipulation similar to FaceApp using Stable Diffusion? I'm simply trying to create an expression sheet with the same character, FaceApp gives the best results but it only provides the "Smile" expression. ReActor gives decent results, but not as good as FaceApp.

  • @JuicyBurger29 · 23 days ago

    It ended up being horrible at everything human XD

  • @EndangeredAI · 23 days ago

    So true. Unless you pay for the api, which has none of the goodies like controlnet 😭

  • @JuicyBurger29 · 23 days ago

    @@EndangeredAI bruh. :|

  • @HalkerVeil · 25 days ago

    Why is he holding his shield with a baby arm? lmao wtf

  • @EndangeredAI · 25 days ago

    Omg I never noticed that 😅😂

  • @Spot_the_Difference · 25 days ago

    what model are you using?

  • @EndangeredAI · 25 days ago

    Let me double check but I think it was vanilla sdxl

  • @Spot_the_Difference · 25 days ago

    What's the Discord URL?

  • @EndangeredAI · 25 days ago

    discord.gg/gd9G84bm

  • @philzan3627 · 26 days ago

    I checked with several other sources and this is another cherry-picked model that is super heavy. You're better off using EbSynth or ComfyUI rotoscoping or something of that nature for better results. Those results with the snake? That's the average result, so you have to run this repeatedly until you find something remotely acceptable.

  • @ViralKiller · 26 days ago

    Why can't these GitHub folks just give you a .bat or .exe file? Everything is so ultra complicated...

  • @ItsThePirate · 27 days ago

    Continue posting, stay encouraged, you are informative.

  • @EndangeredAI · 22 days ago

    Thanks!

  • @Hazardteam · 29 days ago

    This guy is rare ultra annoying. Who the 'ck want to see 2 hand gestures in FULL TOTAL into the face??

  • @Macatho · 29 days ago

    Why isn't there a setting to just inpaint the mask with the highest probability value? I find that super easy in A1111's ADetailer; it usually gets the job done.

  • @GForcenuwan · 29 days ago

    Very well explained! thank you!!

  • @EndangeredAI · 11 days ago

    Glad it was helpful!

  • @SmartMoneyRyan · a month ago

    If the prompts get more advanced, your steps need to go up; the default of 20 works for simpler images. Albeit I think the AlbedoBase model might be better. *Edit: after trying Stable Diffusion 3, I've determined AlbedoBaseXL is far, far better.

  • @EndangeredAI · 28 days ago

    Thanks for the input! I'm going to pin this as it's great advice!

  • @SmartMoneyRyan · 28 days ago

    @@EndangeredAI Thanks, yeah, after playing with SD3 for a few days it seems better at following prompts, but the images aren't as good; it has difficulty with more niche objects it probably wasn't trained on. The library it was trained on seems smaller, maybe. I only have 12 GB of GPU memory, so I can't run the newer, bigger model; it's not in your video but it's 11 GB I think, they added it to Hugging Face, maybe that's why. The model in your video uses about 9 GB of VRAM when rendering, ish, so the bigger model probably wouldn't work for me. I generate larger images though, like hero images for websites and such. AlbedoBaseXL doing a 1080p image uses around 11.7 GB of VRAM, completely maxing my VRAM, lol. But it works well for graphics/scenery and non-animate objects like cars, lawn mowers, etc., not humans, which is what I use it for.

  • @theGreatQAwakening · a month ago

    THANK YOU SO MUCH BROTHER! <3

  • @EndangeredAI · 11 days ago

    Happy to help

  • @Nid_All · a month ago

    I use Gemini 1.5 Pro to make prompts and it is super cool tho

  • @AlexDisciple · a month ago

    This is a great explanation to get started. When can we expect #2?

  • @chaNo121 · a month ago

    The discord link is not working, amazing video.

  • @Zteedify · a month ago

    Is anyone else getting this error after trying to run python gradio_app.py?

    AE working on z of shape (1, 4, 32, 32) = 4096 dimensions.
    checkpoints/tooncrafter_512_interp_v1/model.ckpt
    Traceback (most recent call last):
      File "/workspace/ToonCrafter/gradio_app.py", line 79, in <module>
        dynamicrafter_iface = dynamicrafter_demo(result_dir)
      File "/workspace/ToonCrafter/gradio_app.py", line 29, in dynamicrafter_demo
        image2video = Image2Video(result_dir, resolution=resolution)
      File "/workspace/ToonCrafter/scripts/gradio/i2v_test_application.py", line 32, in __init__
        model = load_model_checkpoint(model, ckpt_path)
      File "/workspace/ToonCrafter/scripts/evaluation/funcs.py", line 140, in load_model_checkpoint
        load_checkpoint(model, ckpt, full_strict=True)
      File "/workspace/ToonCrafter/scripts/evaluation/funcs.py", line 115, in load_checkpoint
        state_dict = torch.load(ckpt, map_location="cpu")
      File "/usr/local/lib/python3.10/dist-packages/torch/serialization.py", line 815, in load
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
      File "/usr/local/lib/python3.10/dist-packages/torch/serialization.py", line 1033, in _legacy_load
        magic_number = pickle_module.load(f, **pickle_load_args)
    _pickle.UnpicklingError: invalid load key, '<'.
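The `invalid load key, '<'` at the bottom of the traceback above usually means the downloaded model.ckpt is actually an HTML page (e.g. a login, rate-limit, or 404 page saved under the checkpoint's name) rather than the pickled checkpoint itself, so `torch.load` fails on the very first byte. A quick sketch to check for that; the helper name is mine, not part of ToonCrafter:

```python
def looks_like_html(path: str) -> bool:
    """Return True if the file starts with '<' — a sign it's an HTML error page
    saved in place of a pickled checkpoint, which should be re-downloaded."""
    with open(path, "rb") as f:
        return f.read(1) == b"<"

# Usage sketch, with the path taken from the traceback above:
# looks_like_html("checkpoints/tooncrafter_512_interp_v1/model.ckpt")
```

If this returns True, delete the file and download the checkpoint again from the model's release page.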

  • @jffaust · a month ago

    Excellent elocution and content, but I really wish there were chapters in the videos for easier navigation. I almost didn't watch because of that.

  • @hempsack · a month ago

    I will not even install the model because of all the license restrictions with it; I am not taking chances on these images getting mixed up with stuff I can use without a license and getting into serious trouble. I will not be obligated to their greedy ways and wind up getting sued. People are furious about this because Stability AI is, from what I'm hearing, going broke and needs to punish the main people that made them what they are today. Totally wrong IMO.

  • @rosteliokovalchuks215 · a month ago

    🤙

  • @swannschilling474 · a month ago

    It might still be a good model once we jailbreak it! 😂

  • @EndangeredAI · a month ago

    I think so too

  • @swannschilling474 · a month ago

    Thanks for this one! 😊

  • @martaalfieri9422 · a month ago

    ComfyUI workflow? Thanks!

  • @MilesBellas · a month ago

    Nerdy Rodent and Olivio Sarikas didn't show images from SD3 because they stated it would be against Stability AI's policy.

  • @EndangeredAI · a month ago

    Could you link where they said that? I didn’t interpret it that way.

  • @MilesBellas · a month ago

    @@EndangeredAI Their latest videos...... Videos = Money = Commercial License

  • @EndangeredAI · a month ago

    I’m pretty sure they clarified that was for anyone selling inference and not commercial use of images generated. I’ll have a look at their latest videos, I’m a little behind

  • @MilesBellas · a month ago

    @@EndangeredAI Maybe they were exaggerating to make a point ? Matt Wolfe complained too, unfortunately.

  • @EndangeredAI · a month ago

    Regardless, if that were the case, I would qualify under the license, as I am preparing a comparison video with the SD3 API, which can only be used by paying for SD3, so it works out either way.

  • @skroudge · a month ago

    Can I download and run this on a potato PC (low-end PC)?

  • @EndangeredAI · a month ago

    If you have 24gb vram 😅, you’re better off trying it on runpod

  • @skroudge · a month ago

    @@EndangeredAI I will try it on Hugging Face 😅

  • @HalkerVeil · a month ago

    @@EndangeredAI So it DOES work on 24 GB?

  • @EndangeredAI · a month ago

    @@HalkerVeil When it first came out it was intermittent, but I think they have issued optimisation improvements since. You can also decrease the frame size.

  • @HalkerVeil · a month ago

    @@EndangeredAI So we're forced into the A100 market at about $8,000? There must be something out there as a middle ground.

  • @Yamoyashi · a month ago

    26 gb vram lmaoo

  • @1lllllllll1 · a month ago

    SD3 DOA

  • @EndangeredAI · a month ago

    Actually, I’ve been using the model the last few days and it grows on you once you learn how to mold it. If the safety features can beg overcome this could be quite the model. I have a video coming on tips and tricks to get the most of it