AnimateDiff CLI prompt travel: Getting up and running.
This video is a quick overview of getting the repo up and running on your PC. Although the tutorial is for Windows, I have tested on Linux and it works just fine; just be sure to adjust any paths and commands to their Linux equivalents.
The repo is located here:
github.com/s9roll7/animatedif...
The prompt template file can be copied from here:
pastebin.com/vYwZH4Wt
Be sure to name it prompt.json so that you can follow along with the video.
The motion_modules can be found on the main AnimateDiff repo where you will be offered different sources to download them from:
github.com/guoyww/AnimateDiff
or you can go directly to the huggingface space that has them:
huggingface.co/guoyww/animate...
The model used in this video was downloaded from CivitAI:
civitai.com/models/25694/epic...
The specific one used was the epicrealism_naturalSinRC1VAE.safetensors model.
Comments: 162
Really appreciate that you go into detail about every little step, like the fact that you can hit Tab to auto-complete folder names on the command line, which other tutorials might skip over and presume that everybody knows. I know that one, but there's often some little thing that isn't explained in a video which makes it hard to replicate what they're doing.
@c0nsumption
9 months ago
Thanks dude. I agree. The VSCode tutorials where they edit multiple lines or rename multiple variables at once but never say how 🙄 Used to drive me nuts as a beginner.
Finally concise and super useful guides. Subbed! Thanks!
Works like a charm - thank you for the tutorial!
@c0nsumption
9 months ago
Awesome. Happy you’re up and running. More tutorials and prompts on the way ☝🏽
You just earned a sub. Continue doing your work dude the whole community appreciates it. You rock.
Thank you for this tutorial man. You definitely have a knack for it! Prompt Travelling is really the type of synthography output I was waiting for.
very cool! thanks for sharing the process
Cool walkthru! It took me less time to get it going than your tutorial which means your instructions are clear and well paced. I do wish the voice was turned up a bit. 9.9/10 on content. Well done!
@c0nsumption
9 months ago
Was nervous it was too loud. Thanks a lot for the positive feedback 🙏🏽
These are the genuinely good results I have been looking for!! There were some hiccups in the installation, but I managed to resolve them by reading the error messages during the install.
good tut, yeah please do more 👍
Do you have a vid for this AD CLI for comfyui? This is so detailed and well done
so cool!
Thank you for the tutorial. Any chance you can do one for using stylize create-region?
Hey, thank you very much for doing this tutorial! Very clear, and well explained. Just a question, in which folder do we put the loras?
@a.dejavu
9 months ago
Never mind, just saw your next video :)))))
Ridiculous how almost every single tutorial assumes that you know all this pedantic shit. Great tutorial
Thanks! works great! would love to know how to maybe use stylize to run this over a video.
@c0nsumption
9 months ago
I can make that. Currently about to drop the next video for this: IPAdapters and LoRAs
this is so fuckin helpful man , really great video
Great tutorial, thanks! What if I've already installed A1111 and ComfyUI? Do I need to install all the PyTorch, xformers, etc. stuff again?
@c0nsumption
9 months ago
Well, that's why we make a virtual environment. There may be differences in dependencies like PyTorch, xformers, NumPy, etc.; each app may use different versions that could break the others. The venv keeps everything contained in its own environment, specifically for that application. I can't say for sure as I haven't tested, but I'd assume you will have to.
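To make the venv idea concrete, here's a minimal sketch (Linux/macOS form shown; on Windows the activate step is venv\Scripts\activate, and demo_venv is just an illustrative name):

```shell
# Create an isolated environment just for this project
python3 -m venv demo_venv

# Activate it (Linux/macOS); packages now install here, not system-wide
. demo_venv/bin/activate

# This interpreter is the venv's own copy, not the system Python
python -c "import sys; print(sys.prefix)"
```

Anything you pip install while the venv is active stays inside demo_venv and can't clash with A1111's or ComfyUI's packages.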
I keep getting Torch not compiled with CUDA enabled problem. I tried delete the venv folder and reinstall but no dice
@c0nsumption
9 months ago
reddit.com/r/animatediff/s/lSa5FgRqtJ
Thanks for this! Question: If I already have a folder of models, is there any way to point it to that? so I don't have duplicate checkpoints taking up space on my poor c drive?
@c0nsumption
9 months ago
Hmmm 🤔 It has to be a subpath of the root project directory. I'm super busy right now, but if you give me a couple of days I can modify the source to allow for this. I'm building a front end for it anyway, so it's relevant to a personal project 👍🏽
this looks amazing! Do you know if there's a way to implement adetailer to make face looks better and consistent ? Wether in ComfyUI or A1111 ?
@c0nsumption
9 months ago
You could run your output frames through it in Auto1111 for the time being 🤷🏽♂️ Use the two in conjunction
Great tutorial, thanks! Does AnimateDiff work only with SD 1.5 models, or do SDXL models work too?
@c0nsumption
9 months ago
At the moment only SD 1.5. If SDXL support comes, I genuinely believe it will be absolutely revolutionary. Btw, I just uploaded another video for this that adds in LoRAs, Embeddings, and IPAdapters (image prompts).
@MrPlasmo
9 months ago
@@c0nsumption thx yeah i'm using SDXL more than MJ these days - yep saw your new vid!
BTW, how do you add a source image? IIRC you mentioned something on Reddit about "IP Adapters", but I'm not sure how that works.
@c0nsumption
9 months ago
Just released a new video on the subject! Check my page :)
Not sure what I'm doing wrong, but when I install the dependencies, torchaudio and torchvision require torch 2.1.0+cu118, but xformers uninstalls that and replaces it with a different torch. I can't install both xformers and torch 2.1.0+cu118; they keep uninstalling each other. Any ideas? What are the actual requirements? pip says xformers 0.0.22 is not compatible with torch 2.0.1+cu118, but the guide seems to require both. Edit: not sure what I did to fix it, but after fighting with pip for a while, specifying torch==2.0.1+cu118, and installing a few other requirements like pandas, I got it working. Pretty wild output!
@c0nsumption
9 months ago
Worse comes to worst, follow the original repo. Just follow the instructions to the T and ignore the warnings:

git clone github.com/s9roll7/animatediff-cli-prompt-travel.git
cd animatediff-cli-prompt-travel
python -m venv venv
venv\Scripts\activate
python -m pip install torch torchvision torchaudio --index-url download.pytorch.org/whl/cu118
python -m pip install -e .
python -m pip install xformers

After that, run the CLI command. You may get a warning to pip install a missing package after that. Again, don't get bogged down in the details, just follow the instructions, then post any errors on Reddit with a photo reference. Whatever it uninstalls and reinstalls to do what it needs is nothing to worry about. First ensure you follow the directions; if it still doesn't work when running, then explain the issue.
@ZIOJONES
9 months ago
Same problem here. This is what happens when I try to install xformers with "python -m pip install xformers":

Installing collected packages: torch
  Attempting uninstall: torch
    Found existing installation: torch 2.1.0+cu118
    Uninstalling torch-2.1.0+cu118:
      Successfully uninstalled torch-2.1.0+cu118
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchaudio 2.1.0+cu118 requires torch==2.1.0+cu118, but you have torch 2.0.1 which is incompatible.
torchvision 0.16.0+cu118 requires torch==2.1.0+cu118, but you have torch 2.0.1 which is incompatible.
Successfully installed torch-2.0.1

Have you figured out how to solve this? Thanks.
@c0nsumption
9 months ago
@@ZIOJONES after installing xformers, pip uninstall torch and then run the full torch command again. There’s a post on the r/animatediff subreddit
Hi! What I gathered is that the CLI prompt travel changes the prompt over the course of the animation in AnimateDiff. Is there any way to do this in Automatic1111?
@c0nsumption
9 months ago
So far, no. Best to start learning tools other than Auto1111. It has its place, but more advanced workflows, especially in ComfyUI, are available and more are on the way.
Thank you so much, Sir! A little question. How to use a picture as the input together with prompt to generate a video like ip_ref.mp4 in your repo? Could you give step-by-step instructions? Thanks again. ---A new subscriber :)
@c0nsumption
9 months ago
No worries :) Also, this isn't my repo, just my workflow for it! I made a video on IPAdapters (image prompts) here: kzread.info/dash/bejne/e6yjutNygMy2mdY.htmlsi=WGLUgjL9a_7f3egP
@frankzheng7925
9 months ago
Thank you, Sir! Do I need to download and install IPAdapters and ControlNet like LoRAs and Embeddings? Or they have been included by default?
@c0nsumption
9 months ago
@@frankzheng7925 Once you run the prompt in the next video, it should automatically install. But make sure to be connected to the Internet for the initial run so that it installs properly. Also, there is a point in the video where I pip install mediapipe. Make sure not to miss that step, as it comes up briefly.
@frankzheng7925
9 months ago
Thank you, Sir!@@c0nsumption
You just earned a new sub!
@c0nsumption
9 months ago
Welcome aboard! I just released another vid for IPAdapters, LoRAs, and embeddings. It will get you closer to what you're seeing on my Reddit and Twitter (o゚v゚)ノ
If the output is generating still images but not the video file, what would be the solution? As far as I know, I installed everything to the letter and followed closely with each of your instructions. It gets to 126/128 frames and gives the following error: FileNotFoundError: [WinError 2] The system cannot find the file specified.
@c0nsumption
9 months ago
Sorry, I’m off to Muay Thai at the moment. I will try to help after posting tonight’s tutorial. There’s also this discord where people may be able to help: discord.gg/ZRQzfuGP
@c0nsumption
9 months ago
I believe you have to download ffmpeg. Similar users have shown similar errors but before that string it usually mentions ffmpeg. Would you be willing to share more of the error message?
@metaivisuals
8 months ago
You can just use ffmpeg directly. Assuming your frames are named 001.png through 999.png:

ffmpeg -framerate 25 -s 256x384 -i %03d.png output.mp4
Hello, thank you very much for the tutorial. I'm having trouble with video export; the frames come out black. Please help. Could it be the GPU or CUDA?
@leretah
9 months ago
My video graphics card is an NVIDIA GeForce GTX 1660 Super.
@c0nsumption
9 months ago
What size are you generating at? Can you share your error and prompt file on r/animatediff subreddit for clarity? You can erase prompts if you don’t want to share that info
@leretah
9 months ago
The same as in your awesome tutorial. Sure, I'm heading to Reddit now, thank you @@c0nsumption
I can't install the torch lib; I'm getting the same error again and again. I tried the latest version from the official page, but it's still not working. Any idea?

ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
@c0nsumption
8 months ago
The most recent version isn’t what’s needed for the project. Go to the AnimateDiff CLI prompt travel repo: github.com/s9roll7/animatediff-cli-prompt-travel
do you have to activate virtual environment everytime you want to run this?
@c0nsumption
9 months ago
Yes, the venv is a container of all the Python modules the project uses. That's why Auto1111, ComfyUI, and the CLI don't clash; they all use similar things but maybe different versions. This prevents them from overwriting one another's requirements.
can we do image to image?
It didn't work! One of the modules it said I was missing was called "triton". I tried installing triton, but it couldn't find a compatible version. It tried to generate my video anyway, but nothing was in the output folder other than a ".gitignore" file when I checked. Not sure what I'm doing wrong.
@terbospeed
8 months ago
Triton isn't needed and that error is normal, there may have been another error in the stack trace.
In your executed command, can you explain again what -L and -C exactly are? (-W and -H are obvious.) Thanks!
@c0nsumption
9 months ago
-c is the path to the JSON config, relative to the root project directory
-W is the width
-H is the height
-L is the length (number of frames)
-C is the frame context
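Putting those flags together, a generation command might look like this (the config path and values here are illustrative assumptions, not the video's exact ones; run it with the project's venv activated):

```shell
animatediff generate -c config/prompts/prompt.json -W 512 -H 768 -L 128 -C 16
```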
@christopherzou2369
9 months ago
What is frame context? @@c0nsumption
@cgonestudio2752
9 months ago
As I understand it, it's the prompt for the image you want to see. @@christopherzou2369
Thanks for the video. It is very helpful. I am having some issues when I run the installing xformers step. I get an error "ModuleNotFoundError: No module named 'torch'". I have tried a couple of times now using your instructions and also the instructions on the repo. And I've tried a couple of different versions of torch too. Have also checked that torch is installed with "pip list" after the torch install and it's there on the list. I'm stumped.
@c0nsumption
8 months ago
So far, the solution I've heard works for this is:

STEPS:
git clone github.com/s9roll7/animatediff-cli-prompt-travel.git
cd animatediff-cli-prompt-travel
python -m venv venv
venv\Scripts\activate
python -m pip install torch==2.1.0+cu118 torchvision torchaudio --index-url download.pytorch.org/whl/cu118
python -m pip install -e .
python -m pip install xformers
python -m pip install torch==2.1.0+cu118 torchvision torchaudio --index-url download.pytorch.org/whl/cu118

Original Reddit post: reddit.com/r/animatediff/s/oDT7rtDv5N
@darrynrogers204
8 months ago
@@c0nsumption Thanks for the help and the link. I'll have another try.
How do you slow down the change? Even if I only use 4, the image changes non-stop throughout the video.
@c0nsumption
9 months ago
If you only use 4 what? You change the keyframe intervals in your prompt map. I'll make a vid for this.
@xcixxcix8659
9 months ago
Under prompt map, I guess those are called keyframes, at 0, 16, 32, etc. Do I have to turn down the frames per second? It seems like when I go higher, the changes come faster but it looks smoother. Great stuff though. I appreciate it.
@c0nsumption
9 months ago
@@xcixxcix8659 You could turn down the frames per second. You can also change those values, add even more (like 0, 8, 16), or space them out (like 0, 16, 32). The possibilities are endless, honestly.
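For reference, here's a minimal sketch of what such a prompt map can look like in the JSON config. The keys are frame numbers at which each prompt takes effect; the prompts and spacing below are illustrative assumptions, not the exact file from the video:

```json
{
  "prompt_map": {
    "0": "1girl, walking on the beach, sunset, warm light",
    "8": "1girl, walking on the beach, night sky, stars",
    "16": "1girl, walking on the beach, sunrise"
  }
}
```

Tighter spacing between keys (0, 8, 16) makes the prompt change sooner; wider spacing (0, 16, 32) slows the transitions down.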
cool
Now they have a LCM lora support ;)
Thanks for the guide; I can confirm that it works (slowly) for me on a 1080. I would be very interested in a version for Google Colab. I know pretty much nothing about Linux commands, but it should be possible to translate it into Linux commands, I assume?
@c0nsumption
9 months ago
Already tested on Linux; works properly. Just remember that on Linux, creating and activating the venv is:

python3 -m venv venv
source venv/bin/activate

When passing your frames to be refined, you should also use forward slashes (Linux/paths/) instead of backslashes (\windows\paths\).
@elowine
9 months ago
@@c0nsumption Thanks for the reply, that's exactly where I got stuck haha! Getting this error when I try to run animatediff: "/usr/bin/python3: No module named animatediff"
@elowine
9 months ago
@@c0nsumption Figured it out; you can use this command to set the correct location. I used this one for now: %set_env PYTHONPATH=/content/animatediff-cli-prompt-travel/src
@c0nsumption
9 months ago
@@elowine can you share on r/animatediff subreddit for others trying to accomplish the same?
@elowine
9 months ago
@@c0nsumption will make a simple post tomorrow :)
do we need stable diffusion for this?
@metaivisuals
8 months ago
No, it can be a separate thing, like he shows; you'll need to enter everything via a cmd window. But you can also install it for Stable Diffusion or ComfyUI (check the other tutorials on this channel).
Thank you. This is awesome!!! ♥
I got up to "python -m pip install -e ." and it's giving me "does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found". Any tips?
@c0nsumption
9 months ago
Can you post screenshots on r/animatediff subreddit? That’ll help get some more info. Seems like there are files missing from the git clone command 🤔
@c0nsumption
9 months ago
You made sure to 'cd animatediff-cli-prompt-travel' beforehand, right? 🤔
Is there Mac support?
I get this error: FileNotFoundError: [WinError 2] The system cannot find the file specified, while encoding interpolated frames with ffmpeg.
@c0nsumption
9 months ago
You need to show more of the error message for clarity, but I'm assuming you need to install ffmpeg; not the Python package, but ffmpeg itself. Please share issues on r/animatediff for more help from the community.
Hi, I installed without any error, but when I start loading the program it gives me this error: AssertionError: Torch not compiled with CUDA enabled. Can you help me with how to deal with this? Thanks.
@c0nsumption
9 months ago
Check the r/animatediff subreddit. A few others have experienced similar problems and are posting for help. reddit.com/r/animatediff/s/BeBztmBxJl
Can I install this on mac?
@c0nsumption
9 months ago
No idea 🤷🏽♂️ sorry. Only tested on Windows and Ubuntu
I followed the instructions but I got this error: AssertionError: Torch not compiled with CUDA enabled. Do you guys have any idea how to solve that? Thank you.
@c0nsumption
5 months ago
pip install torch torchvision torchaudio --extra-index-url download.pytorch.org/whl/cu121 Try that.
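After reinstalling, a quick sanity check (run inside the activated venv, assuming torch installed successfully) shows whether the build can actually see your GPU; if it prints False, you still have a CPU-only build:

```shell
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```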
@ataberkseker
5 months ago
@@c0nsumption It said all the requirements are satisfied, and the same error occurred. Thank you for your answer.
@c0nsumption
5 months ago
@@ataberkseker pip uninstall torch, then do the command above. You may need to uninstall torchvision and torchaudio as well. Python packages can be a pain in the butt, I know.
@ataberkseker
5 months ago
@@c0nsumption Ahh dude, I'm grateful for your help. I was trying to get a single output for days, and it finally worked. Just a single problem: after it reached 100%, this error popped up: FileNotFoundError: [WinError 2] The system cannot find the file specified. The images were created, but the mp4 file couldn't be. Another point I'm curious about is whether there is a command or a line of script that forces my laptop to use its GPU instead of the CPU. Thank you for your time again.
@c0nsumption
5 months ago
@@ataberkseker What kind of GPU have you got? It also sounds like you are missing ffmpeg.
Hi man I got the images but had some error with ffmpeg, and didn’t get the video. Any idea what to do ?
@Prabhakaranraj
9 months ago
FileNotFoundError: "[WinError 2] The system cannot find the file specified". I'm not a coder, so it may be a simple problem.
@c0nsumption
9 months ago
You have to install ffmpeg onto your computer. Here's from ChatGPT:

Installing ffmpeg depends on your operating system. Here's a brief guide for various systems:

1. Ubuntu and Debian-based distributions:
sudo apt update
sudo apt install ffmpeg

2. CentOS and RHEL:
First, you might need to enable the EPEL and RPM Fusion repositories:
sudo yum install epel-release
sudo rpm -v --import li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro
sudo rpm -Uvh li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm
Then install ffmpeg:
sudo yum install ffmpeg ffmpeg-devel

3. Fedora:
sudo dnf install ffmpeg

4. macOS (using Homebrew):
If you haven't installed Homebrew yet, you can do so by following the instructions at brew.sh. Once you have Homebrew:
brew install ffmpeg

5. Windows:
For Windows, it's a bit more manual:
a. Download a build from the official FFmpeg website: ffmpeg.org/download.html#build-windows
b. Extract the downloaded archive to a directory on your computer.
c. Add the bin directory from the extracted archive to your system's PATH. This will allow you to run ffmpeg from the Command Prompt.

Remember to always refer to the official documentation or the ffmpeg website for the most up-to-date information.
@Prabhakaranraj
9 months ago
@@c0nsumption thank you
@c0nsumption
9 months ago
Did that get it working? 🧍🏽♂️ Let's figure this 'ish' out.
@Prabhakaranraj
9 months ago
@@c0nsumption yes it did. Solved. Thanks a lot. Everything working Like a charm ✌🏼
installed everything, but get this error when trying to execute: "AssertionError: Torch not compiled with CUDA enabled" I reinstalled pytorch which I thought would fix this... any ideas?
@c0nsumption
9 months ago
Do you have Cuda toolkit installed on your computer?
@MrPlasmo
9 months ago
@@c0nsumption yes, I installed the windows installer cuda toolkit 12.2.2_537.13 still not working
@c0nsumption
9 months ago
You’re gonna have to post in the r/animatediff subreddit. This way we can get photos to help you with your problem. There’s just not enough information here to give you a solution
@MrPlasmo
9 months ago
@@c0nsumption i think i got it to work (thanks for your help) - what was happening is that I installed Pytorch on my C: drive, but my animatediff folders were installed in D: drive... I reinstalled pytorch into D:/ and the video compiled finally :)
@c0nsumption
9 months ago
@@MrPlasmo Can you please share this on Reddit? There's a post in the r/animatediff subreddit where people have encountered the same issue. I believe it could also have been fixed by adding CUDA to your environment variables, but it's been a long time so I'm not sure. Good job though!
Is there any way to remove the watermark from the images? I can see that you don't have them.
@c0nsumption
9 months ago
What motion module are you using? mm_sd_v15_v2.ckpt for the most part doesn't have any. Also check your negative prompt and the SD model you're using.
@johanrr4
9 months ago
@@c0nsumption I was using the normal v15 and the same sd model as you are using in the video, no negative prompt. I'll try with v15_v2.
@c0nsumption
9 months ago
@@johanrr4 v15 is known to have issues with watermarks. Use v2 for no watermarks.
@johanrr4
9 months ago
@@c0nsumption It's gone with the v2 version. Thanks!
Is there any difference using this versus the Automatic1111 GUI?
@c0nsumption
9 months ago
Prompt travel and ControlNet with AnimateDiff. You can directly influence the animation with these. Some of my more recent shorts show some examples. You can literally control facial expressions, poses, and more while it also interpolates the motions between each prompt per frame. That’s why we can now control things like blinking as well and what not
@eegernades
9 months ago
Interesting. Will have to try it out. Got a 4090 too, so this should be interesting. You a dev? I'm a Jr web dev rn.
@c0nsumption
9 months ago
@@eegernades Awesome. Yeah, I'm a dev, but it's extremely hard for me to find work because of my Latino descent and a really Spanish name; it's damn near automatic rejection. I can program in Python, JavaScript, and C# as my main languages, but I can easily keep up with a C++ project and other languages. I have experience in Unreal Engine and 10 years of experience using Unity, and have done contracts for both. I've also done front-end/back-end development work for different companies in the web3 space, and worked for Modern Chess for a little over a year. Really just trying to keep food on the table 😅 AI, media, graphics, and game development are things I am extremely passionate about, so I've invested all my time and money hoping to succeed in them. God willing it pays off soon; it's been extremely hard to stay afloat 🧍🏽♂️ Blessings man, good luck to you 🙏🏽
@Prabhakaranraj
9 months ago
Awesome man
@eegernades
9 months ago
@@c0nsumption Damn, I'm Latino too. My first name passes the white check, but I think my last name gives it away. I currently know JavaScript, SQL, some Python, and the non-programming HTML and CSS. You good with backend stuff? Currently playing with the idea of building a SaaS project with AI art.
will this work from google colab?
@c0nsumption
9 months ago
Another user tried and was successful. I've tested on a Linux rig and can confirm it works fine on Linux. Just remember that on Linux:

To create the virtual env: python3 -m venv venv
To activate: source venv/bin/activate
How much VRAM do we need for this?
@c0nsumption
9 months ago
Not entirely sure, but the repo says that at -W 256 -H 384 it took up 6 to 7 GB. I believe that was using ControlNets as well. I'll do some tests later; extremely busy at the moment 🙇🏽♂️
Great vid! how do you make it work on an AMD CPU?
@c0nsumption
9 months ago
???? You don’t have a graphics card? If you don’t, that’s something you would have to investigate
@Gripping100
9 months ago
I meant AMD GPU :)@@c0nsumption
@c0nsumption
9 months ago
@@Gripping100 github.com/s9roll7/animatediff-cli-prompt-travel Original repo doesn’t mention anything about AMD 🤔 ComfyUI has AMD support though. I’m making a prompt travel/IPAdapter tutorial for that possibly tomorrow.
@c0nsumption
9 months ago
@@Gripping100 otherwise you can check ComfyUI repo for AMD instructions and see if maybe they allow it to work. No promises though. It does say you need Linux though 🤔
The prompt file link doesn't work. Can you fix it?
@c0nsumption
9 months ago
Just tested; it works just fine. What's happening on your end? It should lead to Pastebin. You can either download it or copy and paste it into your own .json file on your computer.
Do you still need 12 GB of VRAM?
@c0nsumption
9 months ago
The repo says otherwise. I don't know for sure though; you would have to test. Run at lower settings like -W 256 -H 384. The repo says this takes 6 to 7 GB, but I cannot confirm.
hey, noob question, can I do the same on mac?
@c0nsumption
9 months ago
I've heard of projects allowing Stable Diffusion to run on Macs and their GPUs, but I've never invested the time to research and test, as I don't have a Macintosh anywhere near powerful enough to make it worth the time. It would be up to you to do the footwork. Blessings 🙏🏽
@internaldevices
9 months ago
Thank you very much. @jifeng123guo
@internaldevices
9 months ago
Thank you sir. Sending you blessings too.@@c0nsumption
I also have this problem: FileNotFoundError: [WinError 2] The system cannot find the file specified. Most likely it is related to ffmpeg. Please tell me how to download it and what to do to resolve this issue. I'm a newbie; thank you very much in advance.
@c0nsumption
9 months ago
It’s already in one of the other comments. Please read through the comments
After running, it says FileNotFoundError. Generation is at 100%, but saving frames stops at 98%. What might be the mistake? Newbie, thanks in advance! Great video btw.
@R3alOzero
9 months ago
Frames generated, but not the video
@c0nsumption
9 months ago
Need to install ffmpeg
Toooooo fast 👎
Saying ‘really quick’ when it’s 18mins is 😮