Clay-Style ComfyUI Workflows and Prompt Optimization

Science & Technology

Introduces two newly built clay-style workflows, image-to-clay and clay avatar, and covers the optimized prompt in its 3.0 and 3.1 versions.
Image to clay: openart.ai/workflows/datou/im...
Clay avatar: openart.ai/workflows/datou/cl...
LLaVA model: ollama.com/library/llava:7b-v...

Comments: 54

  • @AIDesigner5323 · A month ago

    The most detailed workflow I've ever seen, complete with model downloads and the parameter values to use. You are incredibly thorough.

  • @NeoAnifuture · A month ago

    It's very carefully produced, and often quite long. Thank you for sharing; it inspires me a lot. I'll keep watching.

  • @user-st2xl9un3z · A month ago

    I love Datou's workflows and videos. Liked!

  • @user-st2xl9un3z · A month ago

    About Datou's workflow: I'm the one whose AUX depth-anything node errors out every time, so I had to switch to marigold.

  • @Datou1977 · A month ago

    @@user-st2xl9un3z marigold's depth maps are more precise and cleaner, so the results are better; it's just a bit slower.

  • @35wangfeng · A month ago

    👍👍👍 Great work

  • @YiZhouZhou · A month ago

    "Hey, you still need to improve, classmate." Haha, Datou is funny.

  • @donggua-666 · A month ago

    Brother Wang, you're awesome.

  • @user-sm3yd7zr1c · 29 days ago

    Datou, can my 2070S graphics card handle the llava:7b-v1.6-mistral-fp16 model?

  • @lgyhz9640 · A month ago

    Thanks for sharing. After several days of fiddling I can finally generate images, but my results fall short of the examples on openart. How should I adjust things?

  • @Datou1977 · A month ago

    I can't tell what the gap is without seeing it. Besides, if the models match, running the workflow end to end should give the same results; they shouldn't differ by much.

  • @user-hg5rz1cp2k · A month ago

    I tried Ollama Vision with the llama3 model and it couldn't recognize images. Only after switching to the WD 1.4 tagger to extract the features, then having Ollama rework them, did the pipeline go through.

  • @Datou1977 · A month ago

    llama3 is blind, buddy. For image recognition use llava-phi3:3.8b-mini-fp16.
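
    For anyone testing a vision model outside ComfyUI first, here is a minimal sketch using the official ollama Python client; the model tag is the one named above, and the image path is a placeholder for your own file.

        import ollama

        # Assumes the Ollama server is running locally and the model has
        # already been pulled (ollama pull llava-phi3:3.8b-mini-fp16).
        response = ollama.chat(
            model="llava-phi3:3.8b-mini-fp16",
            messages=[{
                "role": "user",
                "content": "Describe this image for a clay-style prompt.",
                "images": ["input.png"],  # placeholder path
            }],
        )
        print(response["message"]["content"])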

  • @charlshuang · A month ago

    Teacher Datou, the sigmas node shows up red, and I can't find it in the Manager either. Which plugin is it from? I'm using a bundled ComfyUI package.

  • @Datou1977 · A month ago

    It ships with ComfyUI itself. Upgrade ComfyUI to the latest version and it will be there; give that a try.

  • @sasiburi5091 · A month ago

    Teacher, I installed ollama and can chat in the cmd window, but running ComfyUI throws an error. Do I need to rename something? I also can't find where it stored the downloaded model. Error occurred when executing OllamaVision: model 'llava:7b-v1.6-mistral-fp16' not found, try pulling it first

  • @Datou1977 · A month ago

    You probably haven't downloaded that model. In cmd, enter ollama run llava:7b-v1.6-mistral-fp16; once the download succeeds, call it from ComfyUI.
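
    The pull step can also be done programmatically; this is a minimal sketch with the official ollama Python client, equivalent to the cmd one-liner above.

        import ollama

        # Download the exact tag so the ComfyUI node can find it by name,
        # then do a one-shot smoke test that the model loads and answers.
        ollama.pull("llava:7b-v1.6-mistral-fp16")
        reply = ollama.generate(model="llava:7b-v1.6-mistral-fp16", prompt="hi")
        print(reply["response"])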

  • @xiaoxizhang2055 · A month ago

    Sadly I don't have much VRAM. I did install Ollama a while ago and it runs, but running it alongside ComfyUI seems to be a luxury. How was the animated effect on that Jacky Cheung image done?

  • @Datou1977 · A month ago

    For the animated effect, see this video: kzread.info/dash/bejne/kZiaxJqqg8K_nrg.html

  • @Datou1977 · A month ago

    Swap the ollama node for the gemini node; then no local compute is used and the model is even more capable. Failing that, drop the fully automatic LLM prompt-generation section and use the wd14 tagger node, or type the prompt in by hand.

  • @xiaoxizhang2055 · A month ago

    @@Datou1977 Got it, I'll go try that right now. Haha.

  • @allensen-mv9uk · A month ago

    Hi Datou, at runtime I get raise Exception("IPAdapter model not found."), yet I clearly created an ipadapter folder under models and put the ip-adapter-plus_sd15.safetensors model in it. What's going on? 😂

  • @Datou1977 · A month ago

    You need the XL version of the IPAdapter model; in theory it downloads automatically. huggingface.co/h94/IP-Adapter/tree/main/sdxl_models
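
    If the automatic download doesn't happen, here is a hedged sketch for fetching one of the SDXL IP-Adapter files by hand with huggingface_hub; the filename is assumed to be one of the entries in the sdxl_models folder linked above, and the local_dir must match your ComfyUI install.

        from huggingface_hub import hf_hub_download

        # Downloads under ComfyUI/models/ipadapter/sdxl_models/; move the
        # file or point your loader there. Pick whichever sdxl_models file
        # your workflow's IPAdapter loader expects.
        hf_hub_download(
            repo_id="h94/IP-Adapter",
            filename="sdxl_models/ip-adapter-plus_sdxl_vit-h.safetensors",
            local_dir="ComfyUI/models/ipadapter",
        )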

  • @anjiesong · 29 days ago

    Error occurred when executing OllamaVision: llama runner process has terminated: exit status 0xc0000005. Ollama is installed and running, and the models are all downloaded. Why does it still fail? Please help; I'm stuck right here.

  • @Datou1977 · 29 days ago

    Keep an eye on VRAM usage, or first run the ollama model from cmd and see whether it can chat normally.
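
    A quick way to run that check from Python rather than cmd; if this one-shot prompt also crashes, the problem is Ollama or VRAM rather than the ComfyUI node (0xc0000005 is a Windows access violation, often memory or driver related).

        import ollama

        # One-shot generation outside ComfyUI to isolate the failure.
        try:
            reply = ollama.generate(model="llava:7b-v1.6-mistral-fp16",
                                    prompt="Say hello.")
            print(reply["response"])
        except Exception as err:
            print("Ollama itself is failing:", err)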

  • @user-qx8ve6vv4f · A month ago

    Teacher, do the llama models downloaded via cmd all go to the C drive by default? Is there a way to find them and move them to another drive while keeping them recognized? My C drive is flagged red and running out of space.

  • @Datou1977 · A month ago

    Ollama seems to install to the C drive by default, and it has no settings UI at all, so there's nothing to change there. I keep my whole disk unpartitioned as a single C drive. If yours is partitioned, you can resize the C partition with partitioning software (somewhat risky, so be careful).

  • @harveyyao4121 · A month ago

    It can be changed, via an environment variable. There are tutorials.

  • @Datou1977 · A month ago

    Here: x.com/xulzy_6/status/1787684486815396073
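
    The change comes down to one environment variable: OLLAMA_MODELS, which Ollama reads for its model directory. A minimal sketch, assuming you move the existing model files to the new folder before starting the server:

        import os
        import subprocess

        # Launch the Ollama server with its model store on another drive.
        # D:\ollama\models is a hypothetical target folder.
        env = os.environ.copy()
        env["OLLAMA_MODELS"] = r"D:\ollama\models"
        subprocess.Popen(["ollama", "serve"], env=env)

    Setting the variable once system-wide in Windows' environment-variable settings achieves the same thing without a wrapper script.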

  • @ChoRay-8204 · 29 days ago

    Three days on this now; please help, teacher. Same CLIP error, on the IPAdapter style & Composition SDXL node:

    Error occurred when executing IPAdapterStyleComposition: Missing CLIPVision model.
    File "/Volumes/Lei’s2T/ComfyUI/execution.py", line 151, in recursive_execute
      output_data, output_ui = get_output_data(obj, input_data_all)
    File "/Volumes/Lei’s2T/ComfyUI/execution.py", line 81, in get_output_data
      return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "/Volumes/Lei’s2T/ComfyUI/execution.py", line 74, in map_node_over_list
      results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "/Volumes/Lei’s2T/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 661, in apply_ipadapter
      raise Exception("Missing CLIPVision model.")

  • @ChoRay-8204 · 29 days ago

    I also copied CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors and CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors into CLIP_Vision.

  • @Datou1977 · 27 days ago

    @@ChoRay-8204 First, download CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors and put it under ComfyUI/models/clip_vision. Then quit and restart ComfyUI and run the whole process again. It should work.
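
    A tiny placement check, assuming a default ComfyUI folder layout; a wrong folder or filename is the usual cause of this error.

        from pathlib import Path

        # Verify the CLIPVision file sits where the IPAdapter loader looks.
        # Adjust the ComfyUI root to your install location.
        model = (Path("ComfyUI/models/clip_vision")
                 / "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors")
        print("found" if model.is_file() else "missing:", model)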

  • @binary6699 · A month ago

    For the ollama node and its model, if I don't have a local deployment, can it call a hosted service on some platform instead?

  • @Datou1977 · A month ago

    The gemini node is an excellent substitute.

  • @binary6699 · A month ago

    @@Datou1977 How exactly do I set that up?

  • @Datou1977 · A month ago

    @@binary6699 See the instructions here: github.com/ZHO-ZHO-ZHO/ComfyUI-Gemini

  • @justtomatoo · A month ago

    Teacher, when I run it I get Error occurred when executing OllamaVision. How should I fix this? Locally, ollama appears to run fine.

  • @Datou1977 · A month ago

    Try switching to another vision model: use llava-phi3:3.8b-mini-fp16.

  • @justtomatoo · A month ago

    @@Datou1977 Teacher, I pulled and ran the ollama models directly, so in the vision model field I only entered llava/llama3. Does that matter?

  • @Datou1977 · A month ago

    @@justtomatoo It does. The default tag points to a quantized, smaller model, so its capability is reduced.

  • @AIDesigner5323 · A month ago

    Where can I download the 3.0 prompt?

  • @Datou1977 · A month ago

    The 3.0 prompt is in the most recent comic-to-realistic workflow. 3.0 is still a bit more stable.

  • @pingxuewen5020 · A month ago

    Can ComfyUI run this with 16 GB of VRAM?

  • @Datou1977 · A month ago

    If you swap the ollama node for the gemini node so the language model doesn't run locally, generating the clay images takes only 14 GB of VRAM. If you use a local model, download a smaller one.

  • @user-yp8cj6pe7s · A month ago

    Datou, what are your machine's specs?

  • @Datou1977 · A month ago

    x.com/datou/status/1643088563855302656?s=46&t=q-CkUEteWAvmSTc8lwBgUA

  • @user-tr3zl5ci8e · A month ago

    Teacher, how do I fix the error that ClipVision can't be found?

  • @Datou1977 · A month ago

    Which node raised the error?

  • @Datou1977 · A month ago

    Quoting Yi Lin, about 7 hours ago: First, you should download CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors, and then put it under ComfyUI/models/clip_vision. After that, quit and restart ComfyUI to run the whole process again. It should work.

  • @user-tr3zl5ci8e · A month ago

    @@Datou1977 Error occurred when executing IPAdapterUnifiedLoader: ClipVision model not found.
    File "E:\AI\ComfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute
      output_data, output_ui = get_output_data(obj, input_data_all)
    File "E:\AI\ComfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data
      return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "E:\AI\ComfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list
      results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "E:\AI\ComfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 454, in load_models
      raise Exception("ClipVision model not found.")

  • @user-tr3zl5ci8e · A month ago

    @@Datou1977 Exception during processing!!! ClipVision model not found. (Same traceback as above, ending at IPAdapterPlus.py, line 454, in load_models: raise Exception("ClipVision model not found.").)

  • @Datou1977 · A month ago

    @@user-tr3zl5ci8e Is CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors in the right place? Did you restart ComfyUI?

  • @fangazio2011 · 27 days ago

    The one whose name you didn't know looks like Jessica Chastain.

  • @Datou1977 · 25 days ago

    Right, Jessica Chastain.