19 Comments

Nate from Black Mixture:

Awesome job on this! Works flawlessly, and I love the clean, easy-to-understand documentation. 🙌🏾

KotatsuAi:

The problem with this workflow is that you only find out whether it succeeds or fails from insufficient VRAM after wasting a huge amount of time, when the VAE Decode node finally runs. With an 8GB card you'll hit out-of-memory errors over and over until you calibrate what settings your card can handle. The sample workflow will (supposedly) run on 12GB cards; with less VRAM, forget about that frame count at that resolution. Don't even bother trying this with less than 24 or 40GB of VRAM.

asdf:

Naisu👌

bee:

Worked on first try!

Thank you for such a simple tutorial.

yang:

How do I add and use the MLLM text encoder?

JP:

Awesome job! Would it be possible to show in the example workflow how to add LoRA(s)? Thanks!

qilin:

good

Arunderan:

Really nice. But isn't there a licensing issue in Europe and Great Britain? The current license excludes those countries.

Lucas:

It's great. Works on Mac. I found that the CLIP-L and Llama 3 text encoders need to be put in the clip folder to be recognized. Tried it on an M4 Max with 64GB RAM at 49 frames; it took 40 minutes to generate the video. Anything I could do to shorten the time?
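
A minimal sketch of the file move Lucas describes, assuming a default ComfyUI layout; the source folder and the exact file names below are placeholders for wherever your CLIP-L and Llama 3 downloads actually sit:

```python
# Sketch only: the loader nodes look for text encoders in ComfyUI/models/clip,
# so move the downloaded files there. All paths and names here are assumptions;
# adjust them to your own install and downloads.
from pathlib import Path
import shutil

models = Path("ComfyUI/models")
downloads = Path.home() / "Downloads"  # placeholder source folder
encoders = ["clip_l.safetensors", "llama3_text_encoder.safetensors"]  # placeholder names

for name in encoders:
    src = downloads / name
    dst = models / "clip" / name
    if src.exists() and not dst.exists():
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(src), str(dst))
        print(f"moved {name} -> {dst}")
```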

Aykut:

Is the ComfyUI workflow suitable for an NVIDIA RTX 3060 Ti with 8GB VRAM?

Arunderan:

The days when 8GB was enough for video are unfortunately over. Even with AnimateDiff I ran into OOM errors too often. The new models simply require much more VRAM.

KotatsuAi:

The Hunyuan team in China is using 40GB VRAM cards... and that's barely enough for 3 or 4 seconds of video. Don't even ask the price of those cards.

Hal Rottenberg:

I don't think so, out of the box. I have a slightly bigger 3060 with 12GB and am researching what tweaks I need. I'd take this to Discord to work through with others.

KotatsuAi:

Send this suggestion to Discord: how about a node that checks whether the workflow will run out of VRAM at the VAE Decode node?

KotatsuAi:

That's my card, and no, it isn't. Only after wasting time on all the previous nodes does VAE Decode discover that you don't have enough VRAM. I would pay for a preflight node that checks whether you have enough VRAM before the current workflow settings start wasting your time.
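
A rough sketch of what such a preflight node might look like, following the usual ComfyUI custom-node layout. The class name, the required_gb input, and the latent pass-through are all assumptions; a real node would estimate the requirement from resolution, frame count, and model rather than asking the user for a threshold:

```python
# Hypothetical "VRAM preflight" node: fails fast if free VRAM is below a
# user-supplied threshold, instead of letting VAE Decode discover it later.
# Assumes an NVIDIA GPU and PyTorch with CUDA available.
import torch

class VRAMPreflightCheck:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "latent": ("LATENT",),
                "required_gb": ("FLOAT", {"default": 12.0, "min": 1.0, "max": 80.0}),
            }
        }

    RETURN_TYPES = ("LATENT",)
    FUNCTION = "check"
    CATEGORY = "utils"

    def check(self, latent, required_gb):
        # mem_get_info returns (free_bytes, total_bytes) for the current device.
        free_bytes, _total_bytes = torch.cuda.mem_get_info()
        free_gb = free_bytes / 1024**3
        if free_gb < required_gb:
            # Abort before the sampler and VAE Decode burn any more time.
            raise RuntimeError(
                f"Preflight: only {free_gb:.1f}GB VRAM free, "
                f"but {required_gb:.1f}GB requested."
            )
        # Pass the latent through unchanged so the node can sit inline
        # anywhere in the graph.
        return (latent,)

NODE_CLASS_MAPPINGS = {"VRAMPreflightCheck": VRAMPreflightCheck}
```

Placed right after the latent source at the head of the graph, it would abort before any sampling time is spent, though free VRAM at graph start can differ from what's available once the models are loaded.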

luokai:

Great work!

Noam Naumovsky:

Looks great! Does this workflow also support low VRAM?

KotatsuAi:

I won't try this workflow ever again until I get a 24 or 40GB VRAM card, so no.
