19 Comments
Nate from Black Mixture's avatar

Awesome job on this! Works flawlessly and I love this clean and easy to understand documentation. 🙌🏾

KotatsuAi's avatar

The problem with this workflow is that only after wasting a huge amount of time does the VAE Decode node finally reveal whether the process succeeds or fails due to insufficient VRAM. With an 8 GB card you'll hit out-of-memory errors constantly until you finally calibrate which settings your card can handle. The sample workflow will (supposedly) run on 12 GB cards; with less VRAM, forget about getting that frame count at that resolution. Don't even bother trying this with less than 24 or 40 GB of VRAM.

bee's avatar

Worked on the first try!

Thank you for such a simple tutorial

yang's avatar

The MLLM text encoder: how do I add and use it?

JP's avatar

Awesome job! Is it possible to show in the example workflow how to add LoRA(s)? Thanks!

Arunderan's avatar

Really nice. But isn't there a licensing issue in Europe and Great Britain? The current license excludes those countries.

Lucas's avatar

It’s great. Works on Mac. I found that the CLIP L and Llama 3 models need to be put in the clip folder to be recognized. Tried it on an M4 Max with 64 GB RAM at 49 frames; it took 40 minutes to generate the video. Anything I could do to shorten the time?

Aykut's avatar

Is the ComfyUI model suitable for an NVIDIA RTX 3060 Ti with 8 GB VRAM?

Arunderan's avatar

The days when 8 GB was enough for video are unfortunately over. Even with AnimateDiff I ran into OOM errors too often. The new models simply require much more VRAM.

KotatsuAi's avatar

The Hunyuan team in China is using 40 GB VRAM cards... and even that is barely enough for 3 or 4 seconds of video. Don't even ask the price of those cards.

Hal Rottenberg's avatar

I don't think so, out of the box. I have a slightly bigger 3060 with 12 GB and am researching what tweaks I need. I'd take this to Discord to work through with others.

KotatsuAi's avatar

Send this suggestion to Discord: how about a node that checks whether the workflow will run out of VRAM at the VAE Decode node?

KotatsuAi's avatar

That's my card, and no, it's not. Only after wasting time on all the previous nodes does VAE Decode realize you don't have enough VRAM. I would pay for a preflight node that checks whether you have enough VRAM before the current workflow settings start wasting your time.
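The preflight idea could be roughed out as a small estimator. This is only a sketch under stated assumptions: the decoded output is a frames × channels × height × width fp16 tensor, and decoder activations are assumed (hypothetically) to cost a fixed multiple of the output size. Real decoders also hold the latent, model weights, and tiling buffers, so the multiplier would need tuning per card; the function names here are made up for illustration, not part of any ComfyUI API.

```python
def estimate_vae_decode_bytes(frames, height, width, channels=3,
                              dtype_bytes=2, overhead=3.0):
    """Rough upper bound on VRAM needed to VAE-decode a video.

    output tensor: frames * channels * height * width elements at
    dtype_bytes each; intermediate activations are assumed to cost
    `overhead` times the output size (a tunable guess, not measured).
    """
    output_bytes = frames * channels * height * width * dtype_bytes
    return int(output_bytes * (1 + overhead))


def preflight_ok(frames, height, width, free_vram_bytes):
    """True if the decode is estimated to fit in the free VRAM."""
    return estimate_vae_decode_bytes(frames, height, width) <= free_vram_bytes


# Example: 49 frames at 1280x720 against 8 GB of free VRAM.
need = estimate_vae_decode_bytes(49, 720, 1280)
print(need, preflight_ok(49, 720, 1280, 8 * 1024**3))
```

On a CUDA card the `free_vram_bytes` argument could come from `torch.cuda.mem_get_info()`; the point is only that a cheap arithmetic check up front beats discovering an OOM after the whole sampling pass has run.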

Noam Naumovsky's avatar

Looks great! Does this workflow also support low VRAM?

KotatsuAi's avatar

I won't try this workflow again until I get a 24 or 40 GB VRAM card, so no.