16 Comments

Awesome job on this! Works flawlessly, and I love the clean, easy-to-understand documentation. 🙌🏾

Naisu👌

Awesome job! Is it possible to show in the example workflow how to add LoRA(s)? Thanks!

The problem with this workflow is that you only find out at the VAE Decode node, after wasting a huge amount of time, whether the process fails due to insufficient VRAM or succeeds. With an 8GB card you'll hit out-of-memory errors far too often until you finally calibrate what settings your card can handle. The sample workflow will (supposedly) run on 12GB cards; with less VRAM, forget about that frame count at that resolution. Don't even bother trying this with less than 24 or 40GB of VRAM.

good

Really nice. But isn't there a licensing issue in Europe and Great Britain? The current license excludes those countries.

It’s great. Works on Mac. I found that the CLIP-L and Llama 3 models need to be put in the clip folder to be recognized. Tried it on an M4 Max with 64GB RAM at a frame length of 49; it took 40 minutes to generate the video. Anything I could do to shorten the time?

Is this model usable in ComfyUI on an NVIDIA RTX 3060 Ti with 8GB VRAM?

The days when 8GB was enough for video are unfortunately over. Even with AnimateDiff I ran into OOM errors too often. The new models simply require much more VRAM.

The Hunyuan team in China is using 40GB VRAM cards... and that's barely enough for 3 or 4 seconds of video. Don't even ask the price of those cards.

I don't think so, not out of the box. I have a slightly bigger 3060 with 12GB and am researching what tweaks I need. I'd take this to Discord to work through with others.

Send this suggestion to Discord: how about a node to check if the workflow will run out of VRAM in the VAE Decode node?

That's my card, and no, it's not. Only after wasting time on all the previous nodes does VAE Decode discover that you don't have enough VRAM. I would pay for a preflight node that checks whether you have enough VRAM before the current workflow settings start wasting your time.
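[Editor's note: for illustration, here is a minimal sketch of what such a preflight node could look like, assuming ComfyUI's usual custom-node conventions and an NVIDIA/CUDA card. The node name and the min_free_gb parameter are hypothetical; the check simply compares currently free VRAM against a manual estimate you supply, it cannot predict exactly how much memory VAE Decode will need.]

```python
import torch

class VRAMPreflightCheck:
    """Hypothetical pass-through node: wire it between the sampler and VAE Decode.
    It fails fast if free VRAM is below a user-supplied estimate (min_free_gb)."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "samples": ("LATENT",),
                "min_free_gb": ("FLOAT", {"default": 10.0, "min": 0.0, "max": 128.0, "step": 0.5}),
            }
        }

    RETURN_TYPES = ("LATENT",)
    FUNCTION = "check"
    CATEGORY = "utils"

    def check(self, samples, min_free_gb):
        if torch.cuda.is_available():
            # mem_get_info() returns (free_bytes, total_bytes) for the current device.
            free_bytes, _total_bytes = torch.cuda.mem_get_info()
            free_gb = free_bytes / 1024**3
            if free_gb < min_free_gb:
                # Abort here instead of letting VAE Decode fail after a long wait.
                raise RuntimeError(
                    f"VRAM preflight: only {free_gb:.1f} GB free, "
                    f"but {min_free_gb:.1f} GB was requested."
                )
        # Pass the latent through unchanged so the workflow continues normally.
        return (samples,)

# Standard registration so ComfyUI picks the node up from a custom_nodes folder.
NODE_CLASS_MAPPINGS = {"VRAMPreflightCheck": VRAMPreflightCheck}
NODE_DISPLAY_NAME_MAPPINGS = {"VRAMPreflightCheck": "VRAM Preflight Check"}
```

Dropped into a custom_nodes folder and wired in just before VAE Decode, a node like this would stop the run immediately rather than after the decode attempt, though the threshold itself is still a guess you have to calibrate for your card.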

Great work!

Looks great! Does this workflow also support low VRAM?

I won't try this workflow ever again until I get a 24 or 40GB VRAM card, so no.
