We’re excited to announce that HunyuanVideo, a groundbreaking 13-billion-parameter open-source video foundation model, is now natively supported in ComfyUI!
Awesome job on this! Works flawlessly, and I love this clean, easy-to-understand documentation. 🙌🏾
Naisu👌
Awesome job! Is it possible to show in the example workflow how to add LoRA(s)? Thanks!
The problem with this workflow is that you only find out at the VAE Decode node, after wasting a huge amount of time, whether the run succeeds or fails due to insufficient VRAM. With an 8GB card you'll hit out-of-memory errors constantly until you finally calibrate what settings your card can handle. The sample workflow will (supposedly) run on 12GB cards; with less VRAM, forget about getting that frame count at that resolution. Don't even bother trying this with less than 24 or 40GB of VRAM.
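To put rough numbers on why the decode step is where it blows up, here is a back-of-the-envelope sketch in Python. The decoder channel width and live-tensor count are guesses, not anything published, so treat the result as an order-of-magnitude estimate only:

```python
# Hedged estimate of peak VAE-decode memory: the widest decoder activations
# sit at full output resolution, so they dominate everything else.
def decode_vram_estimate_gb(width, height, frames,
                            decoder_channels=128,  # guess, not from the model card
                            live_tensors=4,        # guess: activations alive at once
                            bytes_per_elem=2):     # fp16
    per_map = decoder_channels * frames * height * width * bytes_per_elem
    return live_tensors * per_map / 1024**3

# e.g. 848x480 at 73 frames (roughly the sample workflow's settings):
print(f"~{decode_vram_estimate_gb(848, 480, 73):.0f} GB")  # ~28 GB
```

Which lines up with the 24-40GB reality people are reporting. ComfyUI's VAE Decode (Tiled) node cuts the peak by decoding in chunks, at some cost in speed and possible seams.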
good
Really nice. But isn't there a licensing issue in Europe and Great Britain? The current license excludes those countries.
It’s great. Works on Mac. I found that the clip_l and llama 3 text encoders need to be put in the clip folder to be recognized. Tried it on an M4 Max with 64GB RAM at 49 frames; it took 40 minutes to generate the video. Anything I could do to shorten the time?
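For anyone hitting the same recognition problem, a quick sanity check for the layout that worked for me. The exact filenames are assumptions based on the usual ComfyUI release names, so adjust them to whatever you downloaded:

```python
# Check that the text encoders sit where ComfyUI will look for them.
# Filenames below are assumptions; swap in your actual files.
from pathlib import Path

comfy = Path("ComfyUI/models")
expected = [
    comfy / "clip" / "clip_l.safetensors",
    comfy / "clip" / "llava_llama3_fp8_scaled.safetensors",
]
for p in expected:
    print(f"{'OK      ' if p.exists() else 'MISSING '}{p}")
```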
Is the ComfyUI model suitable for an NVIDIA RTX 3060 Ti with 8GB VRAM?
The days when 8GB was enough for video are unfortunately over. Even with AnimateDiff I ran into OOM errors too often. The new models simply require much more VRAM.
The Hunyuan folks in China are using 40GB VRAM cards... and that's barely enough for 3 or 4 seconds of video. Don't even ask what those cards cost.
I don't think so out of the box. I have the slightly bigger 3060 w/12GB and am researching what tweaks I need. I'd take this to Discord to work through with others.
Send this suggestion to Discord: how about a node that checks whether the workflow will run out of VRAM at the VAE Decode node?
That's my card, and no, it's not. Only after wasting time on all the previous nodes does VAE Decode realize that you don't have enough VRAM. I would pay for a preflight node that checks whether you have enough VRAM before the current workflow settings start wasting your time.
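In the meantime, something like this might work as a fail-fast stopgap. It's a minimal sketch following ComfyUI's usual custom-node conventions; the VRAMPreflight name and the manual threshold are made up, and torch.cuda.mem_get_info is the only real API call in here:

```python
# Hypothetical "preflight" custom node. It passes the latent through
# untouched, but aborts early if free VRAM looks too small for the
# decode that comes later in the graph.
import torch

class VRAMPreflight:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "samples": ("LATENT",),
            "required_gb": ("FLOAT", {"default": 12.0, "min": 0.0, "max": 80.0}),
        }}

    RETURN_TYPES = ("LATENT",)
    FUNCTION = "check"
    CATEGORY = "utils"

    def check(self, samples, required_gb):
        # mem_get_info returns (free, total) in bytes for the current device.
        free_bytes, _total = torch.cuda.mem_get_info()
        free_gb = free_bytes / 1024**3
        if free_gb < required_gb:
            raise RuntimeError(
                f"Preflight: only {free_gb:.1f} GB VRAM free, "
                f"need ~{required_gb:.1f} GB; aborting before wasting a run.")
        return (samples,)

NODE_CLASS_MAPPINGS = {"VRAMPreflight": VRAMPreflight}
```

Wired in ahead of the sampler it would at least abort before any time is wasted, though you'd still have to guess the right threshold for each resolution and frame count.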
Great work!
Looks great! Does this workflow also support low VRAM?
I won't try this workflow again until I get a 24GB or 40GB VRAM card, so no.