29 Comments

Just ran the t2v workflow on an RTX 2060 with 6 GB and it worked.


But the question is: how long did it take?


Deleted original reply because I read your question wrong.

438.7 seconds with the default settings (33 frames?)


Same here. Finally something that's actually usable xD


Great work! But where can I find the 'SaveWEBM' node used in the workflows?


use Video Combine instead


CUDA error: operation not supported

CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.

For debugging consider passing CUDA_LAUNCH_BLOCKING=1

Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
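Not an official fix, but a minimal sketch of what that message is suggesting: with `CUDA_LAUNCH_BLOCKING=1`, kernel launches become synchronous, so the traceback points at the op that actually failed instead of some later API call. The variable has to be in the environment before CUDA is initialized; the snippet below is a standalone illustration, not ComfyUI code.

import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # set before the first CUDA call

import torch

x = torch.randn(4, 4, device="cuda")
y = x @ x  # with blocking launches, a kernel failure surfaces here with a usable stack trace
print(y)

If you run ComfyUI from source, exporting the variable in the shell before launching `python main.py` achieves the same thing.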


How to solve this problem?


I signed up just to celebrate official support for Wan!

I hope more and more models get native support, and that it keeps getting better~


Hi, I'm encountering the following error during inference:

mat1 and mat2 shapes cannot be multiplied (77x768 and 4096x5120)

Could you please help me understand what's causing this and how I can resolve it?

I believe it may be related to a mismatch between the text encoder output and the model's expected input shape, but I'm not sure how to fix it.

Thank you in advance for your support!
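Your reading seems plausible: 77x768 is the classic CLIP-style text-encoder output shape (77 tokens, 768 dims), while the layer it hits expects 4096-dim inputs. A tiny reproduction of the same error, purely for illustration (the tensor names are made up, not taken from the workflow):

import torch

text_emb = torch.randn(77, 768)   # hypothetical CLIP-style encoder output: 77 tokens x 768 dims
proj = torch.randn(4096, 5120)    # hypothetical layer weight expecting 4096-dim embeddings
out = text_emb @ proj             # RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x768 and 4096x5120)

The shapes alone reproduce the error, which is consistent with the text encoder producing embeddings the model doesn't expect.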


I'm using it now. Other people seem to be getting good results, but mine are mediocre: no matter what I change, I can only generate one-second videos. How is everyone generating videos longer than one second?


This is really impressive news for the open-source video model space. Great job! I'm going to deploy it on my machine right away.


It failed to run at the KSampler node. I tried setting the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1`, but it still did not work.

---

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/extra_samplers/uni_pc.py", line 653, in multistep_uni_pc_bh_update

rhos_c = torch.linalg.solve(R, b)

The operator 'aten::_linalg_solve_ex.result' is not currently implemented for the MPS device. If you want this op to be considered for addition please comment on https://github.com/pytorch/pytorch/issues/141287 and mention use-case, that resulted in missing op as well as commit hash 2236df1770800ffea5697b11b0bb0d910b2e59e1. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.

----

Device: MacBook Pro M4 Max, 64 GB

ComfyUI desktop: 0.4.26

Workflow: example_workflows_Wan2.1_image_to_video_wan_480p
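For what it's worth, a minimal sketch of the fallback that error message describes: `PYTORCH_ENABLE_MPS_FALLBACK=1` generally needs to be in the process environment before torch is imported, which may be why setting it after the desktop app is already running has no effect. This is a standalone illustration of the fallback for the missing `aten::_linalg_solve_ex` op, not a ComfyUI patch.

import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"  # set before importing torch

import torch

A = torch.randn(3, 3, device="mps")
b = torch.randn(3, device="mps")
x = torch.linalg.solve(A, b)  # without the fallback this op raises NotImplementedError on MPS
print(x)

If the desktop app doesn't pick the variable up, launching a source install from a terminal with the variable exported is the usual workaround (slower, since the op runs on CPU).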


CLIPLoader

Error(s) in loading state_dict for T5:

size mismatch for shared.weight: copying a param with shape torch.Size([256384, 4096]) from checkpoint, the shape in current model is torch.Size([32128, 4096]).

What does this error mean?


Getting very good img2video results. It does a pretty good job of following prompts. As for that Kling subscription - ughh.


Did you update Comfy?


Running text_to_video_wan.json, I get this error:

Prompt outputs failed validation

CLIPLoader:

- Value not in list: type: 'wan' not in ['stable_diffusion', 'stable_cascade', 'sd3', 'stable_audio', 'mochi', 'ltxv', 'pixart', 'cosmos', 'lumina2']


I can't get the node WanImageToVideo to work. Where can I install this custom node?


Prompt outputs failed validation

CLIPLoader:

- Value not in list: type: 'wan' not in ['stable_diffusion', 'stable_cascade', 'sd3', 'stable_audio', 'mochi', 'ltxv', 'pixart', 'cosmos', 'lumina2']

Just checked, and ComfyUI is on the latest version.


Can we have an fp8 version, please?


Can we have the fp8 models, please?
