Just ran the t2i workflow on an RTX2060 with 6GB and it worked.
but the question is how long did it take?
Deleted original reply because I read your question wrong.
438.7 seconds with the default settings (33 frames?)
Same here, finally something that's actually usable xD
Great work! But where can I find the 'SaveWEBM' node used in the workflows?
use Video Combine instead
CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
How do I solve this problem?
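One way to get a trustworthy stack trace, per the advice in that message (a minimal sketch; the variable has to be set before torch initializes CUDA, so launching with `CUDA_LAUNCH_BLOCKING=1 python main.py` from a shell is equivalent):

```python
import os

# Force CUDA kernels to launch synchronously, so the error is reported at the
# call that actually failed instead of at some later API call.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch  # import only after setting the variable, before CUDA is initialized
```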
I registered just to celebrate official support for wan!
I hope more and more models get native support, and that it keeps getting better~
Hi, I'm encountering the following error during inference:
mat1 and mat2 shapes cannot be multiplied (77x768 and 4096x5120)
Could you please help me understand what's causing this and how I can resolve it?
I believe it may be related to a mismatch between the text encoder output and the model's expected input shape, but I'm not sure how to fix it.
Thank you in advance for your support!
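In case it helps anyone debugging the same thing: 77x768 is the classic CLIP ViT-L text-encoder output shape, while the 4096 side appears to match the umt5-xxl embeddings the Wan model expects, which supports the mismatch hypothesis above. A hypothetical repro of the same failure mode (the `Linear` is just a stand-in for the model's input projection, not actual Wan code):

```python
import torch

text_emb = torch.randn(77, 768)        # output of a CLIP ViT-L style encoder
in_proj = torch.nn.Linear(4096, 5120)  # stand-in layer expecting 4096-dim inputs

# RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x768 and 4096x5120)
in_proj(text_emb)
```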
I'm using it, and others seem to get good results, but mine come out mediocre: no matter what I change, I can only generate one-second videos. How is everyone generating longer videos?
This is really impressive news for the open-source video model space, kudos! I'm going to deploy it on my machine right away.
Failed to run on the KSampler node.
Tried setting the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1`, but it still did not work.
---
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/extra_samplers/uni_pc.py", line 653, in multistep_uni_pc_bh_update
rhos_c = torch.linalg.solve(R, b)
The operator 'aten::_linalg_solve_ex.result' is not currently implemented for the MPS device. If you want this op to be considered for addition please comment on https://github.com/pytorch/pytorch/issues/141287 and mention use-case, that resulted in missing op as well as commit hash 2236df1770800ffea5697b11b0bb0d910b2e59e1. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
---
Device: MacBook Pro M4max 64G
ComfyUI desktop: 0.4.26
Workflow: example_workflows_Wan2.1_image_to_video_wan_480p
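For anyone else hitting this on Apple Silicon: the fallback variable has to be set before torch is imported, which is likely why exporting it into an already-running app has no effect; with the desktop app it needs to be in the environment ComfyUI is launched from. A minimal sketch of what the fallback does (assuming a script where you control the entry point):

```python
import os

# Must be set before `import torch`, or the fallback is never registered.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

# Ops missing on MPS (e.g. aten::_linalg_solve_ex) now run on the CPU
# instead of raising NotImplementedError; slower, but the sampler completes.
A = torch.randn(4, 4, device="mps")
b = torch.randn(4, device="mps")
x = torch.linalg.solve(A, b)  # CPU fallback, result copied back to MPS
```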
CLIPLoader
Error(s) in loading state_dict for T5:
size mismatch for shared.weight: copying a param with shape torch.Size([256384, 4096]) from checkpoint, the shape in current model is torch.Size([32128, 4096]).
What's this error?
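That looks like a vocabulary-size mismatch: 256384 appears to be the umt5-xxl token-embedding table stored in the checkpoint, while 32128 is the standard T5 vocab the loader instantiated, which points at a mismatch between the checkpoint and the model the loader built. A hypothetical repro of the same failure mode (a plain `Embedding` standing in for T5's `shared` layer):

```python
import torch

model_emb = torch.nn.Embedding(32128, 4096)   # shape the instantiated model expects
ckpt = {"weight": torch.zeros(256384, 4096)}  # shape stored in the checkpoint

# RuntimeError: ... size mismatch for weight: copying a param with shape
# torch.Size([256384, 4096]) ... current model is torch.Size([32128, 4096])
model_emb.load_state_dict(ckpt)
```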
Getting very good img2video results; it does pretty well at following prompts. About the Kling subscription - ughh
Did you update Comfy?
Running text_to_video_wan.json I get this error:
Prompt outputs failed validation
CLIPLoader:
- Value not in list: type: 'wan' not in ['stable_diffusion', 'stable_cascade', 'sd3', 'stable_audio', 'mochi', 'ltxv', 'pixart', 'cosmos', 'lumina2']
I can't get the WanImageToVideo node to work. How can I install this custom node?
Prompt outputs failed validation
CLIPLoader:
- Value not in list: type: 'wan' not in ['stable_diffusion', 'stable_cascade', 'sd3', 'stable_audio', 'mochi', 'ltxv', 'pixart', 'cosmos', 'lumina2']
Just checked, and ComfyUI is on the latest version.
Can we have fp8 version please?
Can we have the fp8 models please?