We'd like to share the new native support for Wan2.2 FLF2V. With the latest ComfyUI updates, you can now generate a video that transitions from a given first frame to a given last frame using Wan 2.2 in ComfyUI.
Yeah, as a software engineer/programmer myself, when I saw "FLF" it reminded me of FIFO, LIFO, and the other acronyms (First In First Out, Last In First Out, etc.), and maybe that's why I worked out that it might mean "First Last Frame" a bit faster xD
Why is there no definitive guide on prompt structure? And when are you going to fix the broken Wan 2.2 14B Animate template? All the nodes should be available, but in one instance, when I had my custom model directories set up in the extra models directory YAML, it broke the Manager. So when I went to download custom nodes, nothing was there. It's a bit infuriating to redo that every time there's a new update. You should make it so you can just configure it in the settings inside the program.
And I'm assuming Wan 2.2 Animate is better at character animation than the regular old 2.2 14B? Too many questions, not enough wiki.
Also, when can we see more LoRAs for Wan 2.2 and ACE-Step Audio?
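For context, the file mentioned above is `extra_model_paths.yaml` in the ComfyUI root. A typical custom entry looks roughly like this; the section name and every path here are illustrative, so adjust them to your own setup:

```yaml
# extra_model_paths.yaml - illustrative sketch, not an official template
my_models:                      # section name is your choice
    base_path: D:/AI/models     # root that the entries below are relative to
    checkpoints: checkpoints
    loras: loras
    clip_vision: clip_vision
```

Indentation matters in YAML, and a malformed file is one plausible way the Manager's node list can end up empty, as described above.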
And everyone is just supposed to know what "FLF" is..? Are you turning fluff into videos?
Might be "First Last Frame" or something judging by the context lol
I think "in-between video" would sound better, but since not everyone is an animation dev, people understand it better as FLF.
ComfyUI is my new Fruity Loops Studio
Any inpainting workflow?
Wan2.2 is here! Turn text or images into 4K videos with ease. Perfect for trailers, ads, music videos, and edits. Fast, smooth, and built for creators!
https://www.wan-ai.co/
Every Shot, Wan Take. No workflow required, one-click quick generation.
File "G:\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_wan.py", line 163, in encode
if clip_vision_output is not None:
^^^^^^^^^^^^^^^^^^
UnboundLocalError: cannot access local variable 'clip_vision_output' where it is not associated with a value
just updated comfy, same issue, re-downloaded the model referenced in the workflow as well...
https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders
Edit: Just restarted comfy and it started working...
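For anyone curious why that traceback happens rather than a clean error message: it's the standard Python gotcha where a name assigned only inside one branch is local to the whole function, so reading it when that branch was skipped raises UnboundLocalError. A minimal sketch of the pattern (not the actual nodes_wan.py code):

```python
# Buggy pattern: the name is bound only on one path.
def encode_buggy(clip_vision=None):
    if clip_vision is not None:
        clip_vision_output = clip_vision  # bound only on this path
    if clip_vision_output is not None:    # UnboundLocalError when the branch was skipped
        return clip_vision_output


# Fix: bind a default before branching, so the name always exists.
def encode_fixed(clip_vision=None):
    clip_vision_output = None
    if clip_vision is not None:
        clip_vision_output = clip_vision
    if clip_vision_output is not None:
        return clip_vision_output
    return None


try:
    encode_buggy(None)
except UnboundLocalError as e:
    print("buggy path:", e)

print("fixed path:", encode_fixed(None))
```

Which is consistent with the fix-by-updating advice: newer ComfyUI code simply doesn't leave that name unbound.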
Technically, CLIP Vision is not needed for Wan 2.2. Since FLF2V is a new ComfyUI feature, you need to update ComfyUI.
That should solve the problem.
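As a sketch of the usual update routes (the `COMFY_DIR` variable and paths here are illustrative, not official):

```shell
# Point COMFY_DIR at your install; $HOME/ComfyUI is just an assumed default.
COMFY_DIR="${COMFY_DIR:-$HOME/ComfyUI}"

if [ -d "$COMFY_DIR/.git" ]; then
    # Git checkout: pulling the latest code brings in the new FLF2V nodes.
    git -C "$COMFY_DIR" pull
else
    # Portable Windows build: run the bundled update\update_comfyui.bat instead.
    echo "No git checkout at $COMFY_DIR - use the portable build's update script."
fi
# Either way, restart ComfyUI afterwards so comfy_extras/nodes_wan.py is reloaded.
```

The restart step matters; as the comment above notes, the error kept appearing until ComfyUI was restarted.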
It's saying you don't have the right CLIP Vision model (or you don't have it installed in the right folder). Though don't ask me which model that is, because I'm not sure.