10 Comments

I am very interested in this point. Can it be implemented somehow in ComfyUI? (Otherwise, I am afraid it will be like with other models: the model itself can do a lot of things, but it is not possible to use them in Comfy.)

3. Image to video that works very well and can be controlled by a prompt. The image to video model behaves like an inpainting model so you can do things like generate from the last frame instead of the first frame or generate the video between two images.

Everything in point 3 is implemented in ComfyUI; look at the image-to-video workflow on the examples page.

Using the presented workflow, you can specify start and end images for generation. However, I could not generate a video where the end image actually appears at the end of the generated video.

I tried various examples, but in most cases the start image transitions to the end image within the first few frames, and after that the video just continues from the end image following the prompt.

Is it possible to use a node to specify at which frame of the generated video the input image should appear?

The CosmosImageToVideoLatent node is outlined in red and gives an error:

Cannot execute because a node is missing the class_type property.: Node ID '#83'

I hate getting these messages every time I try something new. Really? What class type does it need? Why isn't that described on the NVIDIA Cosmos ComfyUI model page, as in: download this, put it there.

Update your ComfyUI using 'git pull'. The node was added recently, so older installs don't know its class type, which is what causes that error.
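For a manual (git-based) install, the update amounts to pulling the latest code and refreshing dependencies. A minimal sketch, assuming the default clone location and that ComfyUI runs in the same Python environment (the path is an example, not fixed):

```shell
# Sketch: update a git-based ComfyUI install.
# Adjust the path to wherever ComfyUI was cloned.
cd ~/ComfyUI

# Pull the latest code, which includes recently added nodes
# such as CosmosImageToVideoLatent.
git pull

# Refresh Python dependencies in the environment ComfyUI runs in,
# in case the update changed requirements.
pip install -r requirements.txt
```

The portable Windows build ships its own update script instead, so use that rather than pulling by hand if that is how you installed it.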

“The best model”... not really. Hunyuan far surpasses it in terms of rendering quality, speed, flexibility, LoRA support, etc. If you're using Kijai's wrapper, you can go even further with optimizations. Not to mention the forthcoming arrival of their I2V version. I think this model will soon be forgotten unless they come out with a new, more accomplished version.

The fact that it's undistilled and has image-to-video that works well makes it better than the current Hunyuan video model.

I have no doubt that something better will come out in the future, but for now it is better.

If you consider that output quality is not a criterion to be taken into account, then this is probably the best model. As far as I'm concerned, it's far from usable for client projects, as the renderings, even in I2V mode, contain far too many distortions/artifacts. Fingers crossed for the future 🤷‍♂️

What about the MiniMax model that was released as open source a few days ago?

Quality is a concern. But Hunyuan cannot be used legally in the European Union; there is no valid license for it there, as the license explicitly excludes the EU. NVIDIA Cosmos can be used.
