I have 32 GB RAM, 24 GB VRAM, Nvidia 5090.
It hung when trying to load the fp8 diffusion model, and a red banner popped up saying “Reconnecting…” when I ran the workflow in ComfyUI 0.3.73.
I can’t run the BF16 diffusion model.
My system:
RTX 5090, 32 GB VRAM
128 GB RAM
You need more system RAM. If you keep seeing “Reconnecting…”, it means your ComfyUI server is shutting down due to OOM (out of memory). I'm watching RAM usage while running Flux.2 with the workflow they used, and I'm at around 33.1 GB of RAM in use (quick script below if you want to watch it yourself), which means your 32 GB is getting saturated and the server crashes before you can do anything.
I initially had 32 GB RAM in my system and upgraded to 128 GB, and it's been smooth sailing since. I highly recommend investing in as much RAM as you can afford so you can fully utilize your powerful GPU!
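Here's the kind of minimal watcher I mean, assuming Python 3 and psutil are available (`pip install psutil`); run it in a second terminal while the workflow generates:

```python
# Print system RAM usage once per second (sketch; assumes psutil is installed).
import time
import psutil

while True:
    mem = psutil.virtual_memory()
    used_gb = (mem.total - mem.available) / 1024**3
    total_gb = mem.total / 1024**3
    print(f"RAM: {used_gb:.1f} / {total_gb:.1f} GB ({mem.percent:.0f}%)", end="\r")
    time.sleep(1.0)
```

If the used figure pins near your total right before the “Reconnecting…” banner appears, that's the OOM.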
Are you running fp8 or BF16 with the 5090 and 128 GB RAM?
Can confirm the system RAM usage. It also runs on an RTX 4090 with 24 GB VRAM and 128 GB system RAM.
Finished in about 230 sec, offloading about 55 GB to RAM.
The FP8 workflow works fine in ComfyUI 0.3.75.
The workflow uses 70 GB RAM and about 5 GB VRAM.
96 GB RAM, RTX 3060 with 12 GB VRAM
I was able to run the FP8 version successfully on an AMD 7900 XTX with 96 GB RAM, at ~12 s/it in image-edit mode. Usable.
But text-to-image mode is much slower, at 68 s/it. I don't get it.
A workaround seems to be to provide an empty image and then prompt whatever text-to-image generation you want. Weird.
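If you want to try that workaround, a blank placeholder image is trivial to make with Pillow; the 1024x1024 size here is just an assumption, so match whatever resolution your workflow expects:

```python
# Create a plain white placeholder image for the edit workflow's image input
# (sketch; assumes Pillow is installed: pip install pillow).
from PIL import Image

blank = Image.new("RGB", (1024, 1024), color="white")
blank.save("blank.png")
```

Load blank.png into the workflow's image input, then write the text-to-image prompt as usual.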
The default fp8 workflow is obliterating my PC with an RTX 5090 and 96 GB RAM. It takes several minutes at 100% VRAM and RAM utilization to generate an image.
It's interesting how most of the new features in Flux.2 are the same ones Nano Banana Pro has. This reinforces the idea that behind Nano Banana and GPT there are either well-known models or similar models and workflows.
No other example workflows? Only one? Black Forest Labs showed so many examples.
I checked the workflow, and it does have nice notes in it, but I'd say it lacks a few prompt examples. Overall good, though; thank you.