This sounds great! How can a custom node developer give feedback so that it is taken into account before v3 is finalized?
At this stage, our Discord server would be the best place! I realize I did not mention this in the blog post; that's now been amended! If you need access to the private channel mentioned in the updated post, ping me once you join and I can give you access.
To achieve parallel execution and out-of-process node execution (which I guess means the direction forward is executing multiple branches after a topological sort), you don't really need to make everything async.
> There is no issue with calling non-async code from async call, so you can continue using whatever libraries you like
This is not true at all! You introduce the risk of blocking the event loop, which will be crucial during execution. You could offload everything from coroutines to threads, but then what is the purpose of having everything as coroutines in the first place?
The dependency resolution is still a mystery to me. Assuming a custom node declares a dependency on `some-package==2.0.0` and Comfy has `some-package==3.0.0`, how does the solution described in this specification help with that?
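A quick self-contained illustration of the risk (plain asyncio, nothing ComfyUI-specific): a synchronous sleep inside one coroutine starves a concurrent ticker, while offloading it with `asyncio.to_thread` keeps the loop responsive:

```python
import asyncio
import time

async def ticker(ticks):
    # Records timestamps; large gaps between them mean the loop was blocked.
    for _ in range(3):
        ticks.append(time.monotonic())
        await asyncio.sleep(0.05)

async def blocking_node():
    time.sleep(0.3)  # synchronous library call: freezes the whole event loop

async def offloaded_node():
    await asyncio.to_thread(time.sleep, 0.3)  # same call, run in a worker thread

async def measure(node):
    ticks = []
    await asyncio.gather(ticker(ticks), node())
    return max(b - a for a, b in zip(ticks, ticks[1:]))

blocked_gap = asyncio.run(measure(blocking_node))    # gap of roughly 0.3s
offloaded_gap = asyncio.run(measure(offloaded_node)) # gap of roughly 0.05s
```

So the coroutines don't remove the problem on their own; any sync library call in a node still has to be explicitly pushed to a thread (or kept short) to keep the loop live.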
Hey shizoidcat,
Appreciate your feedback! I'm hoping to put together more documentation around those plans to share in the near future, but here's the short version:
We're actually looking to enable different custom node packs to execute in entirely different Python processes. Each process can use its own Python environment, including different versions of libraries (with the exception of torch -- they all need to be using the same version of torch in order to pass tensors between processes through shared memory rather than copying them).
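To make the shared-memory point concrete, here's a minimal stdlib-only sketch of the mechanism (illustrative only; ComfyUI's actual tensor transport is torch-specific and isn't shown here): one side allocates a named block and writes into it, the other side attaches to it by name, so the payload itself is never copied through a pipe:

```python
from multiprocessing import shared_memory

# Producer side: allocate a named block and write the payload into it.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:4] = b"\x01\x02\x03\x04"

# Consumer side (normally a different process): attach by name.
# Both sides see the same physical memory, so no bytes are copied.
view = shared_memory.SharedMemory(name=shm.name)
data = bytes(view.buf[:4])

view.close()
shm.close()
shm.unlink()
```

This is also why the torch versions have to match: both processes interpret the same raw bytes as a tensor, so they must agree exactly on the memory layout.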
The plan is to make this largely transparent to custom node authors as long as they're using the public API described in this post -- we'll handle proxying those calls between processes. In this way, the same custom node pack should work both in-process and out-of-process, minimizing the work custom node authors need to do. The coroutines are necessary in order to make sure the host process can call functions in the child process from within a call initiated *by* the child process -- just as we can do when everything is in the same process.
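As a rough single-loop sketch of why the coroutines matter (all names here are hypothetical, not the real API): the host can await a "remote" node call while its event loop stays free to answer the callback that call makes into the host. With one host thread making blocking calls instead, the same round trip could deadlock:

```python
import asyncio

async def serve_host_api(requests: asyncio.Queue):
    # Host-side dispatcher: answers one API call coming back from the child.
    method, reply = await requests.get()
    reply.set_result(f"host handled {method}")

async def remote_node(requests: asyncio.Queue):
    # Stands in for a node running in a child process: mid-execution it
    # calls back into the host over the same channel and awaits the answer.
    reply = asyncio.get_running_loop().create_future()
    await requests.put(("load_image", reply))
    return f"node output using: {await reply}"

async def main():
    requests = asyncio.Queue()
    # The host "calls" the node and concurrently services its callback;
    # awaiting the node suspends the host coroutine instead of blocking it.
    output, _ = await asyncio.gather(remote_node(requests), serve_host_api(requests))
    return output

result = asyncio.run(main())  # "node output using: host handled load_image"
```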
I will qualify all of this with the fact that, while we have a proof-of-concept of this working, there's still a lot of work to do on it, and it's entirely possible there are pitfalls we haven't discovered yet. If you have specific concerns, please consider joining our Discord to discuss them! Feel free to ping me (@guill) there.
-- Jacob
This is all very important. We ourselves are already working on distributing the workload across multiple GPUs in our pipeline with an AWS Deadline integration. More stability will also help a lot. Thanks!
Will need more examples or docs to get up to speed with the new version's guidelines.