Discussion about this post

Red:
Please also mention the hardware/memory requirements so the community knows whether they can run the full model locally or if they’ll need to wait for quantized or distilled versions. Thanks for the early support and the blog post!
