Upload mixed-precision fp8-quantized model with comfy_quant layer configs, with sensitive layers kept in high precision

#14
Ready to merge
This branch is ready to be merged automatically.
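The PR itself does not show the comfy_quant config format, so the snippet below is only a minimal, hypothetical sketch of the idea named in the title: per-layer dtype overrides where most weights are cast to fp8 while a small set of quantization-sensitive layers stays in higher precision. The layer names, config keys, and helper function are assumptions for illustration, not the actual comfy_quant schema.

```python
# Illustrative sketch only: names and config keys below are hypothetical,
# not taken from the uploaded model or the comfy_quant format.
import torch

# Hypothetical per-layer precision config: fp8 by default,
# bf16 for layers known to be sensitive to quantization error.
layer_config = {
    "default": {"dtype": "float8_e4m3fn"},
    "sensitive_layers": {
        # e.g. first/last projections (hypothetical layer names)
        "input_blocks.0.proj": {"dtype": "bfloat16"},
        "out.2": {"dtype": "bfloat16"},
    },
}

def cast_weight(name: str, weight: torch.Tensor) -> torch.Tensor:
    """Cast a weight to fp8 unless the layer is listed as sensitive."""
    override = layer_config["sensitive_layers"].get(name)
    dtype_name = (override or layer_config["default"])["dtype"]
    # float8_e4m3fn requires a recent PyTorch build that exposes this dtype.
    return weight.to(getattr(torch, dtype_name))
```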