silveroxides committed
Commit 19a4cb9 · verified · 1 Parent(s): 5e2d746

Update README.md

Files changed (1):
  1. README.md +2 -0
README.md CHANGED
@@ -27,6 +27,8 @@ pipeline_tag: image-to-image
  ### This fp8_scaled model is faster than the official one released by ComfyOrg when used with the Loader in the workflow.

  ### The custom node that loads the model in the workflow is necessary to obtain the fastest inference on lower-VRAM GPUs.

+ ### If you want to use the int8 model, you need the [silveroxides/ComfyUI-QuantOps](https://github.com/silveroxides/ComfyUI-QuantOps) custom node until official support is merged.
+
  ![Workflow](./workflow_assets/fp8_scaled_flux2_w_enhanced_prompting-overview.png)

  ![Sample](./workflow_assets/fp8_scaled_flux2_w_enhanced_prompting-sample.png)