Update README.md
README.md
### This fp8_scaled model is faster than the official one released by ComfyOrg when used with the Loader in the workflow.

### The custom node that loads the model in the workflow is necessary to obtain the fastest inference on lower-VRAM GPUs.

### If you want to use the int8 model, you need the [silveroxides/ComfyUI-QuantOps](https://github.com/silveroxides/ComfyUI-QuantOps) custom node until official support is merged.
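
ComfyUI custom nodes such as this one are typically installed by cloning the repository into the `custom_nodes` folder. A minimal sketch, assuming a standard ComfyUI checkout (adjust the path to your installation):

```bash
# Clone the QuantOps custom node into ComfyUI's custom_nodes directory
# (path assumes ComfyUI is checked out in the current directory).
cd ComfyUI/custom_nodes
git clone https://github.com/silveroxides/ComfyUI-QuantOps

# Restart ComfyUI so the new node is discovered and loaded.
```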