Instructions for using LanguageBind/LanguageBind_Video_FT with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use LanguageBind/LanguageBind_Video_FT with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="LanguageBind/LanguageBind_Video_FT")
pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
)
```

```python
# Load model directly
from transformers import AutoModelForZeroShotImageClassification

model = AutoModelForZeroShotImageClassification.from_pretrained("LanguageBind/LanguageBind_Video_FT", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
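The zero-shot-image-classification pipeline shown above returns a list of `{"label": ..., "score": ...}` dicts, one per candidate label, sorted by score. A minimal sketch of picking the top label from such a result — the scores below are illustrative placeholders, not an actual model output:

```python
# Hedged sketch: the pipeline returns a list of {"label", "score"} dicts.
# The values here are made up to illustrate the shape of the result.
results = [
    {"label": "animals", "score": 0.92},
    {"label": "humans", "score": 0.05},
    {"label": "landscape", "score": 0.03},
]

# Select the highest-scoring candidate label
best = max(results, key=lambda r: r["score"])
print(best["label"])  # animals
```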
Commit f22d9f1 · Parent(s): 7bc45ba
Update config.json
Files changed: config.json (+1 −1)
config.json CHANGED

```diff
@@ -88,7 +88,7 @@
   "transformers_version": null,
   "vision_config": {
     "_name_or_path": "",
-    "lora_r":
+    "lora_r": 0,
     "lora_alpha": 16,
     "lora_dropout": 0.1,
     "add_time_attn": true,
```
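The commit sets `lora_r` inside `vision_config` to 0; in common LoRA setups a rank of 0 effectively disables the low-rank adapter layers. A minimal sketch of inspecting these fields, using a JSON fragment that mirrors only the keys visible in the diff (the full config.json contains many more):

```python
import json

# Fragment mirroring the vision_config fields shown in the diff above;
# this is an illustrative subset, not the complete config.json.
config_fragment = json.loads("""
{
  "transformers_version": null,
  "vision_config": {
    "_name_or_path": "",
    "lora_r": 0,
    "lora_alpha": 16,
    "lora_dropout": 0.1,
    "add_time_attn": true
  }
}
""")

vision = config_fragment["vision_config"]
print(vision["lora_r"])      # 0 after this commit
print(vision["lora_alpha"])  # 16 (unchanged)
```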