aict-sharif-edu committed
Commit 5cabeea · verified · 1 Parent(s): 02447a9

Update README.md

Files changed (1)
  1. README.md +33 -5
README.md CHANGED
@@ -64,11 +64,39 @@ The following hyperparameters were used during training:
  - training_steps: 4000
  - mixed_precision_training: Native AMP

- ### Training results
-
- - Best validation WER (Word Error Rate): 0.915
- - Best validation CER (Character Error Rate): 0.428
-
+ ### Test results
+
+ - Best test WER (Word Error Rate): 0.915
+ - Best test CER (Character Error Rate): 0.428
+
+ ### Usage
+
+ ```python
+ import torch
+ from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
+
+ device = "cuda:0" if torch.cuda.is_available() else "cpu"
+ torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
+
+ model_id = "aictsharif/whisper-tiny-fa"
+ model = AutoModelForSpeechSeq2Seq.from_pretrained(
+     model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
+ )
+ model.to(device)
+
+ processor = AutoProcessor.from_pretrained(model_id)
+ pipe = pipeline(
+     "automatic-speech-recognition",
+     model=model,
+     tokenizer=processor.tokenizer,
+     feature_extractor=processor.feature_extractor,
+     torch_dtype=torch_dtype,
+     device=device,
+ )
+
+ result = pipe('sample.mp3')
+ print(result["text"])
+ ```

  ### Framework versions
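
For longer recordings, the same pipeline can be run with chunked inference instead of a single pass. The snippet below is a minimal sketch, assuming the standard `chunk_length_s`, `batch_size`, `return_timestamps`, and `generate_kwargs` arguments of the `transformers` ASR pipeline; `long_audio.mp3` is a placeholder path, not a file provided with the model.

```python
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Load the fine-tuned checkpoint through the high-level pipeline helper.
pipe = pipeline(
    "automatic-speech-recognition",
    model="aictsharif/whisper-tiny-fa",
    torch_dtype=torch_dtype,
    device=device,
)

# Transcribe a long recording in 30-second chunks, batched for throughput.
# "long_audio.mp3" is a placeholder path for illustration only.
result = pipe(
    "long_audio.mp3",
    chunk_length_s=30,
    batch_size=8,
    return_timestamps=True,
    generate_kwargs={"language": "persian", "task": "transcribe"},
)

print(result["text"])
for chunk in result["chunks"]:
    print(chunk["timestamp"], chunk["text"])
```

Pinning `language` and `task` in `generate_kwargs` avoids relying on Whisper's automatic language detection, which can be unreliable on short or noisy Persian clips.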