ggerganov committed
Commit 24d18d2 · unverified · 1 Parent(s): 2cba95c

Update README.md

Files changed (1)
  1. bindings/javascript/README.md +77 -1
bindings/javascript/README.md CHANGED
@@ -2,4 +2,80 @@
 
 Node.js package for Whisper speech recognition
 
-For sample usage check [tests/test-whisper.js](/tests/test-whisper.js)
+Package: https://www.npmjs.com/package/whisper.cpp
+
+## Details
+
+The performance is comparable to running `whisper.cpp` in the browser via WASM.
+
+The API is currently very rudimentary:
+
+https://github.com/ggerganov/whisper.cpp/blob/npm/bindings/javascript/emscripten.cpp
+
+I am hoping there will be interest in contributing and improving it based on what is needed in practice.
+For sample usage, check [tests/test-whisper.js](https://github.com/ggerganov/whisper.cpp/blob/npm/tests/test-whisper.js).
+
+## Package building + test
+
+```bash
+# load emscripten
+source /path/to/emsdk/emsdk_env.sh
+
+# clone repo
+git clone https://github.com/ggerganov/whisper.cpp
+cd whisper.cpp
+
+# grab base.en model
+./models/download-ggml-model.sh base.en
+
+# prepare PCM sample for testing
+ffmpeg -i samples/jfk.wav -f f32le -acodec pcm_f32le samples/jfk.pcmf32
+
+# build
+mkdir build-em && cd build-em
+emcmake cmake .. && make -j
+
+# run test
+node --experimental-wasm-threads --experimental-wasm-simd ../tests/test-whisper.js
+
+# publish npm package
+make publish-npm
+```
+
+## Sample run
+
+```text
+$ node --experimental-wasm-threads --experimental-wasm-simd ../tests/test-whisper.js
+
+whisper_model_load: loading model from 'whisper.bin'
+whisper_model_load: n_vocab       = 51864
+whisper_model_load: n_audio_ctx   = 1500
+whisper_model_load: n_audio_state = 512
+whisper_model_load: n_audio_head  = 8
+whisper_model_load: n_audio_layer = 6
+whisper_model_load: n_text_ctx    = 448
+whisper_model_load: n_text_state  = 512
+whisper_model_load: n_text_head   = 8
+whisper_model_load: n_text_layer  = 6
+whisper_model_load: n_mels        = 80
+whisper_model_load: f16           = 1
+whisper_model_load: type          = 2
+whisper_model_load: adding 1607 extra tokens
+whisper_model_load: mem_required  = 506.00 MB
+whisper_model_load: ggml ctx size = 140.60 MB
+whisper_model_load: memory size   = 22.83 MB
+whisper_model_load: model size    = 140.54 MB
+
+system_info: n_threads = 8 / 10 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | NEON = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 1 | BLAS = 0 |
+
+operator(): processing 176000 samples, 11.0 sec, 8 threads, 1 processors, lang = en, task = transcribe ...
+
+[00:00:00.000 --> 00:00:11.000]  And so my fellow Americans, ask not what your country can do for you, ask what you can do for your country.
+
+whisper_print_timings:     load time =   162.37 ms
+whisper_print_timings:      mel time =   183.70 ms
+whisper_print_timings:   sample time =     4.27 ms
+whisper_print_timings:   encode time =  8582.63 ms / 1430.44 ms per layer
+whisper_print_timings:   decode time =   436.16 ms / 72.69 ms per layer
+whisper_print_timings:    total time =  9370.90 ms
+```