Xenobd/whisper.cpp (Hugging Face Space, duplicated from natasa365/whisper.cpp)
whisper.cpp / ggml / src
6.68 MB, 100 contributors, History: 571 commits
Latest commit a027c1d (10 months ago) by David Huang: HIP: implement FlashAttention via rocWMMA for CDNA and RDNA3+ (llama/12032)
Directories
Directory     Last updated       Latest commit
ggml-amx      about 1 year ago   ggml : adapt AMX to tensor->grad removal (llama/0)
ggml-blas     about 1 year ago   ggml : add support for dynamic loading of backends (llama/10469)
ggml-cann     10 months ago      ggml : upgrade init_tensor API to return a ggml_status (llama/11854)
ggml-cpu      10 months ago      ggml : fix kleidiai build (llama/12159)
ggml-cuda     10 months ago      HIP: implement FlashAttention via rocWMMA for CDNA and RDNA3+ (llama/12032)
ggml-hip      10 months ago      HIP: implement FlashAttention via rocWMMA for CDNA and RDNA3+ (llama/12032)
ggml-kompute  about 1 year ago   llama : add Qwen2VL support + multimodal RoPE (llama/10361)
ggml-metal    10 months ago      cuda/cpu: Increase support for fp16 unary operations (ggml/1125)
ggml-musa     10 months ago      CUDA: app option to compile without FlashAttention (llama/12025)
ggml-opencl   10 months ago      ggml : upgrade init_tensor API to return a ggml_status (llama/11854)
ggml-rpc      10 months ago      ggml : upgrade init_tensor API to return a ggml_status (llama/11854)
ggml-sycl     10 months ago      SYCL: Move CPY kernels to a separate file and add few missing kernels (llama/12133)
ggml-vulkan   10 months ago      ggml : upgrade init_tensor API to return a ggml_status (llama/11854)
Files
File                  Size       Last updated       Latest commit
CMakeLists.txt        11.9 kB    10 months ago      whisper : support GGML_BACKEND_DL (#2843)
ggml-alloc.c          38.5 kB    10 months ago      ggml : upgrade init_tensor API to return a ggml_status (llama/11854)
ggml-backend-impl.h   12 kB      10 months ago      ggml : upgrade init_tensor API to return a ggml_status (llama/11854)
ggml-backend-reg.cpp  17.1 kB    10 months ago      ggml-backend : keep paths in native string type when possible (llama/12144)
ggml-backend.cpp      77.6 kB    10 months ago      ggml : upgrade init_tensor API to return a ggml_status (llama/11854)
ggml-common.h         133 kB     11 months ago      CUDA: use arch list for compatibility check (llama/11775)
ggml-impl.h           18.4 kB    10 months ago      MUSA: support ARM64 and enable dp4a .etc (llama/11843)
ggml-opt.cpp          31.7 kB    about 1 year ago   ggml-opt: fix data corruption (ggml/1022)
ggml-quants.c         214 kB     about 1 year ago   ggml : refactor online repacking (llama/10446)
ggml-quants.h         8.34 kB    about 1 year ago   ggml : build backends as libraries (llama/10256)
ggml-threading.cpp    250 Bytes  about 1 year ago   ggml : build backends as libraries (llama/10256)
ggml-threading.h      198 Bytes  about 1 year ago   remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (llama/10797)
ggml.c                209 kB     10 months ago      ggml-cpu: Support s390x SIMD Instruction Set (llama/12019)
gguf.cpp              45 kB      11 months ago      cmake : add sanitizer flags for llama.cpp (llama/11279)