Commit History
Initial cmake support of SYCL for AMD GPUs (llama/9658)
7d7ac98
Alberto Cabrera Pérez
committed on
vulkan : do not use tensor->extra (llama/9407)
7d66a68
ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980)
52069b8
ggml: refactor cross entropy loss CPU impl. (ggml/976)
2a0805f
scripts : sync ggml-backend.cpp
26efed1
whisper : fix excessive memory usage (#2443)
afe3785
examples : update dr_wav.h to newer version (#2449)
d678325
Rahul Vadhyar
committed on
talk-llama : sync llama.cpp
c9ddda2
metal : reduce command encoding overhead (llama/9698)
43d5a06
sync : ggml
c5e24da
test: fix OPT_STEP_ADAMW for test-backend-ops (ggml/974)
76aa810
vulkan : mul_mat: fix UB with small warps (ggml/952)
d1a29c6
ggml : fix ggml_cast (ggml/973)
c44d575
ggml: fix gradient allocation logic (ggml/966)
ad3f29d
ggml : define missing HWCAP flags (llama/9684)
1d52105
ggml : add run-time detection of neon, i8mm and sve (llama/9331)
12c0e23
Dan Johansson
committed on
Enable use of the rebar feature to upload buffers to the device. (llama/9251)
760f8c2
Markus Tavenrath
committed on
mtgpu: enable VMM (llama/9597)
e84b4f5
R0CKSTAR
committed on
ggml : remove assert for AArch64 GEMV and GEMM Q4 kernels (llama/9217)
50395aa
Charles Xu
committed on
cann: fix crash when llama-bench is running on multiple cann devices (llama/9627)
068c697
CUDA: remove bad assert (ggml/972)
91954a7
vulkan : multithread pipeline creation (ggml/963)
ba60f98
vulkan : fix build for GGML_VULKAN_RUN_TESTS, add TFLOPS to log (ggml/961)
85e2387
vulkan : argsort barriers must be under uniform control flow (ggml/951)
b2602d7
ggml : fix GGML_MAX_N_THREADS + improve formatting (ggml/969)
ad34655
server : ffmpeg overwrite leftover temp file (#2431)
2dafb8e
whisper : add large-v3-turbo (#2440)
f3283ba
tests : remove test-backend-ops (#2434)
050ba38
ci : disable failing CUDA and Java builds
ecef312
readme : fix references to download-ggml-model.sh (#2427)
3d92452
Hugo
committed on
make : remove "talk" target until updated
5fb8fce
ggml : add ggml-cpu-impl.h (skip) (#0)
958f2d3
sync : ggml
e22e2f8
talk-llama : sync llama.cpp
f91f98d
ggml : add AVX512DQ requirement for AVX512 builds (llama/9622)
14b5848
Eric Zhang
committed on
log : add CONT level for continuing previous log entry (llama/9610)
a29a4c5
threads: fix msvc build without openmp (llama/9615)
97b3eb5
Max Krasnyansky
committed on
cuda: add q8_0->f32 cpy operation (llama/9571)
6201c74
threads: improve ggml_barrier scaling with large number of threads (llama/9598)
aca04d5
Max Krasnyansky
committed on
ggml : AVX512 gemm for Q4_0_8_8 (llama/9532)
7349efc
metal : use F32 prec for K*Q in vec FA (llama/9595)
99c4239
Revert "[SYCL] fallback mmvq (ggml/9088)" (llama/9579)
5aceb3d
Akarshan Biswas
committed on
musa: enable building fat binaries, enable unified memory, and disable Flash Attention on QY1 (MTT S80) (llama/9526)
8ec75c3
R0CKSTAR
committed on
Fix merge error in #9454 (llama/9589)
3142fa9
CUDA: enable Gemma FA for HIP/Pascal (llama/9581)
97cb7ce
RWKV v6: RWKV_WKV op CUDA implementation (llama/9454)
8d3e707
ggml-alloc : fix list of allocated tensors with GGML_ALLOCATOR_DEBUG (llama/9573)
673df39
slaren
committed on
Update CUDA graph on scale change plus clear nodes/params (llama/9550)
6b63eb1
agray3
committed on