Commit History

CUDA: revise q8_1 data layout for mul_mat_q (llama/7824)
fcfd59e

JohannesGaessler committed on

vulkan : reuse parent extra for views (llama/7806)
b9b60de

slaren OccamRazor committed on

fix softmax r2r result wrong issue (llama/7811)
c3a7159

PPxin committed on

CUDA: refactor mmq, dmmv, mmvq (llama/7716)
849ff52

JohannesGaessler committed on

ggml : refactor rope norm/neox (llama/7634)
ded0c68

ggerganov committed on

Allow number of nodes in CUDA graph to change (llama/7738)
6124287

agray3 committed on

ggml : remove OpenCL (llama/7735)
4ff3b72

ggerganov committed on

ggml : prevent builds with -ffinite-math-only (llama/7726)
154f0f8

ggerganov committed on

llama : offload to RPC in addition to other backends (llama/7640)
eab8082

rgerganov slaren committed on

ggml : use OpenMP as a thread pool (llama/7606)
7e5d850

Masaya, Kato slaren ggerganov committed on

Vulkan Mixture of Experts (MoE) support (llama/7628)
ad9ee26

OccamRazor committed on

kompute : implement op_getrows_f32 (llama/6403)
fa0872f

woachk committed on

fix bug introduced in using calloc (llama/7701)
f22c7e4

Dave Airlie committed on

Fix FlashAttention debug test, FP32 assert (llama/7684)
1bed92f

JohannesGaessler committed on

CUDA: fix Pascal FA, deq. KV to FP16 for batch > 8 (llama/7681)
d4c0faf

JohannesGaessler committed on

CUDA: quantized KV support for FA vec (llama/7527)
315df8c

JohannesGaessler committed on

ggml : fix loongson compile warnings (llama/7537)
c1442f3

ggerganov junchao-loongson committed on

faster avx512 exp implementation (llama/7551)
6dbbbab

chriselrod committed on

ggml : fix loongarch build (O2 issue) (llama/7636)
133ffbf

junchao-loongson committed on

metal : remove invalid asserts (llama/7617)
562afce

ggerganov committed on

metal : add missing asserts (llama/7617)
be552ab

ggerganov committed on

ggml : fix YARN + add tests + add asserts (llama/7617)
15da5f7

ggerganov committed on

cuda : non-cont concat support (llama/7610)
64d3007

ggerganov committed on

llama-bench : add support for the RPC backend (llama/7435)
d460266

rgerganov committed on

ggml : use atomic_flag for critical section (llama/7598)
68c6582

slaren committed on

examples : adapt to new ggml_concat (ggml/0)
36af6c5

ggerganov committed on

ggml : fix typo in ggml.c (llama/7603)
f06f1cb

jeffzhou2000 committed on

Align GEMM dispatch (llama/7566)
2171dc6

hengyu committed on

sycl : fix assert (llama/7563)
b4fb287

ggerganov committed on

vulkan: properly initialize vulkan devices for LLAMA_SPLIT_MODE_NONE (llama/7552)
da90a1e

Adriankhl committed on

rpc : resource management rework (llama/7562)
7571b13

rgerganov committed on

fix ggml_sycl_mul_mat_id() to match the change of api (llama/7436)
f0ee71c

Neo Zhang committed on

ggml : generalize GGML_OP_CONCAT (llama/7563)
8d359ad

ggerganov committed on

update HIP_UMA #7399 (llama/7414)
7097123

Djip007 slaren committed on

Allow multiple copy function pointers for CUDA graph kernel param updates (llama/7565)
143f6df

agray3 committed on

Fix q_xxs using mul_mat_q (llama/7459)
0be4f48

AidanBeltonS committed on

Add freq factors (llama/7495)
340b830

AidanBeltonS committed on

metal : add GGML_OP_REPEAT kernels (llama/7557)
0534b5d

ggerganov committed on

metal : disable FA kernel for HS=256 (llama/7556)
0c32e28

ggerganov committed on

ggml : restore ggml_rope_xpos_inplace (ggml/0)
0641dee

ggerganov committed on

ggml: aarch64: SVE kernels for q8_0_q8_0, q4_0_q8_0 vector dot (llama/7433)
51f504f

Masaya, Kato committed on

ggml : silence UB sanitizer error during iq2_xxs quantization (llama/0)
9f41704

ggerganov committed on

ggml : remove ggml_flash_attn and ggml_flash_ff (llama/7463)
4005bca

ggerganov committed on

ggml : drop support for QK_K=64 (llama/7473)
8737d46

ggerganov committed on

Update vulkan rope implementation to support frequency factors (llama/7475)
be0ec58

OccamRazor committed on

CUDA: fix FA out-of-bounds reads (llama/7479)
b38d0f9

JohannesGaessler committed on

CUDA: fix FA out-of-bounds writes (llama/7465)
2e26e3a

JohannesGaessler committed on

cuda : fix compile warning (llama/7454)
58db6c8

ggerganov committed on

CUDA: remove incorrect precision check (llama/7454)
eb4b5e0

JohannesGaessler committed on

cuda : fix rope + add tests (llama/7452)
215ce5c

ggerganov committed on