Commit History

js : remove un-needed request header from fetchRemote (#2119)
6c54394

Mark Karpelès committed

cmake : fix metal embed sources path (#2110)
087b1a8

ggerganov committed

main : dont print timings with --no-prints (#2108)
685d1c1

Daniel Ziegenberg committed

main : add options for temperature control (#2088)
9a3f777

Daniel Ziegenberg committed

whisper : switch back to F32 mask (#0)
3b7b90c

ggerganov committed

whisper.android : update example, add field to print timestamp (#2072)
03fb680

codezjx committed

cmake : fix json INTERFACE library (#2069)
0a1cadb

xcsong committed

main : fix double quote escaping in csv output (#2090)
9952a85

mashizora committed

metal : tune soft_max number of threads (#0)
99d668a

ggerganov committed

whisper : remove old flash attn code (#0)
fd57e47

ggerganov committed

ggml : try fix ppc64 (#0)
df78c25

ggerganov committed

ggml : remove oboslete alibi code (skipme) (#0)
d25c1e3

ggerganov committed

talk-llama : sync llama.cpp
f5f68d6

ggerganov committed

sync : ggml
3ea4549

ggerganov committed

ggml : optimize for ppc64le using VSX intrinsics (ggml/784)
05d3824

Hong Bo PENG and ggerganov committed

metal : fix indent (ggml/0)
d4f82d5

ggerganov committed

ggml : restore sigmoid decl order (ggml/0)
67c5387

ggerganov committed

ggml : resolve merge (ggml/0)
d692b06

ggerganov committed

ggml : full ALiBi support (llama/7192)
192bda4

ggerganov committed

metal : fix flash attention kernel requirements (llama/7169)
6cb3028

ggerganov committed

Minor arithmetic improvement to mmvq wrapper kernel (llama/7172)
ae75124

Ouadie EL FAROUKI committed

Vulkan Bugfixes and Improvements (llama/7084)
8dade62

OccamRazor committed

CUDA: generalize FP16 fattn vec kernel (llama/7061)
ca79691

JohannesGaessler committed

opencl : alignment size converted from bits to bytes (llama/7090)
2692ce5

albertjin and Cebtenzzre committed

Introduction of CUDA Graphs to LLama.cpp (llama/6766)
08fc76d

agray3 and slaren committed

metal : use `vm_allocate` instead of `posix_memalign` on macOS (llama/7078)
eb910b1

Gilad S committed

ggml : introduce bfloat16 support (llama/6412)
81ec961

Justine Tunney committed

metal : fix unused warning
24e883a

ggerganov committed

Add an option to build without CUDA VMM (llama/7067)
38b1143

wtambellini committed

gguf-split: add --no-tensor-first-split (llama/7072)
b9bc04d

Xuan Son Nguyen committed

CUDA: CUDART < 11.7 workaround for __hmax, __hmax2 (llama/7019)
4cf786d

JohannesGaessler committed

switch to using localizedDescription (llama/7010)
fd25ba6

bakkot committed

metal : remove deprecated error code (llama/7008)
42a84fb

ggerganov committed

metal : log more info on error (llama/6987)
d4dcef9

bakkot committed

ggml : fix __MSC_VER -> _MSC_VER (llama/6977)
a83f2ae

ggerganov committed

Fix more int overflow during quant (PPL/CUDA). (llama/6563)
531387f

dranger003 committed

gguf : enforce that tensor names are unique (llama/6905)
22e446d

Xuan Son Nguyen and slaren committed

add device version in device list (llama/6959)
c022e9a

Neo Zhang and arthw committed

Reset schedule earlier to allow overlap with ggml graph computation on device (llama/6933)
3a8eea8

agray3 committed

add basic tensor data validation function (llama/6884)
71e001c

slaren committed

gguf : fix mismatch between alloc and free functions (llama/6929)
d8fb433

slaren committed

Merge pull request from GHSA-p5mv-gjc5-mwqv
72b368d

ggerganov and slaren committed

ggml : fix redefinition of vaddvq_f32 for 32-bit ARM (llama/6906)
f900de6

ggerganov committed

ggml : fix MIN / MAX macros (llama/6904)
a1c0e2a

ggerganov committed

ggml : move 32-bit arm compat in ggml-impl.h (llama/6865)
7343760

ggerganov committed

llamafile : improve sgemm.cpp (llama/6796)
bfe2a5f

Justine Tunney committed

ggml : fix calloc argument ordering. (llama/6820)
12af87c

Dave Airlie committed

ggml : fix ggml_backend_cpu_supports_op() for CPY (llama/0)
d645791

ggerganov committed

ggml : group all experts in a single ggml_mul_mat_id (llama/6505)
f0b5c67

slaren and ggerganov committed