Open Hub
whisper.cpp
Very High Activity
Commits : Listings
Analyzed about 19 hours ago, based on code collected about 19 hours ago.
Jan 09, 2025 — Jan 09, 2026
Showing page 1 of 127
Commit Message | Contributor | Date
ruby : fix segmentation fault (#3591) | KITAITI Makoto | 5 days ago
sync : ggml | Georgi Gerganov | 10 days ago
ggml : bump version to 0.9.5 (ggml/1410) | Georgi Gerganov | 10 days ago
talk-llama : sync llama.cpp | Georgi Gerganov | 10 days ago
sync : ggml | Georgi Gerganov | 10 days ago
metal : add count_equal op (llama/18314) | gatbontonpc | 10 days ago
CUDA: fix KQ max calculation (llama/18487) | Johannes Gäßler | 10 days ago
metal : remove BF16 x F16 kernels (llama/18456) | Georgi Gerganov | 10 days ago
sycl: add newline at the end of CMakeLists.txt (llama/18503) | Aman Gupta | 10 days ago
Work around broken IntelSYCLConfig.cmake in Intel oneAPI 2025.x (llama/18345) | Rahul Sathe | 11 days ago
kleidiai: add and integrate SVE 256-bit vector-length kernel (llama/18458) | Charles Xu | 11 days ago
CUDA: add log line when mxfp4 acceleration is used (llama/18483) | Aman Gupta | 11 days ago
CUDA: fix replacment of bad archs in CMake (llama/18457) | Johannes Gäßler | 12 days ago
CUDA: Blackwell features for non-native builds (llama/18436) | Johannes Gäßler | 12 days ago
cuda: fix race condition in cumsum (llama/18448) | Aman Gupta | 12 days ago
HIP: Use mmq on MFMA devices for MUL_MAT_ID in cases where a lot of splits would be generated (llama/18202) | uvos | 13 days ago
Revert "ggml-cuda: use CMAKE_CUDA_ARCHITECTURES if set when GGML_NATIVE=ON (#18413)" (llama/18426) | Aman Gupta | 13 days ago
rpc: fix segfault on invalid endpoint format (llama/18387) | o7si | 13 days ago
cmake: Added more x86_64 CPU backends when building with `GGML_CPU_ALL_VARIANTS=On` (llama/18186) | Boian Berberov | 13 days ago
ggml-cuda: use CMAKE_CUDA_ARCHITECTURES if set when GGML_NATIVE=ON (llama/18413) | QDelta | 14 days ago
opencl: allow resizing transpose buffers (llama/18384) | lhez | 14 days ago
ggml-cuda: Use same regex for GGML_NATIVE=OFF (llama/18407) | Aman Gupta | 14 days ago
vulkan: preprocess mul_mat_id experts and discard workgroups more quickly (llama/18352) | Jeff Bolz | 15 days ago
vulkan: optimize decodeFuncB in coopmat2 mul_mat_id shader (llama/18349) | Jeff Bolz | 15 days ago
vulkan: Use BK=32 for coopmat2 mul_mat_id (llama/18332) | Jeff Bolz | 15 days ago
vulkan: small dequantization improvements (llama/18380) | Eve | 15 days ago
vulkan: Support UPSCALE w/antialias (llama/18327) | Jeff Bolz | 15 days ago
vulkan: handle rope with large number of rows (llama/18306) | Jeff Bolz | 15 days ago
CANN: implement the SSM_CONV operator (llama/17737) | 0Marble | 16 days ago
ggml-cuda: fix regex for arch list (llama/18371) | Aman Gupta | 16 days ago