ggerganov/llama.cpp

Port of Facebook's LLaMA model in C/C++

Languages: C, C++, Cuda, Python, Metal, Objective-C, Other
This is the stars and forks stats for the ggerganov/llama.cpp repository. As of 4 Mar 2024, it has 42,078 stars and 5,916 forks.

llama.cpp (Roadmap / Project status / Manifesto / ggml)

Inference of LLaMA model in pure C/C++

Hot topics:
- ‼️ Breaking change: `rope_freq_base` and `rope_freq_scale` must be set to zero to use the model default values: #3401
- Parallel decoding + continuous batching support added: #3228. Devs should become familiar with the new API.
- Local Falcon 180B inference on Mac Studio (falcon-180b-0.mp4)

Table of Contents: Description / Usage / Get...