Stars and forks stats for the Noeda/rllama repository. As of 26 Apr 2024, this repository has 464 stars and 25 forks.
RLLaMA is a pure Rust implementation of LLaMA large language model inference. Supported features:

- Uses either f16 or f32 weights.
- LLaMA-7B, LLaMA-13B, LLaMA-30B, and LLaMA-65B all confirmed working.
- Hand-optimized AVX2 implementation.
- OpenCL support for GPU inference; the model can be loaded only partially to the GPU with the --percentage-to-gpu command-line switch to run hybrid GPU-CPU inference.
- Simple HTTP API support, with the possibility of doing token sampling on the client side.
- It can load Vicuna-13B instruct-finetuned...
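The README excerpt mentions client-side token sampling over the HTTP API: the server returns logits and the client picks the next token itself. As a minimal sketch of what such client-side sampling looks like, here is a pure-Python temperature sampler over raw logits; the function name and interface are illustrative assumptions, not rllama's actual API.

```python
import math
import random

def sample_token(logits, temperature=0.8, seed=None):
    """Pick a token id from raw logits using temperature scaling.

    Hypothetical sketch of client-side sampling; rllama's real HTTP
    API and response format may differ.
    """
    rng = random.Random(seed)
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp((x - m) / temperature) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the softmax distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1
```

Lower temperatures concentrate probability mass on the highest logit, so at a very low temperature this behaves like greedy argmax decoding.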
repo | languages | stars | stars (weekly) | forks | forks (weekly) |
---|---|---|---|---|---|
facebook/buck2 | Rust, Starlark, Python | 2.9k | 0 | 151 | 0 |
tui-rs-revival/ratatui | Rust, Shell | 3.6k | +59 | 128 | +3 |
devmentors/Pacco | Shell, PowerShell, Dockerfile | 715 | 0 | 187 | 0 |
logspace-ai/langflow | Python, TypeScript, JavaScript | 12.7k | 0 | 1.8k | 0 |
ministryofjustice/hmpps-workload | PLpgSQL, Kotlin, Dockerfile | 0 | 0 | 0 | 0 |
jina-ai/agentchain | Python, Shell, Dockerfile | 519 | 0 | 43 | 0 |
gfreezy/seeker | Rust, Shell | 593 | 0 | 47 | 0 |
epilys/gerb | Rust, CSS, SCSS | 287 | 0 | 7 | 0 |
filecoin-project/ref-fvm | Rust, Solidity, Shell | 329 | 0 | 125 | 0 |
KTruong008/aichatbestie | Svelte, TypeScript, JavaScript | 59 | 0 | 15 | 0 |