Stars and forks stats for the okuvshynov/slowllama repository. As of 06 May 2024, this repository has 291 stars and 16 forks.
slowllama: Fine-tune Llama2 and CodeLLama models, including 70B/35B, on Apple M1/M2 devices (for example, a MacBook Air or Mac Mini) or consumer nVidia GPUs. slowllama does not use any quantization. Instead, it offloads parts of the model to SSD or main memory on both the forward and backward passes. In contrast with training large models from scratch (unattainable) or inference, where we likely care about interactivity, we can still get something fine-tuned if we let it run for a while. Current version is...
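The offloading idea the description mentions can be sketched in a few lines: keep only one layer's weights resident at a time, loading each from disk on demand during the forward pass. This is a hypothetical toy illustration with scalar "layers", not slowllama's actual implementation; the `OffloadedModel` class and its file layout are assumptions for the sketch.

```python
import os
import pickle
import tempfile

class OffloadedModel:
    """Toy model whose per-layer weights live on disk (SSD), not in RAM."""

    def __init__(self, layer_weights, cache_dir):
        # Write each layer's weights to its own file up front.
        self.paths = []
        for i, w in enumerate(layer_weights):
            path = os.path.join(cache_dir, f"layer_{i}.pkl")
            with open(path, "wb") as f:
                pickle.dump(w, f)
            self.paths.append(path)

    def forward(self, x):
        # Load one layer at a time, so only one set of weights is
        # ever resident in memory during the pass.
        for path in self.paths:
            with open(path, "rb") as f:
                w = pickle.load(f)  # fetched from disk on demand
            x = [xi * w for xi in x]  # toy "layer" computation
            del w  # free the weights before loading the next layer
        return x

with tempfile.TemporaryDirectory() as d:
    model = OffloadedModel([2.0, 3.0], d)
    print(model.forward([1.0, 1.5]))  # -> [6.0, 9.0]
```

The same pattern applies on the backward pass: each layer is re-read from disk when its gradients are needed, trading wall-clock time for memory.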
repo | techs | stars | forks
---|---|---|---
DhanushNehru/Ultimate-Web-Development-Resources | HTML, JavaScript, CSS | 306 | 152
Automattic/node-canvas | JavaScript, C++, C | 9.6k | 1.2k
microsoft/msphpsql | PHP, C++, C | 1.7k | 399
chao325/QmaoTai | Python, Dockerfile, HTML | 290 | 71
recommenders-team/recommenders | Python, Other | 16.5k | 2.9k
eric-ai-lab/MiniGPT-5 | Python | 380 | 18
AlaaLab/InstructCV | Python, Shell | 206 | 17
FoxIO-LLC/ja4 | Rust, Python | 139 | 18
jdx/rtx | Rust, Shell, Just | 3.3k | 101
ponces/treble_build_aosp | Shell, Makefile | 34 | 5