kuleshov-group/llmtune

4-Bit Finetuning of Large Language Models on One Consumer GPU

Languages: Python, CUDA, C++
Stars and forks stats for the kuleshov-group/llmtune repository. As of 30 Apr 2024, this repository has 560 stars and 63 forks.

LLMTools: Run & Finetune LLMs on Consumer GPUs

LLMTools is a user-friendly library for running and finetuning LLMs in low-resource settings. Features include:

🔨 LLM finetuning in 2-bit, 3-bit, 4-bit precision using the LP-LoRA algorithm
🐍 Easy-to-use Python API for quantization, inference, and finetuning
🤖 Modular support for multiple LLMs, quantizers, and optimization algorithms
🤗 Share all your LLMs on the HuggingFace Hub

LLMTools is a research project at Cornell University, and is based on...
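To give a sense of what 2/3/4-bit precision means for model weights, here is a minimal, generic sketch of uniform low-bit quantization in NumPy. This is purely an illustration of the idea; it is not LLMTools' actual quantizer, and the function names are hypothetical.

```python
import numpy as np

def quantize_uniform(w, bits=4):
    """Uniformly quantize a float array to `bits`-bit integer codes.

    Returns the codes plus the (scale, offset) needed to dequantize.
    Generic illustration only -- not LLMTools' quantization algorithm.
    """
    qmax = 2 ** bits - 1                      # 15 levels above zero for 4-bit
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / qmax if hi > lo else 1.0
    codes = np.round((w - lo) / scale).astype(np.uint8)
    return codes, scale, lo

def dequantize(codes, scale, lo):
    """Map integer codes back to approximate float weights."""
    return codes.astype(np.float32) * scale + lo

# A 4-bit code needs only half a byte per weight instead of 4 bytes (fp32),
# at the cost of a reconstruction error bounded by scale / 2.
w = np.random.randn(256, 256).astype(np.float32)
codes, scale, lo = quantize_uniform(w, bits=4)
w_hat = dequantize(codes, scale, lo)
print(codes.max() <= 15, float(np.abs(w - w_hat).max()) <= scale)
```

In practice, libraries like LLMTools use more sophisticated, data-aware quantizers and keep the LoRA adapter weights in higher precision during finetuning; the sketch above only shows the storage/accuracy trade-off that makes low-bit finetuning fit on one consumer GPU.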