Lightning-AI/lit-llama

Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed.
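As a rough illustration of the Int8 idea mentioned above, below is a minimal per-channel absmax weight-quantization sketch in plain PyTorch. It only illustrates the general technique; it is not lit-llama's actual quantization code (which relies on dedicated libraries), and all names are illustrative.

```python
import torch

def quantize_int8(weight: torch.Tensor):
    # One scale per output row so every row maps into the int8 range [-127, 127].
    scale = weight.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp((weight / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize_int8(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Recover an approximate float weight from the int8 values and per-row scales.
    return q.to(torch.float32) * scale

w = torch.randn(4, 8)
q, scale = quantize_int8(w)
print((w - dequantize_int8(q, scale)).abs().max())  # small reconstruction error
```

Storing weights as int8 plus a per-row scale roughly quarters memory use relative to fp32, at the cost of a small reconstruction error as shown by the printed residual.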

Python
These are the stars and forks stats for the Lightning-AI/lit-llama repository. As of 6 May 2024, this repository has 5,248 stars and 450 forks.

⚡ Lit-LLaMA ⚡ Independent implementation of LLaMA pretraining, finetuning, and inference code that is fully open source under the Apache 2.0 license. This implementation builds on nanoGPT. The open-source code in this repository works with the original LLaMA weights that are distributed by Meta under a research-only license. Looking for LLaMA 2? Meta AI has since released LLaMA 2, and new Apache 2.0-licensed weights are being released as part of the Open LLaMA project. To...
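To make the LoRA finetuning mention concrete, here is a minimal conceptual sketch of a LoRA-style linear layer in PyTorch. It shows the low-rank update technique only; the class and parameter names are assumptions for illustration, not lit-llama's actual module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """Frozen base weight plus a trainable low-rank update: y = x W^T + (alpha/r) * x A^T B^T."""
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # Pretrained weight stays frozen; only lora_a / lora_b receive gradients.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02, requires_grad=False)
        self.lora_a = nn.Parameter(torch.randn(r, in_features) * 0.02)  # down-projection
        self.lora_b = nn.Parameter(torch.zeros(out_features, r))        # up-projection, starts at zero
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = F.linear(x, self.weight)
        update = F.linear(F.linear(x, self.lora_a), self.lora_b)
        return base + self.scaling * update

layer = LoRALinear(32, 64)
print(layer(torch.randn(4, 32)).shape)  # torch.Size([4, 64])
```

Because lora_b starts at zero, the layer initially behaves exactly like the frozen pretrained linear layer, and only the small A/B factors need to be trained and saved.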
| Repo | Languages | Stars | Stars (weekly) | Forks | Forks (weekly) |
|---|---|---|---|---|---|
| nomic-ai/gpt4all | C++, QML, Python | 52.9k | 0 | 5.8k | 0 |
| ymcui/Chinese-LLaMA-Alpaca | Python, Shell | 14.9k | 0 | 1.5k | 0 |
| plasma-umass/ChatDBG | C++, Python, Other | 617 | 0 | 31 | 0 |
| ShuhongChen/panic3d-anime-reconstruction | Python, Cuda, C++ | 646 | 0 | 54 | 0 |
| mlfoundations/open_flamingo | Python, Shell, Makefile | 2.9k | 0 | 208 | 0 |
| Cerebras/modelzoo | Python, Shell | 721 | 0 | 97 | 0 |
| vanhoefm/macstealer | C, Python, Makefile | 459 | +2 | 45 | +1 |
| hikettei/cl-waffe | Common Lisp, Jupyter Notebook, Python | 47 | 0 | 2 | 0 |
| HKUST-Aerial-Robotics/HKUST-ELEC5660-Introduction-to-Aerial-Robotics | C++, C, C# | 36 | 0 | 5 | 0 |
| manateelazycat/nova | Emacs Lisp, Python | 14 | 0 | 1 | 0 |