hiyouga/LLaMA-Efficient-Tuning

Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, ChatGLM2)

Topics: Python, reinforcement-learning, transformers, falcon, llama, quantization, language-model, fine-tuning, peft, dpo, pre-training, baichuan, llm, rlhf, chatglm, qlora, internlm, llama2, qwen, flash-attention-2, longlora
These are the stars and forks stats for the hiyouga/LLaMA-Efficient-Tuning repository. As of 28 Apr 2024, it has 4,808 stars and 946 forks.

LLaMA Efficient Tuning

👋 Join our WeChat.

[ English | 中文 ]

Changelog

[23/09/27] We supported $S^2$-Attn, proposed by LongLoRA, for the LLaMA models. Try the `--shift_attn` argument to enable shift short attention.

[23/09/23] We integrated the MMLU, C-Eval and CMMLU benchmarks in this repo. See this example to evaluate your models.

[23/09/10] We supported FlashAttention-2 for the LLaMA models. Try the `--flash_attn` argument to enable FlashAttention-2 if you are using RTX 4090, A100 or H100 GPUs.

[23/08/18] We...
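The changelog entries above enable these features through command-line flags. Below is a minimal sketch of a LoRA fine-tuning run with both attention options turned on; the entry-point script, model, dataset and output paths are illustrative assumptions, and only `--flash_attn` and `--shift_attn` are taken from the changelog itself.

```bash
# Minimal sketch of a supervised fine-tuning run. Everything except
# --flash_attn and --shift_attn (named in the changelog above) is an
# illustrative assumption, not the repo's documented invocation.
python src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --dataset alpaca_gpt4_en \
    --finetuning_type lora \
    --flash_attn \
    --shift_attn \
    --output_dir output/llama2-7b-lora
```

Per the changelog, `--flash_attn` (FlashAttention-2) targets RTX 4090, A100 or H100 GPUs, while `--shift_attn` enables LongLoRA's shift short attention, which is mainly relevant when fine-tuning for longer context lengths.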
| repo | techs | stars | stars (weekly) | forks | forks (weekly) |
| --- | --- | --- | --- | --- | --- |
| goldfishh/chatgpt-tool-hub | Python | 782 | 0 | 100 | 0 |
| kdave/btrfsmaintenance | Shell, Python | 778 | 0 | 75 | 0 |
| MaximePremont/Zappy_Epitech | ASP.NET, C#, ShaderLab | 13 | 0 | 2 | 0 |
| SynthstromAudible/DelugeFirmware | C, C++, Tcl | 371 | +2 | 64 | 0 |
| GNOME/libxml2 | C, RPGLE, HTML | 482 | 0 | 337 | 0 |
| sampathshivakumar/Python-Source-Code | Dockerfile, Python | 1 | 0 | 12 | 0 |
| apache/pulsar-site | HTML, JavaScript, CSS | 28 | 0 | 126 | +2 |
| Blockstream/greenlight | Rust, Python, Dockerfile | 66 | +1 | 21 | +1 |
| xinyu1205/Recognize_Anything-Tag2Text | Jupyter Notebook, Python | 1.8k | 0 | 175 | 0 |
| Vahe1994/SpQR | Python, C++ | 451 | 0 | 33 | 0 |