These are the stars and forks stats for the hiyouga/LLaMA-Efficient-Tuning repository. As of 06 Dec 2023, this repository has 4,808 stars and 946 forks.
LLaMA Efficient Tuning. Join our WeChat. [ English | 中文 ]

Changelog

[23/09/27] We supported $S^2$-Attn, proposed by LongLoRA, for the LLaMA models. Try the --shift_attn argument to enable shift short attention.
[23/09/23] We integrated the MMLU, C-Eval and CMMLU benchmarks in this repo. See this example to evaluate your models.
[23/09/10] We supported FlashAttention-2 for the LLaMA models. Try the --flash_attn argument to enable FlashAttention-2 if you are using RTX 4090, A100 or H100 GPUs.
[23/08/18] We...
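The changelog above names two CLI switches. As a hedged sketch of how they might be passed: only --flash_attn and --shift_attn appear on this page; the launcher script name src/train_bash.py and every other argument are assumptions for illustration, not taken from this page.

```shell
# Hypothetical invocation sketch. Only the --flash_attn and --shift_attn
# flags come from the changelog above; the script name and all remaining
# arguments are assumed for illustration.
python src/train_bash.py \
    --stage sft \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --do_train \
    --finetuning_type lora \
    --flash_attn \
    --shift_attn \
    --output_dir output/llama2-lora
# --flash_attn enables FlashAttention-2 (per the changelog, for RTX 4090 / A100 / H100).
# --shift_attn enables LongLoRA's S^2-Attn (shift short attention).
```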
repo | languages | stars | weekly Δ | forks | weekly Δ |
---|---|---|---|---|---|
goldfishh/chatgpt-tool-hub | Python | 782 | 0 | 100 | 0 |
kdave/btrfsmaintenance | Shell, Python | 778 | 0 | 75 | 0 |
MaximePremont/Zappy_Epitech | ASP.NET, C#, ShaderLab | 13 | 0 | 2 | 0 |
SynthstromAudible/DelugeFirmware | C, C++, Tcl | 371 | +2 | 64 | 0 |
GNOME/libxml2 | C, RPGLE, HTML | 482 | 0 | 337 | 0 |
sampathshivakumar/Python-Source-Code | Dockerfile, Python | 1 | 0 | 12 | 0 |
apache/pulsar-site | HTML, JavaScript, CSS | 28 | 0 | 126 | +2 |
Blockstream/greenlight | Rust, Python, Dockerfile | 66 | +1 | 21 | +1 |
xinyu1205/Recognize_Anything-Tag2Text | Jupyter Notebook, Python | 1.8k | 0 | 175 | 0 |
Vahe1994/SpQR | Python, C++ | 451 | 0 | 33 | 0 |