jianzhnie/Chinese-Guanaco

Easy and efficient finetuning of QLoRA LLMs (supports LLaMA, LLaMA 2, BLOOM, Baichuan, GLM, Falcon). Efficient quantized training and deployment of large models.

Python · Shell · bloom · llama · glm · lora · peft · llms · baichuan-7b · baichuan-13b
These are the stars and forks stats for the jianzhnie/Chinese-Guanaco repository. As of 30 Apr 2024, the repository has 430 stars and 48 forks.

👋🤗🤗👋 Join our WeChat. Efficient Finetuning of Quantized LLMs --- a low-resource quantized training and deployment scheme for large language models (Chinese | English). This is the repo for the Efficient Finetuning of Quantized LLMs project, which aims to build and share instruction-following Chinese baichuan-7b/LLaMA/Pythia/GLM tuning methods that can be trained on a single Nvidia RTX 2080 Ti, and a multi-round chatbot that can be trained on a single Nvidia RTX 3090 with a context length of 2048. It uses bitsandbytes for quantization and is integrated with Huggingface's...
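The repo's approach combines 4-bit quantization (bitsandbytes) with LoRA adapters on top of Hugging Face models. Below is a minimal illustrative sketch of that pattern using `transformers` and `peft`; the checkpoint name, `target_modules`, and LoRA hyperparameters are assumptions for demonstration, not this repository's exact configuration.

```python
# Minimal QLoRA sketch: 4-bit base model + trainable LoRA adapters.
# Checkpoint and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "baichuan-inc/Baichuan-7B"  # assumed example checkpoint

# 4-bit NF4 quantization so the frozen base model fits on a single consumer GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

# Freeze the quantized weights and train only small LoRA adapter matrices.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["W_pack"],  # assumed attention projection name for Baichuan
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

The resulting `model` can then be passed to a standard Hugging Face `Trainer` on an instruction-following dataset; only the LoRA parameters receive gradients, which is what keeps the memory footprint within a single RTX 2080 Ti / 3090.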