InternLM/xtuner

A toolkit for efficiently fine-tuning LLMs (InternLM, Llama, Baichuan, Qwen, ChatGLM2)

Topics: Python, chatbot, llama, conversational-ai, peft, baichuan, large-language-models, llm, supervised-finetuning, chatglm, llm-training, chatglm2, internlm, llama2, qwen
Stars and forks stats for the InternLM/xtuner repository. As of 28 Apr 2024, this repository has 419 stars and 35 forks.

English | 简体中文

👋 Join us on Twitter, Discord and WeChat

🎉 News

- [2023.09.20] Support InternLM-20B models!
- [2023.09.06] Support Baichuan2 models!
- [2023.08.30] XTuner is released, with multiple fine-tuned adapters on HuggingFace.

📖 Introduction

XTuner is a toolkit for efficiently fine-tuning LLMs, developed by the MMRazor and MMDeploy teams.

- Efficiency: Supports LLM fine-tuning on consumer-grade GPUs. The minimum GPU memory required for 7B LLM fine-tuning is only 8GB, indicating that users can use nearly...
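The excerpt above is truncated before it explains how that 8GB figure is reached, but the usual route is QLoRA-style training: freeze the base model in 4-bit quantization and train only small low-rank adapters. The following is a minimal, hedged sketch of that technique using Hugging Face `transformers` and `peft`, not XTuner's actual code path; the checkpoint name `internlm/internlm-7b` and the `target_modules` names are assumptions for illustration.

```python
# Hedged sketch of QLoRA-style fine-tuning setup: the kind of technique
# that makes 7B-model tuning fit on a single consumer GPU. This is not
# XTuner's internal implementation.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the frozen base model with 4-bit NF4 quantization to cut memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "internlm/internlm-7b",           # assumed checkpoint name
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)

# Attach small trainable low-rank adapters; only these receive gradients.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # assumed attention projection names
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

In practice, XTuner wraps this kind of setup behind ready-made configs and an `xtuner train` command-line entry point, so users pick a config rather than writing the training loop themselves.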
| repo | techs | stars | stars weekly | forks | forks weekly |
| --- | --- | --- | --- | --- | --- |
| FederatedAI/eggroll | Python, Scala, Java | 240 | 0 | 70 | 0 |
| cmsc330fall23/cmsc330fall23 | Python, OCaml, Shell | 49 | 0 | 19 | 0 |
| cofactoryai/textbase | Python | 1.3k | 0 | 360 | 0 |
| estebanpdl/osintgpt | Python | 228 | 0 | 32 | 0 |
| sgammon/rules_graalvm | Starlark, Python, Other | 12 | 0 | 5 | 0 |
| luban-agi/Awesome-AIGC-Tutorials | | 1.9k | 0 | 103 | 0 |
| dwrensha/acronymy-assistant | AGS Script, JavaScript, Python | 22 | 0 | 2 | 0 |
| Poing-Studios/godot-admob-plugin | GDScript, Python | 147 | 0 | 11 | 0 |
| tastypepperoni/PPLBlade | Go, Python | 327 | 0 | 38 | 0 |
| zenml-io/mlstacks | HCL, Python, Shell | 220 | 0 | 20 | 0 |