hiyouga/ChatGLM-Efficient-Tuning

Fine-tuning ChatGLM-6B with PEFT | 基于 PEFT 的高效 ChatGLM 微调

Topics: Python, transformers, pytorch, lora, language-model, alpaca, fine-tuning, peft, huggingface, chatgpt, rlhf, chatglm, qlora, chatglm2
Stars and forks stats for the hiyouga/ChatGLM-Efficient-Tuning repository. As of 02 May 2024, this repository has 3,117 stars and 455 forks.

ChatGLM Efficient Tuning

Fine-tuning the 🤖ChatGLM-6B model with 🤗PEFT.

👋 Join our WeChat. [ English | 中文 ]

If you have any questions, please refer to our Wiki 📄.

Notice: This repo will not be maintained in the future. Please follow LLaMA-Efficient-Tuning for fine-tuning language models (including ChatGLM2-6B).

Changelog

[23/07/15] We developed an all-in-one Web UI for training, evaluation, and inference. Try train_web.py to fine-tune the ChatGLM-6B model in your Web browser. Thanks to @KanadeSiina and @codemayq...
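The repo's PEFT-based fine-tuning centers on LoRA: instead of updating a frozen weight matrix W, a low-rank adapter B @ A is trained, giving an effective weight W + (alpha / r) * B @ A. The arithmetic can be sketched in plain Python (an illustrative toy, not the repo's implementation, which uses the Hugging Face peft library):

```python
# Toy sketch of the LoRA update used in PEFT fine-tuning (illustrative only).
# The frozen weight W stays fixed; only the small matrices A and B are trained.

def matmul(a, b):
    # naive matrix multiply for small lists-of-lists
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_weight(W, A, B, alpha, r):
    # effective weight W' = W + (alpha / r) * (B @ A)
    delta = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j]
             for j in range(len(W[0]))] for i in range(len(W))]

# toy example: 2x2 frozen weight, rank-1 adapter (r = 1)
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[1.0, 2.0]]         # shape r x in_features
B = [[0.5], [0.25]]      # shape out_features x r
print(lora_weight(W, A, B, alpha=1, r=1))  # → [[1.5, 1.0], [0.25, 1.5]]
```

Because r is much smaller than the weight dimensions in practice, the trainable parameter count drops by orders of magnitude, which is what makes fine-tuning a 6B-parameter model feasible on a single consumer GPU.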