tatsu-lab/stanford_alpaca

Code and documentation to train Stanford's Alpaca models and to generate the data.

Python · deep-learning · language-model · instruction-following
Stars and forks stats for the tatsu-lab/stanford_alpaca repository: as of 20 Apr 2024 it has 26,944 stars and 3,836 forks.

Stanford Alpaca: An Instruction-following LLaMA Model

This is the repo for the Stanford Alpaca project, which aims to build and share an instruction-following LLaMA model. The repo contains:

- The 52K data used for fine-tuning the model.
- The code for generating the data.
- The code for fine-tuning the model.
- The code for recovering Alpaca-7B weights from our released weight diff.

Note: We thank the community for feedback on Stanford Alpaca and for supporting our research. Our live demo is suspended until...
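The 52K dataset ships in the repo as a JSON list of records with `instruction`, `input` (possibly empty), and `output` fields, which are rendered into a fixed prompt template before fine-tuning. A minimal sketch of that rendering step follows; the sample record is illustrative, not taken from the actual file:

```python
# Each entry of the 52K dataset (alpaca_data.json) has the keys
# "instruction", "input" (may be an empty string), and "output".
record = {
    "instruction": "Classify the sentiment of the sentence.",
    "input": "I loved the movie.",
    "output": "Positive",
}  # illustrative example record, not from the real dataset


def build_prompt(ex):
    """Render a record into an Alpaca-style training prompt."""
    if ex["input"]:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{ex['instruction']}\n\n"
            f"### Input:\n{ex['input']}\n\n### Response:"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{ex['instruction']}\n\n### Response:"
    )


print(build_prompt(record))
```

During fine-tuning the model is trained to continue such a prompt with the record's `output`; empty-`input` records use the shorter template without the `### Input:` section.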
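Because the LLaMA license restricts redistributing the fine-tuned weights directly, the repo releases a weight *diff*: each parameter of the diff stores (tuned minus base), so adding it element-wise to the original LLaMA-7B checkpoint reproduces Alpaca-7B. A conceptual sketch, with plain lists standing in for tensors and illustrative parameter names (not the repo's actual recovery script):

```python
# Hypothetical tiny "checkpoints": mapping of parameter name -> values.
# Real checkpoints are PyTorch state dicts of tensors.
base = {"layer0.weight": [0.10, -0.20], "layer0.bias": [0.00, 0.50]}
diff = {"layer0.weight": [0.02, 0.03], "layer0.bias": [-0.01, 0.00]}

# Recover the fine-tuned weights: tuned = base + diff, element-wise.
recovered = {
    name: [b + d for b, d in zip(base[name], diff[name])]
    for name in base
}
```

The same addition over every parameter of the full model, followed by saving the result as a checkpoint, yields usable Alpaca-7B weights while the published artifact itself is not a working model.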
Repo | Languages | Stars | Weekly | Forks | Weekly
MasterBin-IIAU/UNINEXT | Python, Cuda, C++ | 1.3k | 0 | 146 | 0
thu-ml/unidiffuser | Python, Jupyter Notebook | 1.1k | +5 | 69 | 0
yizhongw/self-instruct | Python, Jupyter Notebook, Shell | 3.1k | 0 | 360 | 0
ShreyaR/guardrails | Python, Makefile | 2.3k | 0 | 150 | 0
hnmr293/sd-webui-cutoff | Python | 998 | +1 | 73 | 0
tangjyan/zh-cn | SCSS, JavaScript, HTML | 23 | 0 | 31 | 0
world-class/REPL | SCSS, Python | 754 | +8 | 203 | +1
TunSafe/TunSafe | Assembly, Perl, C++ | 898 | 0 | 230 | 0
jellyfin/jellyfin-media-player | C++, JavaScript, CMake | 2.2k | 0 | 227 | 0
graphdeco-inria/nerfshop | Cuda, C++, Python | 394 | 0 | 18 | 0