tloen/alpaca-lora

Instruct-tune LLaMA on consumer hardware

Languages: Jupyter Notebook, Python, Dockerfile
These are the stars and forks stats for the tloen/alpaca-lora repository. As of 20 Apr 2024, it has 17,162 stars and 2,091 forks.

🦙🌲🤏 Alpaca-LoRA

🤗 Try the pretrained model out here, courtesy of a GPU grant from Huggingface! Users have created a Discord server for discussion and support here.

4/14: Chansung Park's GPT4-Alpaca adapters: #340

This repository contains code for reproducing the Stanford Alpaca results using low-rank adaptation (LoRA). We provide an Instruct model of similar quality to text-davinci-003 that can run on a Raspberry Pi (for research), and the code is easily extended to the 13b, 30b, and 65b models. In...
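As a rough illustration of how such a LoRA adapter is used at inference time, here is a minimal sketch with HuggingFace transformers and peft. The base-model and adapter IDs, prompt template, and generation settings below are assumptions for illustration, not taken from this page.

```python
# Minimal sketch: load a frozen LLaMA base model and apply LoRA adapter weights.
# The checkpoint names below are assumptions, not confirmed by this page.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE_MODEL = "decapoda-research/llama-7b-hf"   # assumed base checkpoint
LORA_WEIGHTS = "tloen/alpaca-lora-7b"          # assumed adapter repo

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Wrap the frozen base model with the low-rank adapter weights.
model = PeftModel.from_pretrained(model, LORA_WEIGHTS, torch_dtype=torch.float16)
model.eval()

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what LoRA is in one sentence.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because only the small low-rank adapter is trained and shipped, the full-precision base weights stay frozen, which is what makes fine-tuning and inference feasible on consumer hardware.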
| repo | techs | stars | stars weekly | forks | forks weekly |
|---|---|---|---|---|---|
| YBIFoundation/Fundamental | Jupyter Notebook | 901 | 0 | 389 | 0 |
| mathworks/Call-Simulink-from-Python | MATLAB, Python | 8 | 0 | 1 | 0 |
| THUDM/ChatGLM-6B | Python, Shell | 34.8k | +177 | 4.7k | +16 |
| openai/evals | Python, Jupyter Notebook, JavaScript | 12.1k | 0 | 2.3k | 0 |
| THUDM/GLM | Python, Shell, Dockerfile | 2.7k | +6 | 271 | 0 |
| awslabs/mountpoint-s3 | Rust, Shell, Python | 3.4k | 0 | 90 | 0 |
| Noeda/rllama | Rust, Dockerfile | 464 | +3 | 25 | 0 |
| facebook/buck2 | Rust, Starlark, Python | 2.9k | 0 | 151 | 0 |
| devmentors/Pacco | Shell, PowerShell, Dockerfile | 715 | 0 | 187 | 0 |
| giuspek/FormalMethods2023 | SMT, Python | 3 | 0 | 8 | 0 |