juncongmoo/pyllama

LLaMA: Open and Efficient Foundation Language Models

Languages: Python, Shell
Stars and forks stats for the juncongmoo/pyllama repository. As of 27 Apr 2024, the repository has 2,643 stars and 308 forks.

🦙 LLaMA - Run LLM in a Single 4GB GPU

📢 pyllama is a hacked version of LLaMA based on Facebook's original implementation, but more convenient to run on a single consumer-grade GPU. The Hugging Face LLaMA implementation is available at pyllama.hf.

📥 Installation

In a conda env with PyTorch / CUDA available, run:

    pip install pyllama -U

🐏 If you have installed the llama library from other sources, please uninstall that previous llama library first, then use pip install pyllama -U to install the latest version.

📦 ...
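A rough sanity check on the 4 GB figure: LLaMA-7B has roughly 6.7B parameters, so the memory needed to hold its weights depends almost entirely on the bit width they are stored at. The sketch below is illustrative arithmetic only, not pyllama code; the parameter count is the published LLaMA-7B size, and overhead for activations and buffers is ignored.

    # Back-of-the-envelope weight-memory estimate for LLaMA-7B at different
    # quantization bit widths. Illustrative only; not part of pyllama.

    def approx_weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
        """Approximate memory needed to hold the model weights, in GiB."""
        total_bytes = n_params * bits_per_weight / 8
        return total_bytes / 1024**3

    if __name__ == "__main__":
        n_params = 6.7e9  # LLaMA-7B parameter count
        for bits in (16, 8, 4):
            gb = approx_weight_memory_gb(n_params, bits)
            print(f"{bits:>2}-bit weights: ~{gb:.1f} GiB")
        # 16-bit: ~12.5 GiB, 8-bit: ~6.2 GiB, 4-bit: ~3.1 GiB -- only the
        # 4-bit variant leaves headroom for activations on a 4 GB card.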
Repo | Techs | Stars | Weekly | Forks | Weekly
emdgroup/foundry-dev-tools | Python | 84 | 0 | 13 | 0
numba/numba | Python, C, C++ | 9k | 0 | 1.1k | 0
edtechre/pybroker | Python | 1.3k | +7 | 190 | +1
bEsPoKeN-tOkEns/token-tester | Solidity, Python, Shell | 146 | 0 | 12 | 0
facebookincubator/buck2-prelude | Starlark, Python, Erlang | 31 | 0 | 20 | 0
bazelbuild/platforms | Starlark, Shell | 88 | 0 | 65 | 0
bazel-contrib/bazel-catalog | Shell, Starlark, jq | 3 | 0 | 0 | 0
Shopify/tracky | Swift, Python | 257 | 0 | 18 | 0
madox2/vim-ai | Vim Script, Python, Shell | 395 | 0 | 28 | 0
easychen/openai-gpt-dev-notes-for-cn-developer | Shell | 1.3k | +5 | 92 | 0