Stars and forks statistics for the juncongmoo/pyllama repository. As of 27 April 2024, this repository has 2,643 stars and 308 forks.
🦙 LLaMA - Run LLM in a Single 4GB GPU. 📢 pyllama is a hacked version of LLaMA based on the original Facebook implementation, but more convenient to run on a single consumer-grade GPU. Hugging Face's LLaMA implementation is available at pyllama.hf. 📥 Installation: in a conda env with PyTorch / CUDA available, run `pip install pyllama -U`. 🐏 If you have installed the llama library from other sources, please uninstall the previous llama library and use `pip install pyllama -U` to install the latest version. 📦...
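The installation steps quoted above can be run as a short shell session. The commands are taken from the README excerpt; the `-y` flag on the uninstall step is an assumption added here to make it non-interactive:

```shell
# Remove any llama package previously installed from other sources
# (-y skips the confirmation prompt; not in the original instructions)
pip uninstall -y llama

# Install, or upgrade to, the latest pyllama release
pip install pyllama -U
```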
| repo | languages | stars | stars (weekly Δ) | forks | forks (weekly Δ) |
|---|---|---|---|---|---|
| emdgroup/foundry-dev-tools | Python | 84 | 0 | 13 | 0 |
| numba/numba | Python, C, C++ | 9k | 0 | 1.1k | 0 |
| edtechre/pybroker | Python | 1.3k | +7 | 190 | +1 |
| bEsPoKeN-tOkEns/token-tester | Solidity, Python, Shell | 146 | 0 | 12 | 0 |
| facebookincubator/buck2-prelude | Starlark, Python, Erlang | 31 | 0 | 20 | 0 |
| bazelbuild/platforms | Starlark, Shell | 88 | 0 | 65 | 0 |
| bazel-contrib/bazel-catalog | Shell, Starlark, jq | 3 | 0 | 0 | 0 |
| Shopify/tracky | Swift, Python | 257 | 0 | 18 | 0 |
| madox2/vim-ai | Vim Script, Python, Shell | 395 | 0 | 28 | 0 |
| easychen/openai-gpt-dev-notes-for-cn-developer | Shell | 1.3k | +5 | 92 | 0 |