jankais3r/LLaMA_MPS

Run LLaMA (and Stanford-Alpaca) inference on Apple Silicon GPUs.

As of 19 Apr 2024, this repository has 572 stars and 47 forks.

As you can see, unlike other LLMs, LLaMA is not biased in any way 😄

### Initial setup steps

1. Clone this repo:

```shell
git clone https://github.com/jankais3r/LLaMA_MPS
```

2. Install Python dependencies:

```shell
cd LLaMA_MPS
pip3 install virtualenv
python3 -m venv env
source env/bin/activate
pip3 install -r requirements.txt
pip3 install -e .
```

### LLaMA-specific setup

3. Download the model weights and put them into a folder called `models` (e.g., `LLaMA_MPS/models/7B`)

4. ...
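Before downloading the weights, it can be worth confirming that the PyTorch build inside the virtualenv can actually use the Metal (MPS) backend, since that is what this repo runs inference on. A minimal sketch (`mps_available` is a hypothetical helper for illustration, not part of this repo):

```python
def mps_available() -> bool:
    """Return True if PyTorch's Metal (MPS) backend is built and usable."""
    try:
        import torch
    except ImportError:
        # PyTorch not installed (e.g., requirements.txt not yet run).
        return False
    return torch.backends.mps.is_built() and torch.backends.mps.is_available()

# Pick the GPU when possible, otherwise fall back to CPU.
device = "mps" if mps_available() else "cpu"
print(f"Running on: {device}")
```

On an Apple Silicon Mac with a recent PyTorch this should report `mps`; anywhere else it falls back to `cpu`.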