Stars and forks stats for the kennethleungty/Llama-2-Open-Source-LLM-CPU-Inference repository. As of 5 May 2024, this repository has 820 stars and 153 forks.
Running Llama 2 and Other Open-Source LLMs on CPU Inference Locally for Document Q&A

A clearly explained guide to running quantized open-source LLM applications on CPUs using Llama 2, C Transformers, GGML, and LangChain.

Step-by-step guide on TowardsDataScience: https://towardsdatascience.com/running-llama-2-on-cpu-inference-for-document-q-a-3d636037a3d8

Context: Third-party commercial large language model (LLM) providers like OpenAI's GPT-4 have democratized LLM use via simple API calls. However, ...
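The blurb above describes loading a quantized GGML build of Llama 2 for CPU inference through the C Transformers bindings and LangChain. As a hedged sketch only (not the repository's actual code): the model filename, path, and config values below are assumptions for illustration, and the weights must already exist locally.

```python
# Hedged sketch: load a locally stored, quantized Llama 2 GGML model for CPU
# inference using LangChain's CTransformers wrapper. The model path and the
# config values are illustrative assumptions, not taken from the repository.
from langchain.llms import CTransformers

llm = CTransformers(
    model="models/llama-2-7b-chat.ggmlv3.q8_0.bin",  # assumed local quantized weights
    model_type="llama",                              # architecture hint for the GGML backend
    config={"max_new_tokens": 256, "temperature": 0.01},
)

print(llm("In one sentence, what does quantization do to model size?"))
```

Because the weights are quantized (e.g., 8-bit in the assumed filename), the model fits in ordinary RAM and runs on a CPU without a GPU, which is the point of the guide.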
| repo | techs | stars | stars (weekly Δ) | forks | forks (weekly Δ) |
|---|---|---|---|---|---|
| Jamie-Stirling/RetNet | Python | 893 | 0 | 82 | 0 |
| JayZeeDesign/researcher-gpt | Python | 363 | 0 | 207 | 0 |
| psychic-api/rag-stack | TypeScript, Python, HCL | 1.1k | 0 | 90 | 0 |
| Fadi002/unshackle | Shell, Python, C++ | 1.5k | 0 | 85 | 0 |
| facebookresearch/amortized-optimization-tutorial | TeX, Python | 212 | 0 | 12 | 0 |
| microsoft/TypeChat | TypeScript, Nunjucks, JavaScript | 6.8k | +61 | 301 | +2 |
| mattzcarey/code-review-gpt | TypeScript, JavaScript, CSS | 866 | +25 | 60 | +4 |
| apptension/saas-boilerplate | TypeScript, Python, MDX | 1.1k | 0 | 93 | 0 |
| YvanYin/Metric3D | Python, Shell | 294 | 0 | 12 | 0 |
| Maknee/minigpt4.cpp | C++, Python, CMake | 482 | +4 | 17 | +2 |