intel-analytics/BigDL

Accelerate LLMs with low-bit (INT3 / INT4 / NF4 / INT5 / INT8) optimizations using bigdl-llm

Languages: Jupyter Notebook, Scala, Python, Java, Shell, Dockerfile
Topics: python, scala, spark, tensorflow, keras, transformers, pytorch, bigdl, distributed-deep-learning, analytics-zoo, llm
Stars and forks stats for the intel-analytics/BigDL repository. As of 24 Apr 2024, it has 4,426 stars and 1,145 forks.

BigDL-LLM

bigdl-llm is a library for running LLMs (large language models) on Intel XPU (from laptop to GPU to cloud) using INT4 with very low latency (for any PyTorch model). It is built on top of the excellent work of llama.cpp, ggml, gptq, bitsandbytes, qlora, llama-cpp-python, gptq_for_llama, chatglm.cpp, redpajama.cpp, gptneox.cpp, bloomz.cpp, etc.

Latest update: [New] bigdl-llm now supports QLoRA finetuning on Intel GPU; see the example here. bigdl-llm now supports Intel GPU (including Arc, ...
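To make the low-bit loading path concrete, here is a minimal sketch using bigdl-llm's transformers-style API, which quantizes a Hugging Face checkpoint to INT4 at load time. The checkpoint name is illustrative, and the alternative load_in_low_bit values are assumptions drawn from the formats listed in the tagline above.

```python
# Minimal sketch: load a Hugging Face causal LM through bigdl-llm's
# transformers-style API, quantizing the weights to INT4 at load time.
# The checkpoint name below is illustrative, not prescribed by the README.
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"  # any PyTorch/HF checkpoint

# load_in_4bit=True applies INT4 quantization on load; other low-bit
# formats (e.g. load_in_low_bit="nf4" or "sym_int8") would select the
# NF4/INT8 paths mentioned above -- treat these values as assumptions.
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

inputs = tokenizer("What is BigDL-LLM?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

On an Intel GPU, the same quantized model can then be moved to the XPU device (model.to("xpu") after importing intel_extension_for_pytorch), following the pattern in the repository's GPU examples.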
| Repo | Techs | Stars | Stars (weekly) | Forks | Forks (weekly) |
|---|---|---|---|---|---|
| mahmoud/awesome-python-applications | Jupyter Notebook | 14.9k | 0 | 2.6k | 0 |
| mrousavy/react-native-blurhash | Kotlin, Swift, Java | 1.7k | 0 | 57 | 0 |
| schism-dev/schism | LLVM, Fortran, C | 68 | 0 | 74 | 0 |
| pakeke-constructor/PushOps | Lua, Clojure, GLSL | 15 | 0 | 0 | 0 |
| aws/eks-anywhere-build-tooling | Makefile, Shell, Go | 41 | 0 | 79 | 0 |
| GlitchyTurtle/Avatar-Addon | JavaScript | 25 | 0 | 28 | 0 |
| comby-tools/comby | OCaml, Shell, Standard ML | 2.2k | +2 | 57 | 0 |
| frones/ACBr | Pascal, Java, C# | 129 | 0 | 151 | 0 |
| mrash/fwknop | Perl, C, Roff | 966 | 0 | 214 | 0 |
| Plutonomicon/cardano-transaction-lib | PureScript, JavaScript, Nix | 78 | 0 | 49 | 0 |