rmihaylov/falcontune

Tune any FALCON in 4-bit

Languages: Python, Cuda, C++

These are the stars and forks stats for the rmihaylov/falcontune repository. As of 03 May 2024, it has 463 stars and 54 forks.

falcontune: 4-Bit Finetuning of FALCONs on a Consumer GPU

falcontune allows finetuning FALCONs (e.g., falcon-40b-4bit) on as little as one consumer-grade A100 40GB. It features a tiny and easy-to-use codebase. One benefit of being able to finetune larger LLMs on a single GPU is the ability to easily leverage data parallelism for large models. Under the hood, falcontune implements the LoRA algorithm over an LLM compressed using the GPTQ algorithm, which requires implementing a backward pass for the...
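To make the LoRA-over-GPTQ idea concrete, here is a minimal, hypothetical sketch (not falcontune's actual code): a frozen "4-bit" linear layer whose matmul gets a hand-written backward pass that only propagates gradients to the activations, plus trainable LoRA adapters stacked on top. All names (QuantMatMul, QuantLinear4bit, LoRALinear, r, alpha) are illustrative assumptions.

import torch
import torch.nn as nn


class QuantMatMul(torch.autograd.Function):
    """out = x @ dequant(W).T; the frozen quantized weight never receives a gradient."""

    @staticmethod
    def forward(ctx, x, qweight, scale, zero):
        w = (qweight.float() - zero) * scale      # dequantize on the fly
        ctx.save_for_backward(qweight, scale, zero)
        return x @ w.t()

    @staticmethod
    def backward(ctx, grad_out):
        qweight, scale, zero = ctx.saved_tensors
        w = (qweight.float() - zero) * scale      # re-dequantize for the backward pass
        grad_x = grad_out @ w                     # gradient flows only to the input
        return grad_x, None, None, None


class QuantLinear4bit(nn.Module):
    """Toy stand-in for a GPTQ-quantized linear: int4 codes plus per-channel scale/zero."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.register_buffer("qweight", torch.randint(0, 16, (out_features, in_features), dtype=torch.uint8))
        self.register_buffer("scale", torch.full((out_features, 1), 0.01))
        self.register_buffer("zero", torch.full((out_features, 1), 8.0))

    def forward(self, x):
        return QuantMatMul.apply(x, self.qweight, self.scale, self.zero)


class LoRALinear(nn.Module):
    """y = base(x) + (alpha / r) * (x @ A.T) @ B.T, with only A and B trainable."""

    def __init__(self, base, r=8, alpha=16):
        super().__init__()
        out_features, in_features = base.qweight.shape
        self.base = base
        self.lora_a = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, r))  # zero-init keeps the base output unchanged at start
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * ((x @ self.lora_a.t()) @ self.lora_b.t())


if __name__ == "__main__":
    layer = LoRALinear(QuantLinear4bit(64, 64))
    x = torch.randn(4, 64, requires_grad=True)
    layer(x).sum().backward()   # gradients reach x, lora_a and lora_b only; the 4-bit weight stays frozen

Because only the low-rank adapters are trained, the optimizer state stays tiny, which is what lets a 40B model fit into a single 40GB GPU alongside its 4-bit weights.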
repo | techs | stars | weekly | forks | weekly
byzer-org/byzer-lang | Scala, Java, Python | 1.8k | 0 | 552 | 0
anupamkliv/FedERA | Jupyter Notebook, Python, Dockerfile | 104 | 0 | 45 | 0
BatchDrake/ASignInSpace | Jupyter Notebook, C, Python | 23 | 0 | 10 | +1
buzsakilab/buzcode | MATLAB, Jupyter Notebook, C | 108 | 0 | 118 | 0
cachix/nixpkgs-python | Nix, Python | 91 | +3 | 4 | 0
os-autoinst/os-autoinst-distri-opensuse | Perl, Shell, PHP | 75 | -1 | 261 | 0
KDot227/Powershell-Token-Grabber | PowerShell, JavaScript, Python | 90 | 0 | 41 | 0
salesforce/CodeTF | Python | 1.3k | +1 | 87 | 0
mit-han-lab/llm-awq | Python, C++, Cuda | 893 | 0 | 66 | 0
facebookresearch/hiera | Python | 600 | 0 | 23 | 0