Stars and forks statistics for the Guitaricet/peft_pretraining repository. As of 3 May 2024, this repository has 279 stars and 23 forks.
# ReLoRA -- PEFT Pretraining

Official code for *Stack More Layers Differently: High-Rank Training Through Low-Rank Updates* (https://arxiv.org/abs/2307.05695).

## Setup

All requirements are listed in requirements.txt and kept up-to-date.

```sh
cd peft_pretraining
pip install -r requirements.txt
```

## Usage

To train a model using ReLoRA, first perform a warmup through regular training.

Train a language model with PEFT:

```sh
torchrun --nproc-per-node <N_GPUS> torchrun_main.py \
    --model_config configs/llama_250m.json...
```
| repo | techs | stars | stars/week | forks | forks/week |
|---|---|---|---|---|---|
| daenuprobst/molzip | Python, Jupyter Notebook | 49 | 0 | 9 | 0 |
| Muirey03/RemoteLog | Objective-C, Python | 35 | 0 | 5 | 0 |
| PoomSmart/PSHeader | Objective-C, C, Shell | 11 | 0 | 3 | 0 |
| kohenkatz/chamo | OCaml, Shell, C | 0 | 0 | 0 | 0 |
| ingydotnet/lingy | Perl, Clojure, Raku | 39 | 0 | 4 | 0 |
| Codium-ai/pr-agent | Python, Other | 2.3k | 0 | 155 | 0 |
| amnemonic/Quansheng_UV-K5_Firmware | Python, C, Batchfile | 275 | 0 | 58 | 0 |
| jina-ai/vectordb | Python, Shell, Dockerfile | 353 | 0 | 21 | 0 |
| lavishsheth/code | Shell | 13 | 0 | 136 | 0 |
| Shaunwei/RealChar | JavaScript, Python, Swift | 5.3k | 0 | 594 | 0 |