bigscience-workshop/petals

🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading

Languages: Python, Other
Topics: nlp, bloom, distributed-systems, machine-learning, deep-learning, chatbot, pytorch, falcon, transformer, neural-networks, llama, gpt, pretrained-models, language-models, volunteer-computing, pipeline-parallelism, guanaco, tensor-parallelism, large-language-models, llama2
This page shows stars and forks stats for the bigscience-workshop/petals repository. As of 27 Apr 2024, the repository has 7,655 stars and 372 forks.

Generate text with distributed Llama 2 (70B), Falcon (40B+), BLOOM (176B) (or their derivatives), and fine-tune them for your own tasks, right from your desktop computer or Google Colab:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Choose any model available at https://health.petals.dev
model_name = "petals-team/StableBeluga2"
...
```
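The excerpt above is cut off after the model name. As a rough sketch of how such a session typically continues, the snippet below loads the tokenizer and the distributed model and runs generation; it assumes AutoDistributedModelForCausalLM follows the usual transformers from_pretrained/generate pattern, and the prompt text and max_new_tokens value are illustrative placeholders.

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Choose any model available at https://health.petals.dev
model_name = "petals-team/StableBeluga2"

# The tokenizer is loaded locally; the model's transformer blocks are
# served by volunteer peers in the public swarm.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Distributed inference: hidden states are streamed through remote
# servers while embeddings run on the local machine.
inputs = tokenizer("A quick test prompt:", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```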
| repo | techs | stars | stars (weekly) | forks | forks (weekly) |
| --- | --- | --- | --- | --- | --- |
| EnableSecurity/wafw00f | Python, Other | 4.5k | 0 | 885 | 0 |
| KartoffelToby/better_thermostat | Python, SCSS, Shell | 478 | 0 | 84 | 0 |
| xianyuyimu/TVBOX- | JavaScript, Python | 863 | 0 | 154 | 0 |
| kwindrem/GuiMods | Python, QML, CSS | 140 | 0 | 19 | 0 |
| znah/deepdream_c | SourcePawn, C, Python | 205 | 0 | 24 | 0 |
| loia5tqd001/Dang-Ky-Hoc-Phan-UIT | SCSS, TypeScript, Other | 178 | 0 | 9 | 0 |
| rkalis/ethroulette | Solidity, TypeScript, JavaScript | 47 | 0 | 25 | 0 |
| bsg-external/ee477-hammer-cad | Tcl, Python, Makefile | 0 | 0 | 0 | 0 |
| Ruzim/NSFC-application-template-latex | TeX, Other | 506 | +5 | 116 | 0 |
| stevenpetryk/mafs | TypeScript, CSS, Other | 3.1k | 0 | 73 | 0 |