huggingface/accelerate

🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision

Python, Other
These are the stars and forks stats for the huggingface/accelerate repository. As of 04 May 2024, it has 5822 stars and 631 forks.

Run your *raw* PyTorch training script on any kind of device.

Easy to integrate: 🤗 Accelerate was created for PyTorch users who like to write the training loop of PyTorch models but are reluctant to write and maintain the boilerplate code needed to use multi-GPUs/TPU/fp16. 🤗 Accelerate abstracts exactly and only the boilerplate code related to multi-GPUs/TPU/fp16 and leaves the rest of your code unchanged. Here is an example: import torch import torch.nn.functional as F from datasets import...
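The example above is truncated in this listing. Below is a minimal sketch of the kind of integration the README describes, using the public Accelerate API (`Accelerator()`, `accelerator.prepare()`, `accelerator.backward()`); the toy model and synthetic data are placeholders for illustration, not part of the original example.

```python
import torch
import torch.nn.functional as F
from accelerate import Accelerator

accelerator = Accelerator()  # detects the available hardware (CPU, GPU(s), TPU) and precision settings

# Placeholder model and synthetic data, assumed here only to keep the sketch runnable.
model = torch.nn.Linear(16, 4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
dataset = torch.utils.data.TensorDataset(
    torch.randn(128, 16), torch.randint(0, 4, (128,))
)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Accelerate wraps the model, optimizer, and dataloader for the current setup
# (device placement, distributed training, sharded dataloaders, ...).
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

model.train()
for epoch in range(3):
    for inputs, labels in loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(inputs), labels)
        accelerator.backward(loss)  # replaces loss.backward(); handles fp16 gradient scaling when enabled
        optimizer.step()
```

Because `prepare()` also wraps the dataloader, the batches it yields are already placed on the right device, so the training loop needs no manual `.to(device)` calls.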
| repo | techs | stars | weekly | forks | weekly |
| --- | --- | --- | --- | --- | --- |
| kernc/backtesting.py | Python, JavaScript | 4.1k | 0 | 820 | 0 |
| Drakkar-Software/OctoBot | Python, Other | 2.3k | 0 | 662 | 0 |
| vitapramdevputra/DevFalcons8 | Apex, Other | 12 | 0 | 5 | 0 |
| polybassa/PIC-Bootloader | Assembly, Python | 2 | 0 | 1 | 0 |
| haiwen/seafile | C, Python, M4 | 11k | 0 | 1.5k | 0 |
| VerifyTests/Verify | C#, Other | 2.1k | 0 | 118 | 0 |
| starknet-edu/starknet-messaging-bridge | Cairo, Solidity, JavaScript | 111 | 0 | 52 | 0 |
| babashka/neil | Clojure, Emacs Lisp, Other | 297 | 0 | 23 | 0 |
| WebPlatformForEmbedded/meta-wpe | BitBake, PHP, C++ | 74 | 0 | 77 | 0 |
| eslam3kl/SQLiDetector | BlitzBasic, Python, Dockerfile | 526 | +2 | 104 | 0 |