NVIDIA/FasterTransformer

Transformer-related optimization, including BERT, GPT

Languages: C++, Cuda, CMake, Python, Shell, Makefile. Topics: pytorch, transformer, gpt, bert.
This page shows the stars and forks stats for the NVIDIA/FasterTransformer repository. As of 29 Apr 2024, the repository has 4,668 stars and 807 forks.

FasterTransformer

This repository provides a script and recipe to run the highly optimized transformer-based encoder and decoder components, and it is tested and maintained by NVIDIA.

Table of Contents:
- FasterTransformer
- Table Of Contents
- Model overview
- Support matrix
- Advanced
- Global Environment
- Performance
- BERT base performance
- BERT base performances of FasterTransformer new features
- BERT base performance on TensorFlow
- BERT base performance on PyTorch
- Decoding and Decoder performance
- Decoder and Decoding...
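FasterTransformer speeds up exactly this kind of encoder/decoder inference by replacing the framework's stock kernels with fused CUDA implementations. As a rough illustration of the baseline workload (a minimal sketch in plain PyTorch, not FasterTransformer's own API, which lives in the repository's examples), the snippet below times one forward pass through a BERT-base-sized encoder; the dimensions (12 layers, hidden size 768, 12 heads) match the BERT base configuration referenced in the performance sections, while the batch and sequence sizes are hypothetical.

```python
import time
import torch

# BERT-base-sized encoder: 12 layers, hidden size 768, 12 heads, FFN size 3072.
layer = torch.nn.TransformerEncoderLayer(
    d_model=768, nhead=12, dim_feedforward=3072, batch_first=True
)
encoder = torch.nn.TransformerEncoder(layer, num_layers=12).eval()

# Dummy input: 8 sequences of length 128 (illustrative sizes, not from the repo).
x = torch.randn(8, 128, 768)

with torch.no_grad():
    start = time.perf_counter()
    y = encoder(x)
    elapsed = time.perf_counter() - start

print(f"baseline PyTorch encoder forward: {elapsed:.3f}s, output shape {tuple(y.shape)}")
```

The speedups reported in the repository's performance tables come from replacing this multi-kernel baseline path with fused, GPU-optimized kernels.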
| Repo | Techs | Stars | Weekly | Forks | Weekly |
| --- | --- | --- | --- | --- | --- |
| vercel/serve | TypeScript, HTML, JavaScript | 8.9k | 0 | 719 | 0 |
| LeaVerou/color.js | JavaScript, HTML, CSS | 1.6k | 0 | 67 | 0 |
| NVlabs/GCVit | Python, Shell | 371 | 0 | 45 | 0 |
| nsidnev/fastapi-realworld-example-app | Python, Other | 2.6k | 0 | 613 | 0 |
| PINTO0309/PINTO_model_zoo | Python, Shell, Other | 2.9k | 0 | 515 | 0 |
| MeViMo/youbit | Cython, Python | 644 | +2 | 26 | +1 |
| UKPLab/sentence-transformers | Python | 12.1k | 0 | 2.2k | 0 |
| ritchieng/the-incredible-pytorch |  | 9.9k | +11 | 1.9k | +3 |
| pret/pokeyellow | Assembly, Python, Other | 609 | +4 | 203 | +1 |
| evgenyb/aks-workshops | Bicep, C#, Shell | 27 | 0 | 37 | 0 |