huggingface/text-generation-inference

Large Language Model Text Generation Inference

Languages: Python, Rust, Cuda, Dockerfile, C++, JavaScript, Other
Topics: nlp, bloom, deep-learning, inference, pytorch, falcon, transformer, gpt, starcoder
Stars and forks stats for the huggingface/text-generation-inference repository. As of 30 Apr 2024, this repository has 5,391 stars and 565 forks.

Text Generation Inference

A Rust, Python and gRPC server for text generation inference. Used in production at Hugging Face to power Hugging Chat, the Inference API and Inference Endpoints.

Table of contents:
- Get Started
- API Documentation
- Using a private or gated model
- A note on Shared Memory
- Distributed Tracing
- Local Install
- CUDA Kernels
- Optimized architectures
- Run Falcon
- Run
- Quantization
- Develop
- Testing

Text Generation Inference (TGI) is a toolkit for deploying and serving Large Language Models...
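As a sketch of how the server described above is typically started and queried, the repo's Get Started guide launches TGI via its official Docker image and then hits the HTTP `/generate` endpoint. The model id, port mapping, and prompt below are illustrative assumptions, not the only supported configuration.

```shell
# Launch the TGI server in Docker (requires a GPU host; model id is illustrative)
model=tiiuae/falcon-7b-instruct
volume=$PWD/data   # mounted so model weights are cached between restarts

docker run --gpus all --shm-size 1g -p 8080:80 \
    -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:latest \
    --model-id $model

# From another shell, query the /generate endpoint with a JSON payload
curl 127.0.0.1:8080/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs":"What is deep learning?","parameters":{"max_new_tokens":20}}'
```

The `--shm-size 1g` flag matters because TGI uses shared memory for inter-process communication when sharding a model across GPUs (see the "A note on Shared Memory" section of the README).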