Vaibhavs10/fast-whisper-finetuning (Jupyter Notebook)

As of 12 May 2024, this repository has 280 stars and 19 forks.

Faster Whisper Finetuning with LoRA powered by 🤗 PEFT

TL;DR: A one-size-fits-all walkthrough to fine-tune Whisper (large) 5x faster on a consumer GPU with less than 8GB of VRAM, with performance comparable to full fine-tuning. ⚡️ Not convinced? Here are some benchmarks we ran on a free Google Colab T4 GPU! 👇

| Training type    | Trainable params | Memory allocation | Max. batch size |
|------------------|------------------|-------------------|-----------------|
| LoRA             | <1%              | 8GB               | 24              |
| adaLoRA          | <0.9%            | 7.9GB             | 24              |
| Full fine-tuning | 100%             | OOM on T4         | OOM on T4       |

Table of Contents: Why Parameter...
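The sub-1% trainable-parameter figure in the table can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes Whisper-large dimensions (d_model = 1280, 32 encoder and 32 decoder layers, with both self- and cross-attention in the decoder) and a LoRA rank of 32 applied to the `q_proj` and `v_proj` matrices; these figures are illustrative assumptions, not values taken from the notebook.

```python
# Back-of-the-envelope check of the "<1% trainable params" claim for LoRA.
# All figures below are assumptions for illustration, not from the notebook.

d_model = 1280            # assumed hidden size of Whisper large
rank = 32                 # assumed LoRA rank r
attn_blocks = 32 + 32 + 32  # encoder self-attn, decoder self-attn, decoder cross-attn
lora_modules = 2          # LoRA applied to q_proj and v_proj in each block

# A LoRA adapter keeps the d x d weight frozen and trains two small
# matrices instead: A (r x d) and B (d x r), i.e. 2 * r * d parameters.
params_per_module = 2 * rank * d_model
trainable = attn_blocks * lora_modules * params_per_module

total = 1.55e9            # rough total parameter count of Whisper large
print(f"trainable: {trainable:,} (~{100 * trainable / total:.2f}% of the model)")
```

Under these assumptions the adapter adds roughly 16M trainable parameters, about 1% of the full model, which is in the same ballpark as the table; the exact fraction depends on the rank and on which modules LoRA targets.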