openai/CLIP

CLIP (Contrastive Language-Image Pre-Training): predict the most relevant text snippet given an image.

Languages/topics: Jupyter Notebook · Python · machine-learning · deep-learning
Stars and forks stats for the openai/CLIP repository. As of 5 May 2024, this repository has 18,095 stars and 2,566 forks.

CLIP [Blog] [Paper] [Model Card] [Colab]

CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and GPT-3. We found CLIP matches the performance of the original ResNet-50 on ImageNet "zero-shot", without using any of the original 1.28M labeled examples, overcoming...
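The zero-shot step described above amounts to comparing one image embedding against the embeddings of several candidate captions. A minimal pure-Python sketch of that scoring logic (with toy, hypothetical embeddings and a made-up temperature; the real model uses learned encoders and a learned temperature):

```python
import math

def normalize(v):
    """L2-normalize a vector so dot products become cosine similarities."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def softmax(scores):
    """Turn raw similarity scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def zero_shot_scores(image_emb, text_embs, temperature=0.01):
    """Score candidate captions against an image embedding.

    CLIP-style zero-shot classification: cosine similarity between the
    image embedding and each text embedding, scaled by a temperature,
    then softmaxed; the highest-probability caption is the prediction.
    """
    img = normalize(image_emb)
    sims = [sum(a * b for a, b in zip(img, normalize(t))) for t in text_embs]
    return softmax([s / temperature for s in sims])

# Toy 3-dimensional embeddings, purely for illustration.
image = [0.9, 0.1, 0.0]
captions = {
    "a photo of a dog": [1.0, 0.0, 0.1],
    "a photo of a cat": [0.0, 1.0, 0.1],
}
probs = zero_shot_scores(image, list(captions.values()))
best = max(zip(captions, probs), key=lambda p: p[1])[0]
```

In the real repository the embeddings come from `clip.load(...)` model encoders rather than hand-written lists; this sketch only shows why natural-language prompts can act as classifier weights.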
| Repo | Techs | Stars | Weekly | Forks | Weekly |
|---|---|---|---|---|---|
| kwea123/nerf_pl | Jupyter Notebook, Python | 2.4k | 0 | 448 | 0 |
| hugo2046/Quantitative-analysis | Jupyter Notebook, HTML, Python | 1.6k | 0 | 480 | 0 |
| fastai/fastai | Jupyter Notebook, Python | 24.6k | 0 | 7.5k | 0 |
| aamini/introtodeeplearning | Jupyter Notebook, Python | 6.6k | 0 | 3.3k | 0 |
| bmild/nerf | Jupyter Notebook, Python | 8.4k | +36 | 1.2k | +4 |
| AammarTufail/machinelearning_ka_chilla | Jupyter Notebook | 302 | 0 | 172 | 0 |
| saic-mdal/lama | Jupyter Notebook, Python, Other | 6.3k | 0 | 702 | 0 |
| chenyuntc/pytorch-book | Jupyter Notebook, Python, Shell | 11.3k | 0 | 3.7k | 0 |
| coolsnowwolf/luci | Lua, C, HTML | 252 | 0 | 565 | 0 |
| LibreELEC/LibreELEC.tv | Makefile, Shell, Python | 2k | 0 | 1.1k | 0 |